Resources on Minnesota Issues: Same-Sex Marriage in Minnesota
Last reviewed July 2015

This guide is compiled by staff at the Minnesota Legislative Reference Library on a topic of interest to Minnesota legislators. It is designed to provide an introduction to the topic, directing the user to a variety of sources, and is not intended to be exhaustive.

Historical Context

On June 26, 2015, the U.S. Supreme Court ruled in Obergefell v. Hodges that states cannot ban same-sex marriage. Minnesota, however, had already legalized same-sex marriage on August 1, 2013. It was a dramatic turn of events, occurring less than a year after a proposed constitutional amendment to define marriage as between a man and a woman was unexpectedly defeated in the November 2012 election. The summer of 2013 also saw the Supreme Court strike down, by a narrow margin, the Defense of Marriage Act (DOMA), the federal law barring the recognition of same-sex marriage. The ruling caused a variety of issues for states that did not recognize same-sex marriage, due to conflicts between state and federal law relating to tax breaks, pension rights, and other benefits.

The issue of same-sex marriage has a long history in Minnesota, including one of the first state supreme court cases on the subject, in 1971. The Minnesota Supreme Court was one of the first in the nation to rule on the issue of marriage between same-sex couples. The Baker v. Nelson decision (291 Minn. 310, 191 N.W.2d 185) in 1971 held that Minnesota Statutes prohibited marriages between same-sex partners. The case was appealed to the United States Supreme Court, which issued a one-sentence dismissal of the appeal (409 U.S. 810, 34 L Ed 2d 65, 93 S Ct 37; October 10, 1972): "The appeal is dismissed for want of a substantial federal question."

Another significant event was the passage of Laws of Minnesota 1977, chapter 441, sec. 1. It amended Minnesota Statutes, section 517.01, which included the phrase, "Marriage, so far as its validity in law is concerned, is a civil contract." To the end of this sentence the Legislature added the words "between a man and a woman."

Twenty-five years after the Baker decision, the federal Defense of Marriage Act (DOMA) was signed into law on September 21, 1996 (U.S. Code Title 1, chapter 1, section 7 and U.S. Code Title 28, chapter 115, section 1738C). On July 8, 2010, a United States District Court judge in Boston, Massachusetts, ruled in Gill v. Office of Personnel Management that portions of the federal DOMA were unconstitutional. The federal government filed an appeal in January 2011. On February 23, 2011, the White House directed the U.S. Department of Justice to stop defending DOMA in court.

In 1997, the Minnesota Legislature passed its own version of what has been referred to as the Defense of Marriage Act (Laws of Minnesota 1997, chapter 203, article 10). The governor approved it on June 2, 1997. This act clarified that "lawful marriage may be contracted only between persons of the opposite sex" and went on to specifically prohibit "marriage between persons of the same sex" (Minnesota Statutes, chapter 517).

In November 2003, the Supreme Judicial Court of Massachusetts ruled in Goodridge v. Department of Public Health that the denial of civil marriage rights to gays and lesbians was unconstitutional in that state.
Due in large part to this decision, efforts were made in Minnesota to constitutionally ban marriage between same-sex partners (and, in some bills, its legal equivalent) with the introduction of HF 2798/SF 2715 and SF 3003 in 2004. These efforts were unsuccessful.

In May 2010, three same-sex couples from Minnesota filed a lawsuit, Benson v. Alverson, in Hennepin County District Court. They argued that Minnesota's ban on marriage between same-sex partners violates due process, equal protection, and freedom of association rights. On March 7, 2011, Hennepin County District Judge Mary Dufresne rejected their argument and dismissed the lawsuit.

SF 1308 was passed by the Minnesota Legislature in May 2011. The bill proposed an amendment to the Minnesota Constitution stating that marriage is the union of one man and one woman. Although constitutional amendment legislation cannot be vetoed, on May 25, 2011, Governor Dayton issued a "symbolic veto" of Laws of Minnesota 2011, chapter 88 (SF 1308). The amendment was rejected by Minnesota voters in the 2012 election.

On July 11, 2011, the couples involved in the Benson v. Alverson lawsuit filed an appeal with the Minnesota Court of Appeals. On January 23, 2012, the Minnesota Court of Appeals released its unpublished opinion. It affirmed that Minnesota's Defense of Marriage Act does not violate the single subject or freedom of conscience clauses of the Minnesota Constitution. However, it sent the case back to the district court for a more thorough review of the claims related to citizens' constitutional rights to due process, freedom of association, and equal protection. A Petition for Review related to the dismissal of the case against the state was filed with the Minnesota Supreme Court. On April 17, 2012, the Minnesota Supreme Court declined to review the case, allowing it to proceed to trial in Hennepin County. On September 14, 2012, Judge Mary Steenson Dufresne began hearing motions in the case. In late January 2013, an agreement was reached between the county and the attorney for the plaintiffs to put off all action until June 1, after the end of the 2013 Minnesota legislative session.

Several bills were introduced in the 2013 legislative session to address marriage law as the same-sex marriage debate continued in Minnesota. A bill to establish civil unions was introduced in the Minnesota House of Representatives on April 4, 2013. A House bill that would eliminate the word "marriage" altogether from state law and enshrine "civil unions" in its place was introduced on April 25, 2013. HF 1054, with the description "Marriage between two persons provided for, and exemptions and protections based in religious association provided for," is the bill that ultimately passed in the Legislature. The bill was passed by the Minnesota House of Representatives on May 9, 2013, and by the Senate on May 13, 2013. On May 14, 2013, Governor Mark Dayton signed HF 1054 into law, making Minnesota's marriage law gender neutral. The law went into effect August 1, 2013.

On June 26, 2013, the Supreme Court ruled in United States v. Windsor that the Defense of Marriage Act (DOMA) was unconstitutional. By a 5-4 vote, the court struck down the federal law, which barred the federal government from recognizing same-sex marriages legalized by the states, because it denied same-sex couples the "equal liberty" guaranteed by the Fifth Amendment.
The majority opinion was written by Justice Anthony Kennedy and joined by Justices Ruth Bader Ginsburg, Stephen Breyer, Sonia Sotomayor, and Elena Kagan. Chief Justice John Roberts and Justices Antonin Scalia, Clarence Thomas, and Samuel Alito dissented. The ruling means that gay and lesbian couples who are legally married are able to take advantage of federal tax breaks, pension rights, and other benefits that are available to other married couples.

U.S. Same-Sex Marriage Laws

Until the Supreme Court's ruling in June 2015, the issue of extending marriage rights to same-sex couples continued to be debated throughout the United States. Some states allowed same-sex marriage or civil unions, while a number of others banned them. States took different legal approaches. The issue was addressed through both state laws and constitutional amendments, the scope of which varied. Among states that had defined marriage as a union between a man and a woman, some banned same-sex marriage, civil unions, and domestic partnerships, while others banned only same-sex marriage. Among states that had legally recognized same-sex unions, some allowed same-sex marriage, while others allowed civil unions or domestic partnerships. The National Conference of State Legislatures' guide, Same-Sex Marriage Laws, provides a helpful summary of states' actions on this issue.

Legislative History

Here are some of the efforts that have occurred in the Minnesota Legislature to address this issue since 1969:

- SF 178/HF 61 were introduced to define marriage as a civil contract between male and female persons.
- HF 3016/SF 1674 were introduced and stated that Minnesota would not recognize homosexual marriages performed in other states.
- HF 3773 was introduced authorizing same-sex marriage.
- HF 16 and HF 1725 were introduced to create specific statutory prohibitions on same-sex marriage in Minnesota.

Other than an unsuccessful effort to recall SF 11 from committee on the Senate floor on March 26, 1997, none of these bills had hearings. However, language from these bills was amended into another bill with the acceptance of the A-25 amendment, offered in the House Judiciary Committee on March 19, 1997. An attempt to get this language into SF 830 in the Senate Judiciary Committee on April 7, 1997, with the A-17 amendment failed. Another attempt to add the same-sex marriage prohibition language occurred in relation to SF 1908 on the Senate floor on April 17, 1997. That amendment was ruled not germane. The language appears to have been added to SF 1908 in the House Health and Human Services Committee on April 18, 1997, with the adoption of the MB34 amendment. The language was then brought into the conference committee on SF 1908 in the House version of the bill. This was the bill that eventually passed (Laws of Minnesota 1997, chapter 203, article 10). This issue may have been discussed in other meetings as well. The dates listed above are simply a few places to start your research and are by no means an exhaustive list. Complete legislative history research on all of the bills involved is the main way to determine when the issue was discussed elsewhere.

- HF 2798/SF 2715 were introduced to create a constitutional amendment recognizing marriage as between one man and one woman. These bills were heard in committee.
- SF 3003 was introduced to create a constitutional amendment restricting marriage definitions to the judicial branch. This bill had committee hearings.
- HF 6/SF 1691 were introduced to create a constitutional amendment stating that marriage would be the union of one man and one woman only. Hearings were held by both the House and Senate.
- HF 3921/SF 3504, HF 3922/SF 3501, SF 3503, and SF 3563 were introduced to create a constitutional amendment stating that marriage would be the union of one man and one woman.
- SF 1958 was introduced to create a constitutional amendment restricting marriage definitions to the judicial branch.
- SF 120/HF 893 (hearings held), SF 1210/HF 1644 (Senate hearing), and SF 2145 were introduced to make marriage laws gender neutral.
- HF 999 was introduced to create civil union contracts.
- HF 1655/SF 1988 were introduced to create a marriage evaluation study group. A hearing was held in the House.
- HF 1740/SF 1732 were introduced to allow recognition of same-sex marriages performed in other states. Hearings were held by both the House and Senate.
- HF 1824/SF 1976, HF 1870/SF 1975, and HF 1871/SF 1974 were introduced to create a constitutional amendment stating that marriage would be the union of a man and a woman.
- HF 1054/SF 925 were introduced to make marriage laws gender neutral. HF 1054 passed as Laws of Minnesota 2013, chapter 74.
- HF 1687 was introduced to establish civil union contracts.
- HF 1805 was introduced to eliminate the word "marriage" altogether from Minnesota law and enshrine "civil unions" in its place.

Significant Books and Reports

Corvino, John. Debating Same-Sex Marriage. New York: Oxford University Press, 2012. (HQ1033 .C675 2012)

Equality from State to State: Gay, Lesbian, Bisexual and Transgender Americans and State Legislation. Washington, D.C.: Human Rights Campaign Foundation, 2004-2011. (HQ73.3 .U6 E68)

Features of State Same-Sex Marriage Constitutional Amendments. St. Paul, MN: Research Dept., Minnesota House of Representatives, 2005. (HN79 .M6 S56 2005)

Kastanis, Angeliki, and M. V. Lee Badgett. Estimating the Economic Boost of Marriage Equality in Minnesota. Los Angeles, CA: The Williams Institute, 2013. (HQ1034.U5 K37 2013)

Minnesota State Bar Association Unmarried Couples Task Force Report. Minnesota: The Association, 2009. (KFM5502.A3 M56 2009)

Stone, Amy L. Gay Rights at the Ballot Box. Minneapolis: University of Minnesota Press, 2012. (HQ78.U5 S76 2012)

Significant Articles

Buffie, William C. "Public Health Implications of Same-Sex Marriage." American Journal of Public Health, June 2011, p. 986-990.

"The Case for Marriage (What Marriage is For)." National Review, September 20, 2010, p. 16-20.

Collett, Teresa Stanton. "Constitutional Confusion: The Case for the Minnesota Marriage Amendment." William Mitchell Law Review, Number 3, 2007, p. 1029-1057.

Cook, Mike. "To Have and To Hold." Session Weekly, St. Paul: Minnesota House of Representatives Information Office, February 25, 2010, p. 19, 23.

Cox, Barbara J. "'The Tyranny of the Majority is No Myth': Its Dangers for Same-Sex Couples." Hamline Journal of Public Law and Policy, Spring 2013, p. 235-257.

Eckholm, Erik. "The Same-Sex Couple Who Got a Marriage License in 1971." New York Times, May 16, 2015.

"Gay Marriage." CQ Researcher, March 15, 2013, entire issue.

"Gay Marriage: The Defense of Marriage Act on Trial." Supreme Court Debates, May 2013, entire issue.

Goldstein, Charles M., and Micaela Magsamen. "Constitutional Concerns in Defining Marriage in Minnesota." Hennepin Lawyer, March 2012, p. 8-14.

Kuznicki, Jason. "Marriage Against the State: Toward a New View of Civil Marriage." Policy Analysis, January 12, 2011, entire issue.

Lau, Holning, and Charles Q. Strohm.
"The Effects of Legally Recognizing Same-Sex Unions on Health and Well-Being." Law and Inequality, Winter 2011, p. 107-148. Mannix, Andy. "When will gay marriage be legal in Minnesota?" City Pages. January 2, 2013. Pentelovitch, William Z., and Alain M. Baudry. "When Conventional Wisdom is Wrong: Why Recent Proposed Amendments to the Minnesota Constitution Should Not Have Been Put to a Popular Vote." Hamline Journal of Public Law and Policy, Spring 2013, p. 193-233. Povich, Elaine S. "Same-Sex Marriage Means Tax Windfall for States." Stateline, July 21, 2015. Schlichting, JoLynn M. "Note: Minnesota's Proposed Same-Sex Marriage Amendment: a Flamingly Unconstitutional Violation of Full Faith and Credit, Due Process, and Equal Protection." William Mitchell Law Review, Number 4, 2005, p. 1649-1676. Schultz, David. "Liberty v. Elections: Minority Rights and the Failure of Direct Democracy." Hamline Journal of Public Law and Policy, Spring 2013, p. 169-191. Wardle, Lynn D. "The Proposed Minnesota Marriage Amendment in Comparative Constitutional Law: Substance and Procedure." Hamline Journal of Public Law and Policy, Spring 2013, p. 141-168. Significant Internet Resources Minnesota's New Same-Sex Marriage Law - A guide and resources from the Minnesota Department of Human Rights. Includes a link to frequently asked questions. National Conference of State Legislatures' Same-Sex Marriage Laws page. Same-Sex Marriage Guide - A guide created by the Minnesota State Law Library. Same-Sex Marriage in the United States - Data from Wikipedia, the free encyclopedia. Provides a general overview of the subject including history, timeline, arguments for, arguments against, and current status worldwide. UCLA School of Law - Independent research on public policy and laws related to sexual orientation. Includes a number of publications on the economic impacts of extending or denying rights based on sexual orientation. Additional Library Resources For historical information, check the following codes in the Newspaper Clipping File and the Vertical File: C92.2 (Constitution-MN Amendments and Revision), H40 (Homosexuality), M22 (Marriage)
Source: http://www.leg.state.mn.us/lrl/issues/issues.aspx?issue=gay
Karl Marx wrote that the value of an item is determined by how much labor goes into producing it. A diamond is valuable because of all the work that goes into mining it. Therefore, Marx argued, since value is created by the worker, any revenues that the capitalist receives constitute theft from the laborer.

In 1871, Carl Menger, the founder of the Austrian school of economic thought, destroyed Marx's labor theory of value in his book Principles of Economics. The value of an item has no relation to the amount of labor that goes into producing it, Menger showed. Value is like beauty: it lies in the eyes of the beholder. A diamond on top of the ground is just as valuable as one dug from deep within the earth; it all depends on who is doing the valuing. The subjective theory of value came to be one of the bedrock principles of an unhampered market economy. (The theory was independently developed by two other economists at about the same time: the Frenchman Léon Walras, who taught in Switzerland, and the Englishman William Stanley Jevons.)

Different valuations permit people to improve their standard of living through the simple act of exchange. Suppose I have ten apples and you have ten oranges. I value my tenth apple differently than I would if I owned only one apple. The same goes for you and your oranges. We exchange one apple for one orange because we both value what we are gaining more than what we are giving up. Otherwise, we would not make the exchange. Thus, we both improve our well-being by entering into a mutually beneficial transaction with each other.

This was the original idea behind barter. People began specializing in producing goods and services in which they were comparatively more talented. Then they would exchange what they produced with people who were doing the same. A wheat grower would exchange with a cattle producer. A chicken grower would exchange with a cotton farmer.

But barter quickly became complicated. Suppose Farmer A wanted something that Farmer B produced, while Farmer B wanted something that Farmer C produced but not what Farmer A produced. Or suppose a cattle producer wanted a quantity of cotton that he valued at only half a cow. How would the trades be effected?

Gradually, people began turning to a commodity that was readily marketable as well as easily divisible: for example, tobacco. Farmer A would trade his apples for tobacco even though he had no use for tobacco. He knew that he could use the tobacco as a medium of exchange to buy what he wanted from someone else, and he knew that he could divide the tobacco into smaller quantities in order to buy less expensive items.

People ultimately turned to the precious metals as money. Gold and silver were readily accepted in the marketplace, held their value relatively well, and were easily divisible. A trader desiring to purchase 100 bushels of apples would ask the seller what he wanted for them. The seller would respond with a quote of one ounce of gold. The buyer would agree and pull out his bag of gold. The gold would be weighed and the transaction would be consummated. Thus, the price of an item was determined by how much or how little value the owner put on it in relation to other things he wanted. By placing a price of one ounce of gold on 100 bushels of apples, the owner was saying: I value one ounce of gold more than I do these 100 bushels of apples.
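The mutual-gain logic of the apple-and-orange trade described above lends itself to a few lines of arithmetic. Below is a minimal sketch of the idea; the square-root utility schedule and the specific endowments are illustrative assumptions, not anything Menger specified.

```python
# Illustrative sketch: with diminishing marginal utility, a one-for-one
# swap raises both traders' satisfaction. The sqrt schedule is an
# assumption chosen only to model "my tenth apple is worth less to me
# than my first."
import math

def utility(apples, oranges):
    return math.sqrt(apples) + math.sqrt(oranges)

me_before, you_before = utility(10, 0), utility(0, 10)  # endowments
me_after,  you_after  = utility(9, 1),  utility(1, 9)   # after the swap

print(f"me:  {me_before:.2f} -> {me_after:.2f}")   # 3.16 -> 4.00
print(f"you: {you_before:.2f} -> {you_after:.2f}") # 3.16 -> 4.00
```

Both totals rise, which is the whole point: the trade happens precisely because each party values what he gains more than what he gives up.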
Gradually, people found it cumbersome to weigh their gold and silver every time they made a transaction. Moreover, there was always the possibility that scales were false. Thus private minters came into existence. A minter would issue, for example, a one-ounce gold coin. His private minting company would certify the coin as to weight and fineness. He would sell the coin for a little more than one ounce of gold, and people would be willing to pay for it in order to facilitate their trades. If the minter had a good reputation, his coins would circulate freely as money. In his book Man, Economy, and State, Murray Rothbard refers to a private minter of great repute, the Count of Schlick, who resided in Joachim's valley (or Joachimsthal) in Bohemia in the 16th century. His one-ounce silver coins were known as "Joachims-thalers," ultimately shortened to "thalers." This is the origin of the term "dollar."

Unfortunately, kings and queens recognized an opportunity for plunder. They put private minters out of business and monopolized the minting of coins, all for the "public good" of course. One-ounce gold coins, for example, would come into the hands of the king in payment of taxes. The king would clip the coin by shaving gold off the edges. He would then gather up the shavings and make a new one-ounce gold coin that he would use to buy things for himself and his royal ilk. It was the earliest system of inflation, the insidious process by which governments loot the people through the destruction of their medium of exchange, their method of communication in the marketplace.

As people began discovering that the king's one-ounce gold coins contained less than one ounce of gold (because of what had been clipped off), the coins would begin trading at a discount. That is, a merchant would not accept them at face value but rather at, for example, 7/8 ounce of gold. This infuriated the king because it impugned the integrity of government. The king, of course, would blame the debasement of the currency on those "greedy, profit-seeking, bourgeois, capitalist swine" who were raising their prices. He would then decree "legal-tender laws," laws that required people to accept government money at face value, no matter how much loss it caused people in the marketplace.

Gutenberg's invention of the printing press made the monetary situation even worse. The king began printing promissory notes or bills of exchange that he would require people to accept, on pain of fine and imprisonment, in lieu of gold and silver coins. This process of inflation would become the method of choice by which governments in the modern era would plunder and loot the citizenry.

The price system is simply the intricate system by which people communicate in the marketplace all over the world. For example, suppose a hurricane destroys thousands of homes along the Texas Gulf Coast. The demand for plywood immediately skyrockets. But how is this increased demand communicated? Do people have to put ads in the newspaper or on the radio? No. The price system does the communicating for them. The price of plywood immediately soars as a result of the new demand and the limited supply. The skyrocketing price sends a signal not only to buyers along the Texas seacoast but also to sellers all over the world. Buyers are told by the higher price: Conserve! Sellers are told: Start producing more and ship it to Texas! As new plywood starts to arrive in Texas, the increase in supply begins to put downward pressure on the price.
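The plywood story can be turned into a toy feedback loop. In the sketch below, every number (the demand curve, the size of the shock, the response coefficients) is an invented assumption; the only claim is the direction of the effects just described: a demand shock pulls the price up, and the higher price pulls in supply, which pushes the price back down.

```python
# Toy price-signal loop for the plywood example. All coefficients are
# invented for illustration; only the qualitative behavior matters.
base_demand, price, supply = 100.0, 10.0, 100.0

def demand(price, shock):
    # Simple downward-sloping demand, scaled up by the hurricane shock.
    return shock * base_demand * (10.0 / price)

for period in range(8):
    shortage = demand(price, shock=2.0) - supply  # hurricane doubles demand
    price += 0.02 * shortage                      # excess demand bids price up
    supply += 2.0 * max(price - 10.0, 0.0)        # high price attracts shipments
    print(f"period {period}: price={price:5.2f} supply={supply:6.1f}")
```

Run it and the price spikes, supply climbs period after period, and the price then drifts back down. No planner announces anything; the rising price is the announcement.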
But notice that no one, and especially not some politician or bureaucrat, has to start making big, public pronouncements or decrees regarding the need for new plywood. The price system that is central to an unhampered market economy does all the communicating that is necessary. Does a logger thousands of miles away have to read the newspaper every day to keep up with these types of things? No. All that he has to know is this: What's plywood going for today? When the price soars, he doesn't have to know why. The market signal is enough to say to him: Start producing more. He doesn't have to know that the reason the price has gone up is that families in Galveston, Texas, need to rebuild their homes.

Thus, when a politician condemns "profiteering" during a natural disaster, he actually aggravates the disaster. He tampers with the intricate system of communication upon which the market relies. Suppose that the Texas governor decrees: "Anyone who sells plywood at a higher price than existed before the emergency will be prosecuted and punished for 'price-gouging.'" The result will be this: purchasers will fail to conserve, and sellers will fail to increase supplies, because the financial incentive (higher prices) to take these actions will have been eliminated. The current stock of plywood will disappear and will not be replenished. In the name of "doing good," politicians and bureaucrats who interfere with the price system, especially in times of emergency, do an immense amount of harm, especially to the people who are already suffering.

The entrepreneur or capitalist is the person in the unhampered market economy who risks his money in order to capitalize on an opportunity. Rather than condemning him, we should be exalting him! The profit he receives is not theft from the worker, as Marx suggested, but rather his reward for taking the risk that ultimately benefits others. For example, suppose that after the Texas hurricane, an entrepreneur has his agents immediately purchase enormous quantities of ice from ice houses in San Antonio, Austin, and Dallas, and ships the ice by helicopter into Galveston. He invests, let us say, $25,000 in ice, personnel, and equipment. If electricity has been restored by the time the ice arrives, the entrepreneur loses all his money. If electricity has not been restored, he stands to receive, let us say, $100,000, a profit of $75,000. Is this exorbitant? Excessive? Obscene? How can a profit ever be any of these things? The capitalist takes the risk with his own money. Each of the purchases and sales of ice simply reflects the respective valuations of the parties in the light of the circumstances then existing, i.e., disaster conditions. Which is better: no ice or expensive ice?

Profits are simply part of the intricate system of communication in the unhampered market economy. They are a way of saying to a capitalist: "Good job. You produced the goods and services that other people wanted and were willing to pay for under the conditions existing at that time." Losses are the opposite. They say to the capitalist: "You did a bad job in satisfying consumers."
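The ice venture is an expected-value problem. The dollar figures below come from the article's own hypothetical; the 50 percent chance that electricity is still out when the helicopters land is an assumed number, since the article gives none.

```python
# Expected profit of the article's hypothetical ice venture.
cost = 25_000                  # ice, personnel, helicopters (from the article)
revenue_if_power_out = 100_000
p_power_still_out = 0.5        # assumed probability, for illustration

expected_profit = (p_power_still_out * (revenue_if_power_out - cost)
                   + (1 - p_power_still_out) * (-cost))
print(f"expected profit: ${expected_profit:,.0f}")  # $25,000 on a $25,000 stake
```

Seen this way, the $75,000 payoff in the good case is not a windfall but the other side of a coin flip on which the entrepreneur can lose everything.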
The fact is that any government regulation that interferes with mutually beneficial trades at any time is interfering with people's ability to improve their well-being. People enter into trades to improve their standard of living. When politicians and bureaucrats interfere with these peaceful transactions, they frustrate the ability of people to seek their own happiness in their own way.

Most important, government officials should never be permitted the power to interfere with prices and money. People's well-being, sometimes even their survival, depends on the unhampered market economy's simple but intricate system of communication.
Source: http://fff.org/explore-freedom/article/vision-free-society-part-3/
Tell what the ceremony means

Can you hear the sound of many waters (Revelation 14:2)? At Passover, God is calling for our collective testimony of His redemption in our lives.

"When you enter the land that the LORD will give you as he promised, observe this ceremony. And when your children ask you, 'What does this ceremony mean to you?' then tell them, 'It is the Passover sacrifice to the LORD, who passed over the houses of the Israelites in Egypt and spared our homes when he struck down the Egyptians.'" (New International Version)

The historic event of the first Passover in Egypt is a Day that points to Jesus Christ, who IS the PASSOVER. The Christian haggadah is the grateful voice and testimony of the Body of Christ declaring that "The Lord is Good and His Mercy (Hesed, Lovingkindness) Endures Forever..."

1 Cor 5:7-8: "For Christ, our Passover lamb, has been sacrificed. Therefore let us keep the Festival, not with the old yeast, the yeast of malice and wickedness, but with bread without yeast, the bread of sincerity and truth." (New International Version)

When we gather on the day of Passover, we are to tell OUR OWN STORIES. This is the Christian Haggadah; it is a collective event that is to be done in Unity. Haggadah means "Telling". A Christian Haggadah (hey-gah-dah) retells the story of God's redemption of Israel from Egypt and also our redemption through His grace by the death, burial and resurrection of our Messiah, Jesus. This re-telling is not birthed of tradition or of the mind of man; rather, it is the instruction of the Lord Himself to his people.

Ex 12:14-15: "This is a day you are to commemorate; for the generations to come you shall celebrate it as a festival to the LORD--a lasting ordinance. For seven days you are to eat bread made without yeast." (New International Version)

Ex 12:17-18: "Celebrate the Feast of Unleavened Bread, because it was on this very day that I brought your divisions out of Egypt. Celebrate this day as a lasting ordinance for the generations to come. In the first month you are to eat bread made without yeast, from the evening of the fourteenth day until the evening of the twenty-first day." (New International Version)

Passover is one of three yearly gatherings that the Lord has given to His people to celebrate and remember who we are and who God is. We remember what he has done for us, and what He has promised to still complete. These Feasts, also called Sabbaths, are specific places in time that he has prepared for ALL his people.

The knowledge and practice of these Feasts was hidden away from a large portion of the Church for almost 2,000 years. It began with the accusation that Jews were God-haters and that anything associated with their traditions should be completely rejected and abandoned. In the fury of hatred, blame, and religious self-righteousness, the teaching and observation of the Lord's intimate seasons were lost, but only for a season. In recent times, the Lord, blessed be His Name, has seen fit to re-ignite and reveal the Truth and power that is hidden in these times.

All of the Feasts speak of our Messiah, Jesus. They describe and reveal the operation of the Kingdom of God. As Heb 8:5 says of the Tabernacle, they are "a copy and shadow of what is in heaven." So while we may not be able to see clearly into the realm of Heaven, we can see the shadow that it casts on the earth through the Feasts. The Feasts are not heaven, but they are quite accurate in their details.
The observance of these times is by no means to be construed as some alternate method of salvation. Eph 2:7-10 says, "For it is by grace you have been saved, through faith-and this not from yourselves, it is the gift of God - not by works, so that no one can boast. For we are God's workmanship, created in Christ Jesus to do good works, which God prepared in advance for us to do." (New International Version)

"'These are the LORD's appointed feasts, the sacred assemblies you are to proclaim at their appointed times: The LORD's Passover begins at twilight on the fourteenth day of the first month. On the fifteenth day of that month the LORD's Feast of Unleavened Bread begins; for seven days you must eat bread made without yeast. On the first day hold a sacred assembly and do no regular work." (New International Version)

THE SEDER (say-dur)

Traditionally, Passover is celebrated with a special service and meal called a Seder (say-dur). "Seder" in Hebrew means "order": it is an ordered service. But don't let the word "ordered" scare you. The purpose of the Seder is to celebrate the freedom that the Lord has given us from the bondage and power of Sin. The power and religion of ancient Egypt is a picture of the power and bondage of sin in our lives.

Specific foods are set on the table, and the retelling of the Exodus events begins. Diners take turns reading a scripted account, and particular foods are eaten at appropriate times. Bitter herbs are eaten to remind us of the bitterness of slavery to sin. We eat greens dipped in salt water to remind us of the tears we shed in the land of our bondage. We eat bread that is made with no leaven to remind us that there was no time for raising bread when the Lord came to deliver Israel from Egypt. Also, the unleavened bread symbolizes Jesus the Messiah, who is the bread from heaven. He is pure and wholesome with no leaven, no sin; He is the absolutely spotless Lamb of God given for the taking away of our sin nature.

At the end of the meal, there is a "treasure" hunt for children, where they seek the hidden piece of matza called the "Afikomen", complete with a prize! Afikomen means "that which comes after." There is a picture of our Messiah Jesus, his death, burial, resurrection, and return in all of this. The child who finds the hidden bread exchanges it for a great prize, usually a fancy treat. When we find the "Hidden Manna" from heaven, we find access to the strong bonds of God's Covenant of Love to us through Jesus Christ, who came to redeem all of us from the curse. What a prize indeed! It is well worth our seeking with all diligence and persistence. When you seek me with your whole heart, you will find me.

The scriptures tell us to remove the leaven from our houses in preparation for this Feast. We are to eat bread made without leaven for seven days beginning on the 14th day of Abib, the very day that Israel was delivered from Egypt. We are told to tell this story to our children:

"When you enter the land that the LORD will give you as he promised, observe this ceremony. And when your children ask you, 'What does this ceremony mean to you?' then tell them, 'It is the Passover sacrifice to the LORD, who passed over the houses of the Israelites in Egypt and spared our homes when he struck down the Egyptians.'" (New International Version)

Some may find themselves becoming hungry as they anticipate the "real" part of the Seder meal, but that is part of the lesson. We must become hungry for the Lord's provision, and then He will fill us.
Passover is the oldest of all Jewish holidays. It marks the beginning of the religious calendar (Exodus 12:1-2). The original account of the first Passover is found in Exodus 12, 13, and 14.

Many "Tellers" - One Story

Again, the word "Haggadah" means "telling", and there are many ways to tell the same story. Imagine the time before written books were common, when the history of a people was passed from generation to generation by the telling of stories and events of their past. This, I believe, is part of the beauty and strength of the Passover celebration: each storyteller brings his or her own personality and experience into the telling. We are not talking about fabricating lies and exaggeration, but we are talking about the uniqueness of people, their gifting, strengths, sense of humor, and individual experiences concerning deliverance from sin and entering into covenant with God.

This collective sound of millions of people telling our stories of redemption and salvation together on the same day, the 14th of Abib, the first day of the Feast of Unleavened Bread, is one of the sounds of "many waters". When we come together at the Feasts of the Lord, we come into unity WITH Him. It is then no longer just a collection of individual testimonies of lives redeemed at various times, but the stories are all united into one GREAT FLOOD of testimony that has the power to change the World!

In Exodus, the Lord tells Israel to take the blood and put it on their doorposts. The blood of the spotless lamb "taken" for each house spares them from death. The wages of sin is death. The blood pays for sin. But the Lord tells them also to TELL THE STORY, each year, on the same day at the same time, and the Blood coupled with the telling of this story OVERCOMES ALL THE POWER of the enemy:

"And they have overcome (conquered) him [satan, the devil] by means of the blood of the Lamb and by the utterance of their testimony, for they did not love and cling to life even when faced with death [holding their lives cheap till they had to die for their witnessing]. Therefore be glad (exult), O heavens and you that dwell in them! But woe to you, O earth and sea, for the devil has come down to you in fierce anger (fury), because he knows that he has [only] a short time [left]! [Isaiah 44:23; 49:13.]" (Amplified Bible)

Can you hear it? Can you hear the rumble and shaking of this Voice of Many Waters? It is the sound of our testimonies together, unified in the very Sabbath of the Lord God who is Lord of the Armies of Heaven. And just as at the first Passover event in history, He has come down with a mighty hand and an outstretched arm to deliver His people OUT OF THE HANDS OF OUR ENEMIES in a single night! Now our voices can join with His in agreement and in Truth, and no power of the gods of Egypt or any other lesser thing can stand against this flood of testimony.

There is liberty to write or tell your Christian Haggadah of Passover in your own style and format, but we must remember WHEN. The Feasts of Passover and Unleavened Bread are gifts to us. We can enter into the Lord's Appointed Days and join ourselves to His purposes for us in them. The Lord is never late, and He always comes at just the right time. So let's join with Him in the Feast and celebrate our FREEDOM IN CHRIST!
Source: http://www.your-study-bible-online.org/christian-haggadah.html
Few psychologists would propose that expectancy effects will cause errors in interpretation in every experiment. The role of cognitive bias in science, and in inference generally, is quite complicated. There is no typical "blind experiment model." The physicists who announced the discovery of the Higgs boson, or those who believed they detected the marks of gravitational waves in the cosmic background radiation, had no such model. In much of science, experimenters know what they are looking for; fortunately, some results are not ambiguous and not subject to subtle misinterpretation. There is the joke that if your experiment needs statistics, you should do a better experiment.

That said, when interpretations are more malleable, as is often the case in many forensic disciplines, various methods are available to minimize the chance of this source of error. One is "sequential unmasking," which protects forensic analysts from unconscious (or conscious) bias that could lead them to misconstrue their data when exposed to information that they do not need to know. There is rarely, if ever, an excuse for not using methods like these. But their absence does not make a solid latent fingerprint match or a matching pair of clean, single-source electropherograms, for example, into small-scale versions of Inception.

When and where was the start of forensic science? Ancient China? Renaissance Europe? Should the U.S. Department of Justice have been supervising the Los Angeles Police Department when it founded the first crime laboratory in the United States in 1923? The DOJ has had enough trouble with the FBI laboratory, whose blunders led to reports from the DOJ's Office of the Inspector General and which has turned to the National Research Council for advice on more than one occasion. The 2009 report of a committee of the National Research Council had much better ideas for improving the practice of forensic science in the United States. Its recommendation of a new agency entirely outside of the Department of Justice for setting standards and funding research, however, gained little political traction. The current National Commission on Forensic Science is a distorted, toothless, and temporary version of the idea.

The committee did not undertake a "scientific" study. It engaged in a policy-oriented review of the state of forensic science without applying any particularly scientific methods. (This is not a criticism of the committee. NRC committees generally collect and review relevant literature and views rather than undertake scientific research of their own.) This committee's quest did not begin in 2009. That is when it ended. Congress voted to fund the study in 2005.

The report is far more nuanced (some might say conflicted) than this. Here are some excerpts:

"The chemical foundations for the analysis of controlled substances are sound, and there exists an adequate understanding of the uncertainties and potential errors. SWGDRUG has established a fairly complete set of recommended practices." (P. 135.) Hardly a ringing endorsement of all police lab techniques, but neither is the report an outright rejection of all or even most techniques now in use.

"Historically, friction ridge analysis has served as a valuable tool, both to identify the guilty and to exclude the innocent. Because of the amount of detail available in friction ridges, it seems plausible that a careful comparison of two impressions can accurately discern whether or not they had a common source.
Although there is limited information about the accuracy and reliability of friction ridge analyses, claims that these analyses have zero error rates are not scientifically plausible." (P. 142.)

"Toolmark and firearms analysis suffers from the same limitations discussed above for impression evidence. Because not enough is known about the variabilities among individual tools and guns, we are not able to specify how many points of similarity are necessary for a given level of confidence in the result. Sufficient studies have not been done to understand the reliability and repeatability of the methods. The committee agrees that class characteristics are helpful in narrowing the pool of tools that may have left a distinctive mark. Individual patterns from manufacture or from wear might, in some cases, be distinctive enough to suggest one particular source, but additional studies should be performed to make the process of individualization more precise and repeatable." (P. 154.)

"Forensic hair examiners generally recognize that various physical characteristics of hairs can be identified and are sufficiently different among individuals that they can be useful in including, or excluding, certain persons from the pool of possible sources of the hair. The results of analyses from hair comparisons typically are accepted as class associations; that is, a conclusion of a 'match' means only that the hair could have come from any person whose hair exhibited—within some levels of measurement uncertainties—the same microscopic characteristics, but it cannot uniquely identify one person. However, this information might be sufficiently useful to 'narrow the pool' by excluding certain persons as sources of the hair." (P. 160.)

"The scientific basis for handwriting comparisons needs to be strengthened. Recent studies have increased our understanding of the individuality and consistency of handwriting and computer studies and suggest that there may be a scientific basis for handwriting comparison, at least in the absence of intentional obfuscation or forgery. Although there has been only limited research to quantify the reliability and replicability of the practices used by trained document examiners, the committee agrees that there may be some value in handwriting analysis."

"Analysis of inks and paper, being based on well-understood chemistry, presumably rests on a firmer scientific foundation. However, the committee did not receive input on these fairly specialized methods and cannot offer a definitive view regarding the soundness of these methods or of their execution in practice." (Pp. 166-67.)

"As is the case with fiber evidence, analysis of paints and coatings is based on a solid foundation of chemistry to enable class identification." (P. 170.)

"The scientific foundations exist to support the analysis of explosions, because such analysis is based primarily on well-established chemistry." (P. 172.)

"Despite the inherent weaknesses involved in bite mark comparison, it is reasonable to assume that the process can sometimes reliably exclude suspects. Although the methods of collection of bite mark evidence are relatively noncontroversial, there is considerable dispute about the value and reliability of the collected data for interpretation." (P. 176.)

"Scientific studies support some aspects of bloodstain pattern analysis. One can tell, for example, if the blood spattered quickly or slowly, but some experts extrapolate far beyond what can be supported." (P. 178.)
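The committee's observation that zero error rates are "not scientifically plausible" has a simple quantitative consequence. Here is a hedged sketch with invented numbers (the report supplies no such figures): any nonzero false-positive rate caps the likelihood ratio a reported match can carry, and what the match proves still depends on the prior probability.

```python
# How a nonzero false-positive rate limits the weight of a reported
# match. Both rates below are invented for illustration only.
sensitivity = 0.99            # P(match reported | same source)
false_positive_rate = 0.001   # P(match reported | different source)

likelihood_ratio = sensitivity / false_positive_rate  # 990, not infinity

def posterior(prior):
    # Bayes' rule in odds form.
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

for prior in (0.5, 0.01, 0.0001):
    print(f"prior {prior:>6}: posterior P(same source) = {posterior(prior):.4f}")
```

With a genuinely zero error rate the ratio would be infinite and the prior would not matter; that is exactly the claim the committee rejects.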
According to an NRC press release issued in February 2009, "[t]he report offers no judgment about past convictions or pending cases, and it offers no view as to whether the courts should reassess cases that already have been tried." Such language may be a compromise among the disparate committee members. But to derive the conclusion that death row is "filled with innocents" even partly from the actual contents of the report, one would have to consider the deficiencies identified in the system, the extent to which these deficiencies generated the evidence used in capital cases, and the other evidence in those cases. Other research is far more helpful in evaluating the prevalence of false convictions.

As the 2009 NRC Committee explained, there is no single "field of forensics." Rather, "Wide variability exists across forensic science disciplines with regard to techniques, methodologies, reliability, error rates, reporting, underlying research, general acceptability, and the educational background of its practitioners. Some of the forensic science disciplines are laboratory based (e.g., nuclear and mitochondrial DNA analysis, toxicology, and drug analysis); others are based on expert interpretation of observed patterns (e.g., fingerprints, writing samples, toolmarks, bite marks, and specimens such as fibers, hair, and fire debris). Some methods result in class evidence and some in the identification of a specific individual—with the associated uncertainties. The level of scientific development and evaluation varies substantially among the forensic science disciplines." (P. 182.)

The courts have been lax in responding to overblown testimony in some fields and to those techniques that lack proof of their fundamental precepts. But blaming the persistence of the admissibility of the most dubious forensic disciplines on Daubert is strange. Daubert's standard did not spring into existence fully formed, like Athena from the brow of Zeus. A similar standard was in place in a number of jurisdictions. As The New Wigmore: A Treatise on Evidence shows, the Court borrowed from these cases. A smaller point to note is that there were not one, but two partial dissenters (who concurred in the unanimous judgment). Chief Justice Rehnquist and Justice Stevens objected to the majority's proffering "general observations" about scientific validity, and they did not complain about the ones Mr. Stern points to as an explanation for the persistence of questionable forensic "science."

That's Chief Judge Alex Kozinski of the Ninth Circuit Court of Appeals writing on remand in Daubert itself. Judge Kozinski could not possibly be objecting to the Supreme Court's opinion on the ground that "widespread acceptance within a relevant scientific community" is an impenetrable standard. Quite the opposite: he applied that very standard in his previous opinion in the case and was bemoaning what he called the "brave new world" that the Court ushered in as it vacated his opinion. A recent survey of judges found that 96% of the (disappointingly small) fraction responding deemed general scientific acceptance to be helpful -- more than any other commonly used factor in judging the validity of scientific evidence.

Refusal to question scientific evidence is not what most lawyers call the CSI effect. In any event, jury research does not support the idea that jurors inevitably reject attacks on scientific testimony or that the testimony of the first witness in a figurative white coat is unshakeable.
This option, which applies to a limited set of cases (and hence is no general solution), is not foreclosed by District Attorney for the Third Judicial District v. Osborne, 129 S.Ct. 2308 (2009). If there is a minimally plausible claim of actual innocence after conviction, let's allow such testing by statute. Of course, it would be better to thoroughly test potential DNA evidence (when it is relevant) before trial -- something that Osborne's trial counsel declined to request, fearing that it would only strengthen the prosecution's case.

Again, as the NRC committee stressed, "forensic science" is not a single, uniform discipline. Since the report, funding has increased, some guidelines have been revised, and significant research has appeared in some fields. Still, the pace resembles that of global warming. Reform is coming, notwithstanding resistance described in earlier years on this blog.

As for the "investigation by Chemical & Engineering News," the latest I saw from that publication was an article in a May 12, 2014 issue with a map showing selected instances of examiner misconduct dating back to 1993 and indicating that only five states require laboratory accreditation. No effort was made to ascertain how many labs are still operating without accreditation. With no apparent literature review, the article simply asserted that

[I]n the years since, little has been done to shore up the discipline's scientific base or to make sure that its methods don't result in wrongful convictions. Quality standards for forensic laboratories remain inconsistent. And funding to implement improvements is scarce. [¶] While politicians and government workers debate changes that could help, fraudsters like forensic chemist Annie Dookhan keep operating in the system. No reform could stop a criminal intent on doing wrong, but a better system might have shown warning signs sooner. And it likely would have prevented some of the larger, systemic problems at the Massachusetts forensics lab where Dookhan worked.

I must be missing the real investigation that the C&E News writers conducted.
Source: http://for-sci-law-now.blogspot.com/2014/06/more-on-mistakes-in-forensic-science.html
Etymologically speaking, the word Alappuzha is derived from two words, Ala and Puzha. According to Dr. Gundert, the German lexicographer, Ala means broad and Puzha means river. It is a landmark between the broad Arabian Sea and a network of rivers flowing into it.

In the first decade of the 20th century, the then Viceroy of the Indian Empire, Lord Curzon, made a visit to Alleppey, now Alappuzha. Fascinated by the scenic beauty of the place, in joy and amazement he said, "Here nature has spent upon the land her richest bounties." In his exhilaration, it is said, he exclaimed, "Alleppey, the Venice of the East." Thus the sobriquet found its place on the world tourism map. The presence of a port and a pier, criss-cross roads and numerous bridges across them, and a long and unbroken sea coast might have motivated him to make this comparison.

Of course, Alleppey has a wonderful past. Though the present town owes its existence to the sagacious Diwan Rajakesavadas in the second half of the 18th century, the district of Alappuzha figures in classical literature. History says it had trade relations with ancient Greece and Rome in B.C. times and in the Middle Ages. For example, Purakkad, an ancient port near Alappuzha, was Barace for them. Different religious communities such as the Parsur, Gujaratis, Mamens and Anglo Indians, to mention a few, commingled together and settled here. They built churches, mosques and temples of architectural grandeur. Such sites are worth seeing. The whole of Kuttanad, the Netherlands of the East, presents another picturesque sight.

Area: 1,414 Sq. Km, which constitutes 3.64% of the total state area.
Population: 2,105,349, which is 6.61% of the state population.
Population density: 1,492 persons per Sq. Km (against 1,415 in 1991); retains the first position in the state.
Sex ratio (No. of females per 1,000 males): 1,079, earning 4th position (5th position in 1991 with 1,051).
Literacy rate: 93.66%, which earns it 3rd position in the state (state average: 90.92%).
Female literacy rate: 91.14%, which again earns 3rd position in the state (state average: 87.86%).
Location: North Latitudes 9°05' and 9°54'; East Longitudes 76°17'30" and 76°40'.

Boundaries:
North - Kochi and Kanayannur Taluks of Ernakulam district
East - Vaikom, Kottayam and Changanassery Taluks of Kottayam district, and Thiruvalla, Kozhencherry and Adoor Taluks of Pathanamthitta district
South - Kunnathur and Karunagappally Taluks of Kollam district
West - Lakshadweep (Arabian) Sea

The district is a sandy strip of land intercepted by lagoons, rivers and canals. There are neither mountains nor hills in the district except some scattered hillocks lying between Bharanikkavu and Chengannur blocks in the eastern portion of the district. Cherthala, Ambalappuzha, Kuttanad and Karthikappally lie fully in the lowland region. There is no forest area in this district.

The climate is moist and hot on the coast and slightly cool and dry in the interior of the district. The average monthly temperature is 25°C. The district also gets the benefit of the two monsoons, as in the case of other parts of the state.

Hot season - March to May
South-west monsoon (Edavappathi) - June to September
North-east monsoon (Thulavarsham) - October to November
Dry weather - December to February

The district has 8 reporting rain gauge stations, at Arookutty, Cherthala, Alappuzha, Ambalapuzha, Harippad, Kayamkulam, Mavelikkara and Chengannur, as recorded in 1989. The average rainfall in the district is 2,763 mm.
The geological formations of the district are classified as:

- a belt of crystalline rocks of the Archean group
- a belt of residual laterite
- a narrow belt of the Warkalli beds of the Tertiary group
- a westernmost coastal belt of recent deposits

The most prominent crystalline rock type is charnockite. Residual laterite is the resultant product of the in-situ alteration of the crystalline rocks. The Warkalli beds consist of a succession of variegated clays and sandstone. The coastal belt consists of recent sediments, such as alluvial, marine and lacustrine deposits.

RIVERS AND LAKES

The district has a network of rivers, canals and backwaters. Manimala, Pamba and Achankovil are the major rivers.

Manimala
Originating from the Mothavara hills in Kottayam district, the river enters the district at Thalavadi village in Kuttanad taluk, passes through Edathua and Champakulam villages, and joins the Pamba river at Muttar. The villages of Manimala, Mallappally, Kaviyoor, Kalloppara, Thalavadi, Kozhimukku and Champakkulam lie in the course of the river Manimala. It has a length of 91.73 Km and a drainage area of 802.90 Sq. Km.

Pamba
Pamba, the third longest river in Kerala, is formed by several streams originating from the Peerumedu plateau in Idukki district. It enters Alappuzha district at Chengannur and flows through Pandanad, Veeyapuram, Thakazhy and Champakulam over a distance of about 177.08 Km, and plunges into Vembanad lake through several branches such as Pallathuruthi Ar, Nedumudi Ar and Muttar. The river has a length of 117 Km and is navigable to a length of 73 Km. The catchment area of this river is 1,987.17 Sq. Km. The main tributaries of the river are Pambayar, Kakki Ar, Arudai Ar, Kakkad Ar and Kallar.

Achankovil
This river, often known as the Kulallada river, originates from Pasukida mettu, Ramakkal Theri and Rishimalai of Kollam district and enters the district at Venmony. It has a catchment area of 1,155.14 Sq. Km and a navigable length of 32.19 Km. It passes through Cheriyanad, Puliyoor and Chengannur villages, enters Mavelikkara taluk at Chennithala, flows through Thriperumthura and Pallippad villages, and joins the Pamba at Veeyapuram.

Vembanad Lake
The Vembanad lake, the most important of the west coast canal system, has a length of 84 Km and an average breadth of 3.1 Km. It covers an area of 204 Sq. Km, stretching from Alappuzha to Kochi. It borders the Cherthala, Ambalapuzha and Kuttanad Taluks of Alappuzha district; the Kottayam, Vaikom and Changanasserry Taluks of Kottayam district; and the Kochi and Kanayannur Taluks of Ernakulam district. The Pamba, Achankovil, Manimala, Meenachil and Muvattupuzha rivers discharge into this lake. Pathiramanal, often called the mysterious sand of midnight, with its coconut palms and luxuriant vegetation, is situated in the centre of this lake. Perumbalam and Pallippuram are the other islands in this lake. The Thannermukkom regulator, constructed across Vembanad lake between Thannermukkom and Vechur, is intended to prevent tidal action and the intrusion of saline water into the lake. It is the largest mud regulator in India.

Kayamkulam Lake
Stretching between Panmana and Karthikappally, Kayamkulam lake is a shallow lake which has an outlet to the sea at the Kayamkulam barrage. It has an area of 59.57 Sq. Km, a length of 30.5 Km and an average breadth of 2.4 Km. It connects to Ashtamudi lake by the Chavara-Panmana canal.

Canals
Alappuzha has a network of canals included in the west coast canal system, which are used for navigation. The important canals are the Vadai canal, the Commercial canal and the link canals between these two.
Apart from these, there are many inland canals which are used mainly for passenger navigation and commercial purposes. The lakes are used for inland water transport of passengers and cargo, and inland fisheries have also flourished in these regions.

Alappuzha has a flat, unbroken sea coast of 82 km, which is 13.9% of the total coastline of the state. An interesting phenomenon of this coast during the month of June is the periodic shifting of the mud bank, popularly known as 'Chakara', within a range of 25 km along the Alappuzha-Purakkad coast, caused by hydraulic pressure when the level of the backwaters rises during the south-west monsoon.

Places of interest

1. Pathiramanal - According to mythology, a young Brahmin dived into Vembanad lake to perform his evening ablutions, and the water made way for land to rise from below, creating the enchanting island of Pathiramanal ("sands of midnight"). This little island on the backwaters is a favourite haunt of hundreds of rare migratory birds from different parts of the world. The island lies between Thaneermukkom and Kumarakom and is accessible only by boat.

2. R-Block - These regions are wonders of the indigenous agricultural engineering know-how of Kerala and remind the visitor of the famous dikes of Holland. Extensive areas of land have been reclaimed from the backwaters and are protected by dikes built all around; here cultivation and habitation are made possible four to ten feet below sea level. A leisurely cruise along the canals that surround these kayals is a memorable experience.

3. Karumadikuttan - Many fascinating legends are associated with this 11th-century statue of Lord Buddha.

4. Kumarakodi - 20 km south of Alappuzha. Mahakavi Kumaranasan, one of the greatest poets of modern Kerala, is laid to rest here. He was the P. B. Shelley of Malayalam literature; Asan brought great changes in literature and sounded the clarion call for changing society as well.

5. Saradha Mandiram, Mavelikkara - A. R. Rajaraja Varma was a great poet and grammarian, and Malayalam literature is much indebted to him. Saradha Mandiram was built by him as his residence; it has since been acquired by the State Government and is kept as his memorial.

6. Krishnapuram Palace - Built by Marthandavarma, this palace at Karthikappally in Kayamkulam is famous for its mural depicting the story of Gajendramoksham. Dating back to the 18th century, this exquisite piece of art is one of the largest murals in Kerala. The palace museum houses antique sculptures, paintings and bronzes.

7. Alappuzha Beach - This is one of the most popular picnic spots in Alappuzha. The pier, which extends into the sea here, is over 137 years old. Entertainment facilities at the Vijaya Beach Park add to the attractions of the beach, and there is also an old lighthouse that fascinates visitors.
The Making of Football's Yellow First-and-Ten Line

What would televised football be without the yellow "First and Ten" line? This graphic enhancement gives the viewer at home an immediate visual sense of where the offense has to take the ball to make another first down. For the spectators in the football stadium watching the game, there is no yellow line on the field. But no matter where it is placed, for the television audience the yellow line appears to be an integral part of the playing field, like any of the white yardage lines. When players fall on the yellow line, their bodies cover it.

Today, football fans at home rarely question how this yellow line appears on their television screens. To a whole generation of young television viewers, the yellow First-and-Ten line is as natural to the game as the green color of the football field. In 1997, when ESPN first aired the First-and-Ten yellow line, amazement and wonder were the universal reactions to this new broadcasting technology. Sports journalists were baffled: how was this line put on the field? All sorts of fanciful speculation filled the air after the first broadcast. "Is there a guy running out there with a vacuum and chalk?" "Could it be done with laser beams?" The actual details of the innovation were as incredible as the speculation. Sophisticated modeling based on precise measurements, ingenious real-time image processing, and a truckload of workstations made the First-and-Ten yellow line look as natural to the game as the turf itself. Behind it all was a small start-up company whose technical team was composed of an aeronautical engineer, a mathematician, a broadcast engineer, a software engineer and a couple of electrical engineers. The new company was Sportvision, and its president was an IEEE member.

The story of the First-and-Ten line in NFL football starts with an NHL hockey story. In 1996, a team of engineers at Rupert Murdoch's NewsCorp pioneered a system that tracked a hockey puck and highlighted the puck's motion, in real time, during the live broadcast of NHL games. Officially, the puck-tracking system was called FoxTrax, but the popular name became "Glow Puck." First aired in the 1996 NHL All-Star Game, FoxTrax was developed for the Fox Sports Network, which was part of the NewsCorp media empire. The story of FoxTrax can be found in the September 2009 issue of Today's Engineer.

Exhilarated by their success, the engineering team behind the Glow Puck wanted to develop other sports broadcasting applications for Fox Sports. But with a downturn in NewsCorp's business, such opportunities were not forthcoming. So, with Murdoch's blessing, the engineering team behind the Glow Puck left NewsCorp to start Sportvision, with Stan Honey as its president. In return for exclusive licensing rights to prior patents, NewsCorp obtained an equity share in Sportvision. Once NewsCorp was on board, finding other investors proved relatively easy.

Sportvision now faced the challenge of coming up with a first product. The first effort was Air FX. The idea was to enhance the "color commentary" in NBA basketball by providing a graphical system that tracked, measured and displayed the players' jumping abilities. But Air FX did not have the network appeal that Sportvision had hoped for. While work on Air FX was still ongoing, another product idea started to crystallize: making the first-down line in football visible to television viewers.
Adding graphical enhancement to video streams was not new to football. John Madden, a well-known football commentator, had popularized the use of the "telestrator" in televised games. The telestrator allowed one to superimpose free-hand sketches on a video image. While useful in analyzing replays, telestrator sketches could not be used during the actual televised play because they obscured parts of the underlying video image.

The Sportvision idea for a first-down line was radically different from telestrator technology. Could a first-and-ten line be superimposed continuously on the broadcast image so that, wherever the line was placed, it would look like a natural part of the field itself? The goal was to enhance the viewing experience without distracting the viewers' attention from the flow of the game.

The idea was very attractive, but there was one very imposing technical obstacle: could the "keying" problem be solved? "Keying," or more correctly "chroma keying," was, and still is, the standard technique used in broadcast applications like weather forecasts, where the meteorologist stands in front of a large map. In reality, the meteorologist is in the studio standing in front of a blue (sometimes green) screen. The map is superimposed on the blue screen by a simple rule: replace all the blue with the corresponding map image, but do not replace any color that is not blue, i.e., the meteorologist. For this technique to work well, the meteorologist's wardrobe has to be carefully chosen so as not to be confused with the blue color of the background screen. Proper studio lighting also guarantees the effectiveness of chroma keying.

Keying thus works well in the very controlled environment of the studio. Keying for a First-and-Ten line would be in the uncontrollable environment of a large football field, with considerable variability in the background colors: wet and dry grass, natural and artificial turf, mud and dirt, variable lighting, sunlight and shade, and even snow. The great color variety of team uniforms also raised issues: could they be easily picked out from the various colors in the field environment? There were serious doubts within Sportvision. Could the processing power available at the time sample the pixels fast enough to figure out when to draw, or not draw, the First-and-Ten line?

J.R. Gloudemans provided the proof of concept that the keying problem could be solved. Gloudemans, who had worked for Shoreline Studios, joined Sportvision very soon after its creation. While working on the Air FX project, he heard that others within the company were brainstorming the First-and-Ten line idea. The chroma-keying challenge immediately piqued Gloudemans's interest. He convinced his immediate superior, Marv White, to let him work on the problem quietly, under the company radar. Neither he nor White told the others in the company that he was experimenting with chroma keying rather than working on Air FX. He took a video clip from a football game and started to examine the keying problem in various color spaces. Gloudemans discovered that YCbCr was the best color space for keying in the First-and-Ten line. After a couple of weeks of experimenting, Gloudemans had shown that the keying problem could be solved with the available processing technology. There would still be a lot of refinement work to do on the keying technology, but the door was now open to develop the First-and-Ten line.
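To make the keying rule concrete, here is a minimal sketch of the per-pixel decision described above, in the spirit of Gloudemans's experiments. This is emphatically not Sportvision's code: the chroma windows, the NumPy structure and the single rectangular threshold are illustrative assumptions (the production system calibrated its keyer per stadium, lighting and weather, and used far more refined classification).

```python
import numpy as np

# Assumed chroma window for "paintable" field colors (grass, dirt).
CB_RANGE = (100.0, 130.0)   # blue-difference bounds, illustrative only
CR_RANGE = (105.0, 135.0)   # red-difference bounds, illustrative only
LINE_YCBCR = np.array([205.0, 12.0, 163.0])   # a yellow, expressed in YCbCr

def key_in_line(frame, line_mask, opacity=0.6):
    """Blend the first-down line into a frame, but only over field pixels.

    frame:     HxWx3 float array in YCbCr (broadcast video is natively YCbCr)
    line_mask: HxW boolean array, True where the projected line falls
    """
    cb, cr = frame[..., 1], frame[..., 2]
    is_field = ((CB_RANGE[0] <= cb) & (cb <= CB_RANGE[1]) &
                (CR_RANGE[0] <= cr) & (cr <= CR_RANGE[1]))
    paint = line_mask & is_field       # uniforms and skin fail the chroma test
    out = frame.copy()
    out[paint] = (1.0 - opacity) * frame[paint] + opacity * LINE_YCBCR
    return out
```

Because the test looks only at the chroma components (Cb, Cr) and ignores luma (Y), grass tends to stay inside the window across sun and shade far better than it would in RGB, while jerseys, skin and the ball fall outside it. Pixels that fail the test are left untouched, which is why players appear to stand on top of the line.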
Sportvision took the idea to Jed Drake at ESPN. As head of Event Production, Drake is responsible for all events that ESPN televises outside the studio, which is essentially all the live sporting events. The idea intrigued Drake. Before ESPN could enter into any agreement with Sportvision, the NFL had to approve the idea. After seeing a demo tape, NFL executives enthusiastically endorsed it, but with one caveat: the First-and-Ten line could not be shown during replays. NFL officials did not want this technology being used in any way to second-guess the referees. With the NFL on board, ESPN negotiated a one-year exclusive with Sportvision for the 1997 football season. For Drake it was a big gamble. He could not be sure how the viewing audience and sports writers would react. Would they accuse ESPN of ruining the game?

After ESPN signed the contract, Sportvision had to design and produce a system reliable enough to be inserted into an ESPN live broadcast. A lot of hard engineering work was still needed. Once the proof of concept was shown, Gloudemans redesigned the keying component from the ground up to handle every possible lighting and color environment on the field in real time. The keying component of the system ensured that the color of the First-and-Ten line appeared to be part of the field. But sophisticated modeling and calibration of the cameras and playing field were still needed to make the line appear to be a natural part of the field.

Camera lenses introduce subtle distortion toward the edges of the field of view, so the white yardage lines are not really straight. If the First-and-Ten line graphic were imposed on the video image as an exactly straight line, the human eye would catch the discrepancy with the white yardage lines. So the First-and-Ten line had to be distorted to match the white lines. The adjustment became even more complex because a television camera lens's distortion changes as a function of zoom and focus: as it zooms and focuses, the lens elements move. A TV lens will go from 10 percent pincushion to 10 percent barrel and all the way back to 10 percent pincushion. The lens had to be retrofitted with sensors to measure the movement of the lens elements, and tables were created to convert sensor data to line-distortion data.

The shape of the field's surface also had to be accurately modeled in order to bend the First-and-Ten line to compensate for the surface shape. In rainy areas, fields had various dome shapes to help run-off and drainage; in dry areas, the surfaces of fields were flatter. All these surfaces had to be precisely surveyed by laser techniques.

The insertion of the First-and-Ten line into the video image also had to take into account the continually changing perspective of each camera. The exact position of each camera in three-dimensional space, which was fixed, had to be measured. Sensors were also put on each camera to measure pan, tilt and zoom. All the cameras had to be synchronized to ensure that the computer knew, at any given instant, which specific camera was on-air before it inserted the First-and-Ten line. All this processing had to happen in real time. The Sportvision team had to push the available computational power to the limit: a separate truck, filled with workstations and image-processing hardware, accompanied the regular broadcast truck.
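The distortion matching described above can be pictured with a standard first-order radial model. This is a sketch only: the real system drove its correction from lens sensors and lookup tables rather than one analytic formula, and the `k1_from_zoom` curve below is an invented stand-in for those tables.

```python
import numpy as np

def distort_points(points, k1, center=(960.0, 540.0)):
    """Bend ideal image points with first-order radial distortion.

    points: Nx2 array of (x, y) pixel coordinates on the ideal straight line
    k1:     radial coefficient; in this code +k1 pushes points outward
            (barrel-like) and -k1 pulls them inward (pincushion-like)
    """
    p = np.asarray(points, dtype=float) - center   # relative to optical center
    r2 = np.sum(p * p, axis=1, keepdims=True)
    return center + p * (1.0 + k1 * r2)

def k1_from_zoom(zoom):
    """Hypothetical stand-in for the sensor-driven tables in the text.

    A TV lens can swing from pincushion through barrel and back as its
    elements move, so the coefficient is made to change sign with zoom.
    """
    return 2e-8 * np.sin(2.0 * np.pi * zoom)   # assumed curve, illustration only

# A perfectly straight vertical first-down line at x = 700...
line = np.stack([np.full(50, 700.0), np.linspace(100.0, 980.0, 50)], axis=1)
# ...bent to match what the camera actually sees at this zoom setting.
bent = distort_points(line, k1_from_zoom(0.35))
```

The same bending would be applied on top of the field-surface model, so that the synthetic line curves with the crown of the field as well as with the glass.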
Jed Drake was responsible for the actual color of the line. Originally, for technical simplicity, Sportvision was using reddish-orange for the First-and-Ten line, but Drake wasn't happy with it. One day, he sat down with Gloudemans and they started to experiment with different colors. Drake would say "a bit more towards gold, a little bit more towards green," and Gloudemans would make the changes. Finally Drake zeroed in on the color that he liked: the yellow that is still used today.

The First-and-Ten line went live for the fall 1997 football season. The following year, ESPN won an Emmy in Sports Broadcasting for the First-and-Ten.

More technical information on the First-and-Ten line can be found in "Method and Apparatus for Adding a Graphic Indication of a First Down to a Live Video of a Football Game," U.S. Patent #6,141,060, filed on 5 March 1999 and issued on 31 October 2000.

- YCbCr is a family of color spaces used as part of the color image pipeline in video. Y is the luma (brightness) component; Cb and Cr are the blue-difference and red-difference chroma components. YCbCr is a way of encoding RGB information.
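To make that closing note concrete: the conversion from RGB to YCbCr is a fixed linear transform. The sketch below uses the common ITU-R BT.601 full-range form (an assumption; broadcast equipment uses closely related variants) and shows what it yields for a grass-green pixel and a yardage-line-white pixel.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr, all values on a 0-255 scale."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

print(rgb_to_ycbcr(60, 140, 70))     # grass green: chroma well off neutral
print(rgb_to_ycbcr(235, 235, 235))   # line white:  Cb = Cr = exactly 128
```

The white pixel lands at neutral chroma (Cb = Cr = 128) no matter how bright it is: color identity lives almost entirely in (Cb, Cr), while lighting changes mostly move Y. That separation is what made YCbCr a better space than RGB for deciding which pixels were "field."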
When choosing a car, one of the most important choices is picking the type of transmission the vehicle will have. In the past, this meant choosing one of two types: a manual transmission, also called a "standard" or a "stick shift," or an automatic. Times have changed. Now the choices have multiplied as new technology seeps into every corner of our cars. Add in electric vehicles and their specialized transmissions, and things can get downright complicated.

Before we wade into what kind of transmission does what and how, here's a quick overview of what a "transmission" actually does, for readers who might not have grown up with Porsche and Ferrari posters on their walls. The transmission in a car (or any motorized wheeled vehicle) is a system of gears that literally "transmits" the power generated by the engine to the wheels that drive the vehicle forward. Figuratively, and often physically, located between the engine and the wheels, it's a sort of middleman in the process that makes a car move, and it's a complicated piece of machinery. Usually.

Let's start with the basics:

Manual Transmission: Also known as a "standard" transmission or "stick shift," as noted above. This type requires you to push down on a clutch pedal and then change gears by hand with a shifter (the "stick shift") in the center of the car. Most modern cars with a manual transmission have five speeds, but some now have six, not counting reverse. In the early days of automobiles, all cars had manual transmissions. Overall, the design is fairly simple and efficient, and it gives drivers very direct control over the car, something driving enthusiasts like. On the downside, it takes a hand off the steering wheel to operate, and using one in stop-and-go traffic can be a mini-workout. It also takes skill and practice to master a manual transmission proficiently.

Automatic Transmission: Also known as an "auto." First developed in the 1920s and refined ever since, and most cars sold today come with one. It's easy to see why: there's really no beating the convenience. Just put it in Drive, put your foot on the gas and off you go, while the transmission picks the right gear for you no matter what the situation. But automatic transmissions are extremely complicated (albeit proven) and can cost you some miles per gallon due to their extra weight and slightly lower efficiency compared to a manual. In the past, most automatic transmissions had three gears (plus reverse), and if yours had four gears, you had a real hot rod (or a luxury barge). Now automatic transmissions have up to eight gears, either to placate performance drivers or to give cars optimal gearing for fuel efficiency, or both.

With those two types out of the way, let's move on to some sub-genres and new technology:

The Automatic Transmission with Manual Controls: As computers continue their infiltration into every system in a car, the automatic transmission has been given new abilities. As mentioned before, modern automatics now have up to eight gears. For the best of both worlds, carmakers have been giving drivers the option to control the transmission manually, using a special "shifting" position on the gear selector or two hand-operated "paddles" located behind the steering wheel. Paddle shifters are more common on sports cars, but they are popping up in more vehicles.
Drivers have always been able to "control" an automatic to some extent by using the gear selector, but that really wasn't the intended use, and shifting an old-school automatic by hand could lead to the transmission failing if done improperly (or even when done properly, but too often). Now computer controls have largely taken care of that shortcoming, and as the automatic-with-manual-control type of transmission becomes more efficient, smarter and less expensive, it could replace the manual transmission as a choice. But we'll see.

The Continuously Variable Transmission, or CVT: If you've ever ridden a small modern scooter, then you are familiar with a CVT. It's a very simple design, but one that works well under most conditions. Essentially, a CVT is comprised of two pulleys connected by a belt. But these are special pulleys, since they can change their size and thus change the "gearing" of the vehicle. There is no set number of "gears" in a CVT, because it can choose the exact gear ratio along a "variable" continuum between its lowest and highest ratios. So it can easily creep around a parking lot or blast down the freeway.

Driving with a CVT is much like using an automatic, except there are no "gear changes." Instead, the engine just revs smoothly up and down. Mash down the throttle and the car's engine will jump to a higher RPM and then just stay there while the car goes faster and faster as the two pulleys in the transmission change their sizes, as the sketch after this section illustrates. It can take some getting used to, and because of the somewhat odd driving characteristics of a CVT, some carmakers offer it with paddle shifters that mimic an automatic/manual transmission.

The CVT has been showing up in more cars recently. Its advantage is the simplicity of the system, and it can also be quite efficient if you don't have a lead foot. If you do like to drive fast or want a high-performance car, this is an option you might want to pass on, as it's not really designed for that kind of driving. It would seem that a CVT would be ideal for most drivers, but it has taken time to mature the technology, especially the strength of the belt inside the transmission, from what's required in a little scooter to the huge loads it is under in a large passenger vehicle. But technology marches on, and the CVT is becoming more common. It may even be a good fit for electric vehicles.
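Here is that sketch: a toy controller that makes the "no fixed gears" idea concrete. The CVT picks whatever pulley ratio keeps the engine at one efficient speed as the car goes faster, clamping only at the hardware's ratio limits. Every number here is invented for illustration; real controllers juggle efficiency, emissions and driver feel.

```python
# Assumed hardware limits and targets; illustrative numbers only.
RATIO_MIN, RATIO_MAX = 0.5, 2.5     # engine revs per driveshaft rev
TARGET_ENGINE_RPM = 2200.0          # assumed peak-efficiency engine speed
FINAL_DRIVE = 4.0                   # fixed reduction after the CVT
WHEEL_RPM_PER_MPH = 13.4            # depends on tire size; assumed here

def cvt_ratio(speed_mph):
    """Choose the continuously variable ratio for a given road speed."""
    wheel_rpm = max(speed_mph, 0.1) * WHEEL_RPM_PER_MPH
    ideal = TARGET_ENGINE_RPM / (wheel_rpm * FINAL_DRIVE)
    return min(max(ideal, RATIO_MIN), RATIO_MAX)   # clamp to pulley travel

for mph in (5, 15, 30, 60):
    r = cvt_ratio(mph)
    engine_rpm = mph * WHEEL_RPM_PER_MPH * FINAL_DRIVE * r
    print(f"{mph:>3} mph -> ratio {r:.2f}, engine at {engine_rpm:5.0f} rpm")
```

Between the clamp points the engine simply holds at 2,200 rpm while the car accelerates, which is exactly the flat drone described above; only at the ends of the pulleys' travel does engine speed start tracking road speed the way a fixed gear would.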
The Dual-Clutch Transmission (DCT): Widely known as a DCT, or as a PDK transmission thanks to Porsche and others who use it in high-end sports cars and race cars, the DCT is like a high-tech mashup of an automatic, a manual transmission and a computer. As its name implies, the system uses two clutches to change gears. The transmission can be used in a fully automatic mode, with a computer determining gear shifts, or as a manual, with the driver using paddles or buttons to change gears as they see fit. Additionally, the computer controls and shift points can typically be adjusted by the driver, or even by the computer itself, so the transmission shifts in accordance with your personal driving style, such as whether the car is being driven aggressively or you're just going for a leisurely cruise. A DCT can change gears with lightning speed, usually in a fraction of a second, and do so very smoothly thanks to the computerized controls, which makes it great for race and high-performance cars.

While DCT transmissions are typically found in very expensive sports cars, they can be made compact enough that Honda also lists a DCT as an option on several motorcycles. Riders can use it like a full automatic or instantly change gears with two buttons on a small pod on the left handlebar; no manual clutch lever (or pedal, in cars) is required. A DCT can be fairly small and relatively lightweight and still incorporate a large number of gears. Since the mechanism is computer-controlled, it's nearly impossible to damage it with missed shifts, so with proper care it should last a long time. If you think you might be taking your new car to a racetrack for some "track days" or to a high-performance driving school, see if a DCT is an option. It may cost extra, but it is also a very trick piece of gear.

Electric Vehicle Transmissions

Electric vehicles, or EVs, place demands on a transmission that gasoline and diesel engines do not, and as such they have their own types of transmissions or use modified versions of those found in gas-powered vehicles.

Single-Speed Transmission: A common transmission at the dawn of the automobile and motorcycle eras was the simple connection of the engine to the wheels, either directly or nearly so, using a "one-speed" or single-speed transmission. At the time, automobile and motorcycle pioneers were more obsessed with getting their engines to run right; the transmission was usually cobbled together in such a way as to just get the wheels turning at all. But as the engines evolved, transmissions also became more complex. They started with one gear (often a belt attached to a reduction gear and then to one of the wheels), and after losing a few muddy races, more gears were added to increase speed.

And so it goes today. EVs are essentially at that same early point in development, but with the hindsight of over a century of transmission refinement to draw on. Due to the nature of an electric motor, which can supply enormous power (more than most gas engines) from essentially a standstill, very often more than one gear is not required. This keeps things very simple for the carmakers and also for the drivers. The current poster child for electric cars, the Tesla Model S, for all its high-tech wonderment, has just one gear. But if you've ever driven one, you'd understand from the neck-snapping acceleration at nearly any speed that that's really all it needs.

Conversely, the makers of the Brammo Empulse R electric motorcycle opted to change the development of their bike to include a six-speed manual transmission, as their research indicated that's what potential buyers with experience on gas-powered bikes wanted. However, you can also ride the bike around town in just first or second gear and never even touch the clutch, so you get the best of both worlds there too.

Other EV makers have experimented with multi-gear transmissions, some unsuccessfully, and no one is quite sure what the future holds due to the different nature of power delivery from an electric motor. But as in the early days of gas-powered cars, you can bet there is much innovation and experimentation to come. Will a CVT be the perfect transmission for a purely electric car? Or a DCT? Just the one gear most use now? Or some mix of the current technologies? Only time, research and development will tell.

As new transmission technologies evolve, so will this article. Check back to get the latest updates.
According to Veli-Pekka Lehtola, author of The Sámi People: Traditions in Transition, "From the beginning, the main aim of young educated Sámi was to build a bridge between tradition and modern times; between old lifestyles and the influences of modern society" (9). Young Sámi men and women have achieved varying degrees of success in their quest toward this goal. Both inside and outside forces throughout the years have influenced the identity that Sámi youth have created for themselves. As a result of the changing circumstances and attitudes constituting these forces, the amount of Sáminess deemed optimal by Sámi youth has accordingly shifted.

The history of the Sámi is very important to understanding the choices in identity that its young people have made. The Sámi are an indigenous people who traditionally inhabited Sápmi (for over 2,500 years), which today includes parts of northern Sweden, Norway, Finland and the Kola Peninsula of Russia. The Sámi now live in Norway, Sweden, Finland, Russia, and their ancestral land. In the past few centuries they have experienced progressively higher levels of colonization and assimilation (cultural and religious) enforced upon them by the dominant populations in the aforementioned countries. This began to happen first in the southern and coastal parts of traditional Sámi territory; over time it spread to the more isolated highland areas. The composition of the population in the southern and coastal areas started to shift, and the Sámi became a minority. Those in the highlands still retained a majority, but the effects of other cultures were increasingly felt (Simms).

To understand the changing identity of Sámi youth we must not only understand the history of their people; we must also have a general grasp of the construction of identity. According to psychologists: "The development of a sense of self—'who am I'—is a major, if not the most important, developmental task of adolescence…the self-concept is an objective statement of a teen's personal traits" (MassGeneral). Essentially, a person's adolescent and teen years are the formative period in which he or she lays the permanent groundwork for the type of person they will be. However, they are not the only ones with influence over this construction of identity. One's peers can be very influential in this development of self, and most especially in the development of another essential psychological element, self-esteem. Though a young person will make an assessment of who they are, "…this self-image coupled with what others think of him/her will form the self-esteem. Any ridicule or putting down will damage the self-image and therefore lead to low self-esteem" (BharatMatrimony).

Those who attend school (as opposed to home schooling) are surrounded by others their age. At a time when the feeling of acceptance is highly sought after and valued, that which a person's peers deem important or "cool" can easily influence said person's outlook. A person's actions, what they choose to constitute themselves, and how they choose to represent themselves to others are very dependent on the opinions of others. The experiences of adolescent minorities have been complicated and oftentimes emotionally difficult due to prejudice and other factors. When it comes to young Sámi men and women, it is no different.
Before the encroachment upon their land and traditions by more dominant cultures, the Sámi lived in small siidas (villages) and had numerous methods of livelihood, such as fishing and reindeer herding. They also had their own non-Christian religion, which included polytheism and shamanism. This way of life made for a people who were intertwined and in tune with both nature and each other. As the Sámi were pushed to enter modern society, their traditional way of life changed dramatically. Though in the past the Sámi had absorbed influences relatively easily, the level to which the dominant culture infiltrated meant that they now had to change the core of themselves. They were forced to assimilate and adapt to new forms of technology, religion, education, and much more.

For over half a century (1898-1959), schools in Norway were not allowed to teach in Sámi languages. This caused many Sámi students to lose their language, and even those who did not completely lose it were less likely to teach it to their own children later in life (Lehtola 60). The loss has functioned as an upside-down triangle, with the spread of the language becoming ever narrower generation after generation. This phenomenon is very similar to the English Only Movement, which held influence over America's educational institutions until recent decades. Under it, students were not allowed to speak any language other than English and would be punished if they did so (Crawford). However, this attempt to assimilate young people into the dominantly Anglo culture of America did have one large difference from the assimilation of the Sámi: America did not separate the youth from their families, which frequently act as a reminder of cultural heritage through language, religion, food, traditions, and many other things.

With the end of World War II and the establishment of central schools during the 1960s, such a separation did occur, and the effects felt by Sámi youth were frequently negative. According to Lehtola, "Living in an environment of a foreign language and foreign opinions caused feelings of insecurity, stemming from being different and being harassed for it: intimidation was accompanied by shame for oneself and one's background" (62). Instead of a family there to provide a sense of one's heritage, the only family the children attending these schools had were their teachers and peers. While it was certainly difficult enough to be different among children of a dissimilar background already in possession of a sense of what was and was not "normal," learning in a curriculum system designed to exclude the teaching of Sámi history and language made it even more so.

Often people feel the tendency to reject the unfamiliar, and the Sámi population, as an overwhelming minority in these Nordic countries, was seen as unfamiliar and subjected to this rejection. Recognizing the disdain cast upon the Sámi, many of its people, most especially the youth, began to reject their Sáminess in favor of acceptance from their peers and the dominant society in general (Lehtola 62). Teacher Iisko Sara, addressing the attitudes of Sámi youth in Finland shortly after the end of World War II, said that in the end the Sámi wanted to become more Finnish than the Finns themselves. She said they believed that in order to succeed, they and their children would have to adopt the Finnish language and value system, effectively changing their identities.
Considering the difficulties created for these children by their peers and by the governments of the dominant cultures (in the form of the curriculum they supported and taught), it does not seem particularly surprising that the Sámi youth of these times grew up with the idea that it would be better not to teach their sons and daughters the Sámi languages and, to a degree, the Sámi culture.

The sons and daughters of those raised around the World War II period ended up having a rather different experience from their parents. Unique events occurring during these young people's lives, the lack of Sámi education received from their parents and schools, and the attitude of others toward the Sámi coalesced into the driving force behind a change in the collective identity of Sámi youth. These men and women grew up in the 1970s, at a point when Sámi political and cultural rights were being re-examined and heavily fought for (this was termed the Sámi Movement). As Harald Eidheim writes in his article "Ethno-Political Development among the Sámi after World War II: The Invention of Selfhood," the Sámi began to vehemently argue that they had been "dispossessed of the possibility to develop as a people" and "denied access to cultural competency" (34). Eidheim believes this was a testament to the feelings of inferiority deeply ingrained in most Sámi as "a painful complex of shame, self-contempt and unreleased aggression" (34).

School curriculum was eventually changed through their efforts, and the Sámi language began to be taught. However, it was the Alta dam controversy of the late 1970s, which threatened the land rights and livelihood of the Sámi, that seemed to have the largest effect on how the Sámi viewed themselves. This is especially true of the young Sámi, who were very active in the cause, joining organizations and protests fighting for awareness and rights. The Sámi bonded as they rallied against a common enemy. Other visible manifestations of the changing attitudes of young Sámi toward their heritage could be seen in the increased wearing of cultural emblems, listening to folk music, doing traditional Sámi handicrafts, and various other things. According to Eidheim (32):

This awakening, which, on the level of the individual also signified a new experience of ethnic pride, fellowship, and spirit, also implied the dissemination throughout the population of a clearer perception of the Sámi's relationship to the majority population and to the state. That is, a new perception of the Sámi inter-cultural relationships, which had been traditionally characterized by powerlessness, cultural stigma, and feelings of inferiority and ethno-political apathy, was forged.

The Sámi were transitioning from being seen as a people who needed to be assimilated into the dominant culture to being seen as the equal of any other people. One might venture to say that their self-esteem grew proportionately, as the desire to hide their heritage was replaced by an intense pride. This was a time for the reclamation of traditional Sámi identity, especially by the young Sámi. Their generation, though still surrounded by other more dominant cultures, had a unique set of experiences that caused them to buck the mainstream and crave an identity with more Sáminess (Stordahl 144-5). Once again, the identity of Sámi youth has since undergone another reinvention, due to different conditions and events than in years past.
As opposed to their parents, the Sámi youth of today do not, in general, display the same investment in their culture. While others might mistake this for a rejection of Sáminess, it should instead be read as an attempt by these young men and women to carve out their own identity. In a sense, today's youth are combining the two preceding generations by accepting both the dominant culture and the Sámi culture (Stordahl 143). According to Vigdis Stordahl in her article "Sámi Generations," "…[today's Sámi youth] have received training in Sámi language and culture throughout their school years…" and therefore "…have no lost Sámi past to avenge or mourn" (147). Unlike their grandparents, who were forced to give up their culture, and their parents, who set about vigorously reclaiming it, Sámi youth are not actively and vehemently accepting or rejecting cultures. Rather, these young men and women recognize themselves as part of the indigenous community but accept the influence of other cultures in proximity to them. Many young Sámi today see the ways and actions of their parents as "…overdo[ing] it" (147). They desire neither an identity solely assimilated to the majority culture nor a solely Sámi identity. They crave a more harmonious identity that includes both cultures and sees them as equals in all ways (Eidheim 31).

Still, the parents of today's young Sámi are often confused by what they view as apathy in their sons and daughters. When comparing the youth of today to their activist parents, this judgment does not seem such a leap. A certain amount of rebelliousness, common at this stage of life, might also be a factor in this generation's outlook. Those Sámi who grew up in the 1970s are now in charge of the Sámi governmental system and hold other positions with the power to influence how Sámi history and culture are taught to newer generations. This young generation of Sámi does not want its relationship to its heritage determined by others. Therefore, what may seem to their parents to be a rejection of Sámi culture is instead their sons' and daughters' rejection of being defined by others (Stordahl 148-9).

However, despite decades' worth of progress in rights and attitudes toward the Sámi, today's youth still feel pressure to assimilate and face torment from others. Recent reports have revealed that Sámi young people have been committing suicide at disturbing rates. In regard to this trend, the head of the Sámi youth council, Paulus Kuoljok, said, "We Sámi often face stereotypes and have to defend ourselves all the time," adding, "There are few employees at my own work place at [state mining company] LKAB with Sámi background. I often hear things like 'damn Lapp' and that we Sámi have things so good because we can fish and hunt where we want to and we always get welfare payments." In addition, he expressed the difficulty he and his fellow Sámi youth face from the added pressure of a perceived duty to protect traditional Sámi culture (The Local). Though we should not give a blanket description of the current Sámi youth experience, such manifestations of the racial discrimination that still exists among the more dominant Nordic cultures are certainly capable of influencing the thoughts and actions of these impressionable young people.
In an interview with author Charles Peterson, Dean of University Faculty at North Park University, regarding the Sámi in the media, Piera Balto offers a partial solution to the prejudice Sámi youth encounter:

Although it is true that the Sámi population is not being served by TV2 and P4, a much bigger problem is that other Norwegians are not learning about Sámi culture…the result is a continued ignorance of Sámi culture, prejudice, and the subsequent discrimination toward Sámi and other minorities (Peterson).

Though further education of Norwegians, and of the dominant Nordic peoples in general, about Sámi culture will likely not completely alleviate the problems Sámi youth are facing, it is certainly a good start. Hopefully, with time, the situation will improve.

Though each person is different and a blanket diagnosis of the nature of the identities chosen by recent Sámi generations is not possible, the trends are clearly visible. The issues of self-identity and self-esteem are delicate subjects, especially in regard to young adults in this formative period of their lives. The varied amounts of pressure, prejudice, and expectations put upon those belonging to a minority group can often complicate matters. However, just as Sámi youth have moved from rejecting their own culture, to rejecting the dominant culture, to accommodating both, a more accepting climate seems to be emerging and becoming more widespread over time. One can only hope that this trend will continue.

Ávdnos (Eleana Díaz)

1. Crawford, James. "Anatomy of the English Only Movement." 23 Oct. 2008.
2. Between Us: The Counseling Helpline. "Teenage Self-Esteem." BharatMatrimony. 23
3. Eidheim, Harald. "Ethno-Political Development among the Sámi after World War II: The Invention of Selfhood." Sámi Culture in a New Era: The Norwegian Sámi Experience. Ed. Harald Gaski. Davvi Girji, 1998.
4. Kandes, David. "Sami Youth Shine Light on Suicide Problems." The Local, 6 Oct. 2008. 20 Oct. 2008.
5. MassGeneral Hospital for Children. "Self-Esteem." Adolescent Health. 23 Oct.
6. Peterson, Charles. "Sami Culture and Media." Scandinavian Studies 75.2 (Summer 2003): 293(8). Academic OneFile. Gale. University of Texas at Austin. 15 Oct. 2008.
7. Simms, Doug. "The Early Period of Sámi History, from the Beginnings to the 16th Century." Sami Culture. University of Texas. 19 Nov. 2008.
8. Solbakk, John T. "Sámi Mass Media – Their Role in a Minority Society." Sámi Culture in a New Era: The Norwegian Sámi Experience. Ed. Harald Gaski. Davvi Girji, 1998.
9. Stordahl, Vigdis. "Sámi Generations." Sámi Culture in a New Era: The Norwegian Sámi Experience. Ed. Harald Gaski. Davvi Girji, 1998.
10. Lehtola, Veli-Pekka. The Sámi People: Traditions in Transition. "Participants in Modern Society." Aanaar-Inari, 2002. Pp. 57-62.
Costa Rica is a country of towering volcanoes, white sandy beaches, virgin forests, abundant wildlife, friendly people and much, much more. These are just a few facets of its biological diversity; few places in the world present such natural variety in so small a territory. Across its 51,100 km² you will find endless special places to discover, because Costa Rica has something for everyone, whatever their interests: ecotourism, bird watching, adventure sports, fishing, archeology or simple leisure. There is fun for all.

Costa Rica is located in Central America, bordered to the north by Nicaragua and to the south by Panama, with both a Pacific and a Caribbean coast. In the heart of the country there is a jagged mountainous spine that runs from the northwest to the southeast corner, forming the Pacific and Caribbean slopes. The volcanic mountain ranges (Guanacaste, Tilaran and Central) are separated from each other by relatively low passes and valleys. The Talamanca range cuts across the southern part of Costa Rica and continues into Panama; these are the highest peaks in the country. In the Central range lies the heart of Costa Rica, the Central Valley, where about 60% of the Costa Rican population lives. Geographically, Costa Rica presents a great variety of areas, which makes possible its wonderful biological, geological and human diversity and richness.

Costa Rica has an exceptional national parks system: 24.6% of the territory is protected under one of the categories contemplated in the system, such as National Park, Biological Reserve, Forestry Reserve and Wildlife Refuge. This reflects the strong national commitment to the preservation of nature. These outstanding wilderness areas provide shelter for almost all of the 120,000 varieties of plants, 237 species of mammals, 848 species of birds and 361 different amphibians and reptiles that are native to this country. The geography, topography and weather permit the existence of a great variety of environments, including deciduous forests, mangrove swamps, rain forests, marshes, páramos, cloud forests and coral reefs.

Costa Rica, located just ten degrees north of the Equator, enjoys the advantages of a perfect tropical climate, with just two seasons all year round: the rainy season and the dry season. Average temperatures change little between the seasons; the most important factor affecting temperature is change in altitude.

San Jose & The Central Valley: The heart of Costa Rica, San Jose and the Central Valley, is surrounded by beautiful mountains and volcanoes, where it is easy to find plantations of coffee, better known as "the golden grain." It boasts a nearly perfect climate, excellent museums, a wide selection of restaurants and other services. The city of San Jose is located in the Central Valley at 1,149 meters above sea level (3,769 feet) and has an idyllic climate, with temperatures that vary from 24 to 16 °C (75.2 to 60.8 °F) across the two seasons. San Jose offers visitors a wide variety of activities, including tours of some impressive museums.
Underground at the Plaza de la Cultura, a fantastic collection of gold gleams inside the enormous vault of the Gold Museum. The Jade Museum holds the largest collection of pre-Columbian jade in the Americas, along with some fine polychrome ceramics and a fascinating collection of grinding stones. The National Museum has an excellent historical and archaeological exhibit. San Jose also boasts an art museum, an entomology museum, a natural history museum, a butterfly park and a zoo. Hotels are located on or near the pedestrian walkway that meanders through the city center, past shops, coffee houses and restaurants. During a stay in San Jose you can visit nearby attractions such as:

The Poas Volcano: Located in Alajuela Province at 2,708 meters above sea level (8,884 feet), it is one of the most spectacular volcanoes in the country. It is an active volcano; the crater is a huge depression that measures 1.5 km in diameter and 300 meters deep, with a sulfurous, acidic hot-water lake showing slow effusive activity. There are also cinder cones that rise about 40 meters above the lake, and the fumaroles are very active. The second crater is the site of the Botos lagoon, a cold-water lake.

The Irazú Volcano: Located just 32 km northwest of the city of Cartago, at 3,432 meters above sea level (11,260 feet). It is an active volcano with a long history of violent eruptions. Its altitude (the highest in the country) and location permit, on clear days from the top of its Alto Grande peak, a magnificent view of both the Atlantic and Pacific Oceans together with a large expanse of the mainland. At the summit there are four craters, of which the main crater and Diego de la Haya are the principal sights; there is a permanent lake of yellow-green water at the bottom.

White Water Rafting: For those who want more adventure, white water rafting is an ideal experience on the Reventazon, Pacuare or Sarapiquí rivers, where you can enjoy crystalline water and beautiful tropical vegetation.

Costa Rica's Atlantic region, filled with pristine, untouched nature, is home to several of the country's most notable national parks, including Tortuguero, Cahuita, Braulio Carrillo, Chirripo and La Amistad. In addition, several biological reserves share this fertile coastal habitat: Hitoy-Cerere and Gandoca-Manzanillo. This is a coast where the evergreen mountains of the deep forest line long stretches of quiet, pristine sandy beaches. Year round, these long stretches of white or black sand are ideal for leisure activities, such as horseback riding, sun bathing, snorkeling among the colorful coral reef (home to an infinite variety of underwater sea life), or visits to the indigenous communities.

Tortuguero: One of the most attractive ecological destinations in Costa Rica. The canals, rivers and lakes of Tortuguero National Park are the place for studying the rain forest and freshwater and marine biology. The area can be reached by boat or plane.

Puerto Viejo - Cahuita: Cahuita National Park is located on the extended beaches of the southern Caribbean coast, about an hour south of Limon. It has several miles of white sand beaches lined with palm trees and trails that lead from the beach into the rainforest, and you can snorkel its coral reef, home to a great variety of underwater sea life.
Gandoca - Manzanillo: Without a doubt one of the coastal regions of the most magnificent scenic beauty in the country. Beaches of dazzling white sand fringed with coconut palm trees, a sea of gently curling waves and coral formations in shallow waters make this wilderness area a paradise for lovers of nature. Continuing to the south, we arrive at Puerto Viejo, an ideal place to rest a couple of days enjoying the white sandy beaches at Punta Cocles or Gandoca-Manzanillo.

MOUNTAINS & NORTHERN PLAINS

A land of volcanoes, cloud and humid forest, waterfalls, and trekking surrounded by nature. One of the main attractions of the area is the flawless silhouette of the Arenal Volcano. It is a breathtaking sight, perfectly cone-shaped, jutting out of the rain forest to approximately 5,000 feet above sea level. Its loud rumblings and frequent explosions of lava and ash, by day or by night when the red glow is dramatic, can easily be seen from several vantage points in the area.

Heading northwest from the volcano, you'll wind through countryside with rolling hills and green pastures on one side and the sparkling blue waters of Lake Arenal on the other. This man-made lake, more than 40 square miles of it, provides water for a hydroelectric plant. Activities on the water are varied, but the most popular adventure sport practiced here is windsurfing: Arenal's winds blow at world-class speeds (72 km per hour). The spring waters of the Tabacon River are another place to visit in the area, at the Tabacon Resort, where you can enjoy a natural spa surrounded by the fresh shade of the forest and the quiet whispering of wildlife. The Río Celeste area is a place of great scenic beauty where you can go horseback riding through natural forest along the Celeste River, whose water is a natural turquoise blue.

Monteverde: The Monteverde Cloud Forest Reserve has earned its fame as one of the most outstanding wildlife sanctuaries in the New World tropics. Positioned at the start of the Continental Divide in western Costa Rica, the reserve extends down both the Caribbean and Pacific slopes. The resulting combination of climatic and geographic factors creates temperature and humidity gradients that change dramatically over relatively short distances. The reserve supports six different vegetational communities. There are over 100 species of mammals, 400 species of birds, 120 species of amphibians and reptiles and 2,500 species of plants (among them 420 different kinds of orchids). Spectacular wildlife includes the jaguar, ocelot, tapir and the resplendent quetzal. In the area you can also enjoy other activities, such as horseback riding and the canopy tour.

Volcan Rincon de la Vieja: Located in the Guanacaste mountain range, just 27 km northeast of Liberia, at 1,916 meters above sea level (6,286 feet). On the summit, nine sites of volcanic activity have been identified, one of which is active and the rest inactive or in the process of degradation.
At the foot of the volcano is an area known as Las Hornillas ("kitchen stoves"), which extends over 50 hectares. It is also the site of hot springs that form small streams of very hot water, and of sulfate lagoons that fill small hollows in the ground and consist of constantly bubbling muddy water. The flora and fauna are abundant; this is the place with the largest population of the guaria morada, the national flower. In this area you can also take a canopy tour, an amazing experience that makes it possible to see the forest and its life from a different perspective.

Central - South Pacific: This coast has spectacular beaches all along it, some of them far from the noisy, active life of the city, which permits a complete, relaxing experience. There are several ways to spend your time in the region, with several national parks and biological reserves, and rows of African palms lining the route parallel to the coast up to Quepos.

Manuel Antonio National Park is one of the most beautiful parks in the entire system. It is especially attractive because of the white sandy beaches at Espadilla Sur and Manuel Antonio, which slope gently into undisturbed, crystal-clear water. The beaches are furthermore fringed by a tall evergreen forest, which grows right down to the high-tide mark and provides pleasant shade.
Does the Family Farm Really Matter?

Harold F. Breimyer, Department of Agricultural Economics
A.L. (Roy) Frederick, Department of Agricultural Economics, Kansas State University

Family farms have been revered in America since the days of Thomas Jefferson. But as more complex technology and commercial management enter farming, and as the national economy becomes more urban-industrial, the question arises whether old values in agriculture can, and should, survive. To put it squarely: Does the family farm really matter?

Does it matter whether the agriculture of the future is composed of family farms, or of big corporation farms, franchised or contract farming, tenant farming for absentee owners, or even new arrangements such as cooperative farms? Does it matter to farmers, to consumers, to rural communities, to the nation? Does it matter with regard to productivity, food supplies, conservation of natural resources, and protection of the environment? Finally, does the kind of agriculture matter enough to affect public policy? National policy is involved in present trends away from family farming, and policy will have a bearing on whether family farming survives in the future.

What is meant by "family farming"?

The farming our founding fathers had in mind was seen as a welcome change from the feudal system that had prevailed in medieval Europe. That system was class-stratified: the landowner was lord and the workers were serfs, and plowing fields and tending herds was the lowest form of employment. By pleasant contrast, in the big and fertile new continent the farmer could hope to enjoy the exalted status of freeholder. He could own and manage the land of his labors and receive its product. That dream has never been expressed better than by the late John Brewster:

In permitting the hitherto separate roles of lords and serfs to be recombined within the same skin, the virgin continent gave working people the chance to (become) free-holders. The emerging agriculture of family farms generated within everyday people an envisioned realm of equal dignity and worth, which all America soon enshrined within her national self-image.

The family farmer still plays a multiple role. He is owner, worker, and manager. Moreover, he is a marketer, and markets are open to him.

Family farms are implicitly of modest size, but size is defined in terms of what family labor can care for; acreage, investment or volume-of-sales figures are less applicable. Most labor on a family farm is provided by the family. Thus, hired labor cannot exceed the labor provided by the farmer and family; the maximum amount of hired labor is often put at either 1-1/2 or 2 man-years. The key feature is that family labor dominates.

All land need not be owned by the farm operator, but most family farmers will own at least part of the land they farm. A few may temporarily be full tenants, but neither widespread nor lifelong tenancy is considered to be family farming. Management may be vested in individual proprietors or family partnerships, but the right to make independent production and marketing decisions is crucial. Family farmers can freely buy supplies and sell the commodities they produce. If they must have production contracts, they are not truly family farmers.

Open markets are essential

Even though family farms are not defined here by volume of sales or assets, a system dominated by a relative handful of very large farms would not be considered a family farming system, regardless of who owns, works on, or manages individual farms.
On the other hand, a family farm does not mean an unprofitable small farm. It is one where efficient production methods enable farmers to earn acceptable incomes in line with their personal abilities. Because family farming, like any human institution, stands in more danger of disregard than of denial, these definitional concepts must be adhered to fairly strictly.

Family farming is not a closed shop

One of the most important intangible qualities of family farming is that such a system offers opportunity. Young people have been welcomed into family farming. In recent years, sharp increases in the cost of entry have worked against preserving this characteristic of family farming. Entry barriers now bring urgency to the question of whether a system of family farms can survive.

The trend away from family farms

Present trends indicate that the family farm as the nucleus of U.S. agriculture is slipping away. We are moving toward a dual agriculture. At one extreme are many small farms, most of them part-time. Fifty percent of all farms, as defined by the U.S. Census, market only about three percent of all farm products. Most of these farmers depend on non-farm income for their living. They are not easily dislodged from farming, although rising costs of fuel for transportation may work against them.

At the other extreme are very large farms. In 1978, the very largest farms, 2-1/2 percent of the nation's total, accounted for 40 percent of all marketings. Among the largest farms are some large land holdings, but commercial cattle feedlots, egg cities, and large hog operations also are prominent. About one-sixth of all farm marketings come from contractually integrated production. Poultry and fruits and vegetables for processing are well-known examples, but contracting extends across much of agriculture. Family farms, intermediate in size and distinguished by their market connection, now contribute no more than half of all farm marketings.

Why the trends take place

Some argue that new technology, particularly larger field equipment, is almost totally responsible for the trend toward bigness in agriculture. But many explanations now center on financial incentives that favor larger farms. Farm program benefits, applied on a bushel or acre basis, favor larger farms. Tax rules favor high-bracket investors in agriculture, including high-income non-farmers. Larger farmers can often enjoy better access to credit.

Lack of access to markets may be the most subtle threat to family farms. By definition, a family farmer engages in open buying and selling of materials used in production and of commodities produced. The largest farms, however, tend to buy and sell direct. Bypassing local and central market firms, whether machinery dealers, livestock auctions, or a dozen others, threatens the dispersed local market institutions on which family farming depends.

How psychology affects survival

Almost by their nature, family farmers lack powers of survival. The reason lies in the psychology of the individual farmer, whose concentration on his own operation tends to distract him from concern for forces that affect family farming as a whole. This has been referred to as family farming's "non-instinct for self-preservation." Although many examples could be given, a prominent one is family farmers' support for income tax concessions. Each concession — a deduction from income subject to tax, or to the tax itself — looks attractive to the individual family farmer.
But because of our tax structure, most concessions are relatively more advantageous to the high tax-bracket investor, whether farmer or non-farmer. The net effect of these concessions is harmful to ordinary family farmers.

High productivity and a good food supply

Among the questions associated with family farming, the productivity record of family farm agriculture is perhaps most widely acclaimed. American consumers have an abundance of nutritious food available to them. In addition, sizable quantities of farm products, notably the grains, are exported to foreign markets each year.

But the record does not prove family farming to be more productive than, say, a system organized along the lines of industrial corporations or even a tenant-dominated agriculture. Big, well-managed corporations are adept at using the latest technology. Non-farm landlords relieve operating farmers of the burden of raising capital for purchase of land. Certain operations, such as commercial feeding of cattle, can be more economical at a large scale. The big new hog facilities may have a genuine advantage. But these hog facilities often are subsidized by income tax deductions. It remains to be seen whether they can weather the low-price period of the hog cycle better than family hog farmers can.

There are some production economies in farming up to about two worker-years of labor. Beyond that size, the output per unit of input changes little. Productivity differences between the well-managed family farm and most other kinds of farming are not wide enough to be the basis for policy choice.

Going beyond production efficiencies, could farming operations get so large as to exert damaging market power? Particularly if the big firms can join together, directly or tacitly, they may be able to lift the price of farm products exorbitantly and resist price declines when supplies are large. This is the threat consumers are most sensitive to. Nor is there clear evidence that higher prices imposed by huge firms would help workers on the land.

Conservation of soil and protection of the environment

Farm leaders often declare that family farmers accept a stewardship relationship to soil and the environment. Family farmers want to preserve farm productivity for future generations, it is said. By implication, other farmers are thought less likely to be good stewards. This attitude prevails widely and is sincerely believed. Unfortunately, not all family farmers have lived up to these noble ideals.

The family farm tradition is potentially positive toward conservation and environmental protection. But measures to improve conservation and to regulate the use of chemicals will necessarily be initiated through government. Family farmers will cooperate as well as others, but family farming is not by itself a guarantee of adequate conservation and environmental protection.

Financial welfare of farmers

Family farmers have not always fared well financially. The succession of farm programs enacted since 1933 is evidence of the public's concern about the financial well-being of farmers, especially family farmers. Would they do better in a different system of agriculture?

Some "farmers" could become employees of industrial-type corporations. They would qualify only for wages and salaries. Over time, their income would become similar to earnings in industry. But they would get no returns from managing or land holding, as a family farmer does. Wage workers, and some salaried ones too, would eventually be unionized — perhaps to their gain.
They would be protected by unemployment insurance and other fringe benefits that go with industrial employment. They also would be subject to seasonal changes in employment and layoffs.

What about family farming versus full tenancy? The tenant farmer receives only the income generated by his labor and by the amount of capital he provides. He gets none of the return creditable to land. Moreover, the historical record shows that when tenant farming becomes widespread, it is difficult for the tenant farmer to protect his income because of the intensified competition for land.

The question of financial returns brings us back to the multiple role of the family farmer. The farmer who owns at least part of his land gets a combined income from land, labor, capital, and management. As land becomes relatively scarce, more of the total return generated in farming (including capital gains) will go to the landholder. Family farmers then will be likely to enjoy a growing advantage over farm workers, tenants, or contractees. However, they may not do as well as absentee landowners or the owners and managers of large land-holding corporations, all of whom will want to acquire land for its increasingly attractive returns on investment.

Opportunities and other values

In the early 19th century, land settlement was a big part of the economic growth of the nation. Family farmers who cleared land and plowed the virgin soil provided the underpinnings for increased commercial activity. There also was a belief that those who owned and lived on the land would want to protect it, their home, their community. Family farmers were seen as responsible citizens and the backbone of a democracy.

The idea that life on the land develops superior personal qualities is known as agricultural fundamentalism. The doctrine still has strong adherents, even though farmers are less different from non-farmers than they used to be. However, these good attributes are most often associated with the proprietary or family farmer. They would be less visible in wage workers or lifelong tenants. The man or woman in charge of his or her own land and livestock is, supposedly, the one most possessed of "fundamental" values. These values were easier to realize when the open frontier was an invitation to opportunity. If the qualities of family farming are worth preserving, conscious effort must be made to keep the door of opportunity open.

Does the family farm matter to the rural community?

Of all questions about the qualities of family farming, its meaning to the rural community offers the most clear-cut answer. Whether or not family farming is preserved does matter to the rural community. The question cuts two ways. First, does family farming contribute to the financial strength of local businesses? Second, are proprietary farmers better community participants than farm wage hands, tenants, or contractees?

An especially strong case can be made in answer to the first question. Family farmers buy most of their inputs from local suppliers (including their cooperatives). They sell most of their products into local or regional markets. Much of the business enterprise in rural towns and small cities is farm-connected. In sharp contrast, large corporations engaged in farming are less likely to get their credit from local banks, their machinery from local dealers, or their fertilizer from the local farm supply firm. They also are more likely to sell their products directly to a distant market or processor.
An absentee-landlord agricultural system lies midway between family farming and big corporations in support of local businesses. Tenants do not bypass local suppliers and markets the way big corporations do. Even so, absentee landlords, like industrial corporations, drain farm income away from the local community. Less of it remains to be spent locally for farm inputs and especially for food, clothing, recreation, and other items for family living.

How well farmers of various categories enter into local community activity is more difficult to generalize. Family farmers clearly enter into community activities more actively than wage workers, but little data is available on how well tenant farmers participate in community affairs. The pattern for farmers producing under contract is mixed. Some contracting farmers have low incomes and may feel themselves to be of low standing in their communities. But contractual producers of vegetables for canning, even though they have transferred much risk-bearing and management to the contractor, enjoy relatively high income and hold positions in their communities. Owners or managers of large, industrial-type farms who live outside the local community would assume few civic responsibilities within the community.

Summary and implications

This guide has presented the unchallengeable data on the gradual decline in family farming, more judgmental notions as to why those trends are occurring, and the highly personal concern as to whether the trends matter. Whether they matter depends on one's appraisal of the impact to be expected from a highly concentrated agricultural system as contrasted with dispersed family farming.

In an industrial-type structure, the incomes of persons working in farming might be protected reasonably well, especially if all the trappings of unionization and fringe benefits were added. But those persons would still be wage workers. Hence the nagging question arises once more: How much importance is to be attached to the status of the family farmer who both labors on the land and owns and manages it?

Where does the public interest lie? When economic and sociological values are taken into account, is it better to have a farming sector of proprietary farmers who provide most of their own labor as well as capital and management? Or is there nothing to fear from a class-stratified agriculture — either one of tenancy as farmers work the land held by absentee landlords, or one of industrial corporation control through contract in which "farmers" are essentially wage hands?

- John M. Brewster, "The Relevance of the Jeffersonian Dream Today," in Land Use Policy and Problems in the United States, Howard W. Ottoson, editor, University of Nebraska Press, Lincoln, 1963, pp. 94-95.
- Who Will Control U.S. Agriculture? Policies Affecting the Organizational Structure of U.S. Agriculture, and Who Will Control U.S. Agriculture? A Series of Six Leaflets, University of Illinois at Urbana-Champaign, Cooperative Extension Service, Special Publications 27 and 28, 1972 and 1973.
- Harold F. Breimyer, "Farming's Non-instinct for Self-Preservation," in Farm Policy: 13 Essays, Iowa State University Press, Ames, 1977.
- William D. Heffernan, "Agricultural Structure and the Community," in Can the Family Farm Survive?, MU Agricultural Experiment Station Special Report 219, 1978.
Ancient Timekeepers, Part 2: Observing the Sky

We perceive the universe we inhabit as having three dimensions of space and one dimension of time. Our planet wobbles slowly like a gyroscope in space, spinning once per day while it "circles" the sun during its annual course. Despite observing the sky from this moving point of view, ancient astronomers were able to discover and precisely describe the basic astronomical cycles of our planet. They created calendars to keep track of these cycles and to predict solar and lunar eclipses. Observing the sky also enabled the ancients to establish cardinal points and latitude for any place on Earth. Thousands of years ago people were as intelligent as (or more intelligent than) we are today. Without prior knowledge of astronomy, using simple methods to observe the sky, their logical minds, and their imagination, they were able to gain an incredible understanding of our solar system.

2.1 Observing the Sky – Basics

1. Our Planet is Round

Looking at the Sun and the Moon could easily provide a "hint" that our planet might also be round. Such a hint could be confirmed by observing lunar eclipses (the shadow of the Earth is round as well).

2. Our Planet is spinning on its axis

Observing the movement of stars at night, as well as the movement of the sun and the moon, makes it obvious that our planet is rotating around its own axis.

3. The axis of rotation is tilted

Observing, over the course of a year, changes in the position of the sun at noon (or of specific stars at night) relative to the local horizon could lead to the discovery of the axial tilt and the measurement of its exact value. At night we can look directly at a star (or planet) and measure its angle above the local horizon. During the day, looking at the blinding sun directly is difficult (without a filter) and not practical; it is much easier to observe the length of the shadow of a vertical obelisk (or the angle of a "stick" pointing exactly at the sun so that it casts the smallest shadow). The Sun reaches its highest point above the horizon at noon each day, and this angle is all that ancient astronomers needed to observe.

It is worth mentioning here that the relative location of the Sun above the horizon is not constant from day to day when observed at the same clock time each day. This is due to the tilt of Earth's axis (23.44°) and its elliptical orbit around the Sun.

Analemma for Earth plotted as seen from the Royal Observatory, Greenwich (latitude 51.4791° north, longitude 0°). The equinoxes for this location occur at altitude phi = 90° – 51.4791° = 38.5209°, and the solstices occur at altitudes phi ± 23.439° (the angle of the axial tilt of the earth). Image source: Wikipedia.org

Since the Earth's mean solar day is almost exactly 24 hours, an analemma can be traced by plotting the position of the Sun as viewed from a fixed position on Earth at the same clock time every day for an entire year. The resulting curve resembles a figure of eight. An analemma is basically the figure-8 loop that results when one observes the position of the sun at the same time of day over the course of a year. As a result of the earth's tilt about its axis (23.5°) and its elliptical orbit about the sun, the location of the sun is not constant from day to day when observed at the same time on each day over a period of twelve months. Furthermore, this loop will be inclined at different angles depending on one's geographical latitude.

Analemma with the Temple of Olympian Zeus (132 AD). Copyright © 2001-2011, Anthony Ayiomamitis. All rights reserved.
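The noon-altitude observation above is easy to reproduce numerically. Here is a minimal sketch of our own (not from the original article) that approximates the Sun's declination with the standard textbook cosine formula and derives the noon altitude for the Greenwich latitude quoted above; the 10-day offset and the 365-day year are rough simplifications, not ephemeris-grade values:

```python
import math

AXIAL_TILT = 23.44  # degrees, Earth's obliquity

def solar_declination(day_of_year):
    """Approximate solar declination in degrees.

    Common approximation: declination reaches -23.44 deg near the
    December solstice, about 10 days before January 1.
    """
    return -AXIAL_TILT * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def noon_altitude(latitude_deg, day_of_year):
    """Altitude of the Sun above the horizon at local solar noon, in degrees."""
    return 90.0 - latitude_deg + solar_declination(day_of_year)

# Greenwich, latitude 51.4791 N, as in the analemma example above
for day, label in [(80, "March equinox"), (172, "June solstice"), (355, "December solstice")]:
    print(f"{label:17s} noon altitude = {noon_altitude(51.4791, day):5.1f} deg")
```

At the equinox the sketch lands within about half a degree of the 38.52° figure quoted above (the cosine formula is only approximate), and the two solstice values sit roughly 23.44° above and below it, which is exactly the vertical extent of the analemma.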
4. Precession – the axis of rotation wobbles

Based on the above observations, we can see that it is not difficult to notice the basic movements of the Earth. Two of them (the daily and annual cycles) are easily noticeable by anyone within a short period of observing (1 day for the rotation cycle and 1 year for the orbital cycle). The third cycle, called precession (also known as the Platonic Year), takes longer to observe. Although it is a 25,920-year cycle, it can be noticed in just 36 years as the result of systematic observations of the night sky.

Precession causes a slow regression of the apparent position of the Sun relative to the backdrop of the stars at some seasonally fixed time, say the vernal equinox. This change is only 1 degree in 72 years (25,920/360 = 72). Considering that the Sun and the Moon have angular diameters of about half a degree, it is reasonable to say that any changes in the night sky of this magnitude should be easily observable; therefore precession is noticeable in just 36 years (a half-degree change).

2.2 Simple Instruments of Ancient Astronomers

Ancient astronomers used primarily "naked-eye" horizon-based astronomy. The easiest way to record spatial observations — other than by framing them against prominent topographic features — was by erecting durable markers of their own in the landscape (including pyramids and temples) to calibrate the rising or setting azimuths of the sun, moon, planets, and stars. Nabta Playa, Stonehenge, and Chichen Itza are just a few examples.

Vincent H. Malmström of Dartmouth College describes an interesting astronomical relationship that exists between the three ceremonial rings of Zempoala (Mexico): In the central plaza of Zempoala, just beneath the massive pyramids that frame its northeastern corner, are three intriguing rings of stone, each fashioned of rounded beach cobbles cemented together to form a series of small, stepped pillars. The largest of the rings contains 43 of the stepped pillars, the middle-sized ring has 28 such features, and the smallest ring numbers 13 stepped pillars around its circumference. It would appear that the three rings were used to calibrate different astronomical cycles, possibly by moving a marker or an idol from one stepped pillar to the next with each passing day (in somewhat the same way that has been suggested for recording the passage of time at the Pyramid of the Niches at El Tajín).

Astronomers also used very simple devices to measure the altitude of a celestial body above the horizon. Among them were the quadrant, the cross staff, and (later) the astrolabe. A quadrant is an instrument used to measure angles up to 90°. With a "back observation quadrant," the observer viewed the horizon from a sight vane (C in the figure on the right) through a slit in the horizon vane (B). This ensured the instrument was level. The observer moved the shadow vane (A) to a position on the graduated scale so as to cause its shadow to appear coincident with the level of the horizon on the horizon vane. This angle was the elevation of the sun.

Note: Crichton Miller recently proposed that a version of the cross with a circle and a plumb bob could have been one of the early astronomical instruments. He discovered that the only appropriate instrument that could have been used by the architect, in the place of a theodolite, was a derivative of the cross, with the addition of a plumb-line.
This incredibly simple, yet complex instrument has the potential to measure angles and inclinations to an accuracy of 1 minute of arc, or 1/60th of a degree, depending on the size of the instrument used. This is extraordinary accuracy for what appears to be only two pieces of wood, a scale, and a plumbline. One of the most interesting but obscure abilities of the cross is its capability to take sidereal measurements.

The astrolabe was originally used by astronomers to find the positions of the stars and planets. The positions of various stars and planets are marked on the face of the astrolabe, and by setting the moveable parts of the astrolabe to a specific date and time, the entire sky, both visible and invisible, is represented on the face of the instrument. Typical uses of the astrolabe include finding the time during the day or night, finding the time of a celestial event such as sunrise or sunset, and serving as a handy reference of celestial positions. The oldest known astrolabes were created a few centuries BC, possibly by Hipparchus.

Obelisks and Sundials

To determine noon time and cardinal points, ancient astronomers used obelisks (set vertically with the use of a plumb line). The gnomon is the part of a sundial that casts the shadow. In the northern hemisphere, the shadow-casting edge is normally oriented so that it points north and is parallel to the rotation axis of the Earth. That is, it is inclined to the horizontal at an angle that equals the latitude of the sundial's location. At present, such a gnomon should thus point almost precisely at Polaris, as this is within a degree of the north celestial pole. On some sundials, the gnomon is vertical. These were usually used in former times for observing the altitude of the Sun, especially when on the meridian. The style is the part of the gnomon that casts the shadow. This can change as the sun moves. For example, the upper west edge of the gnomon might be the style in the morning and the upper east edge might be the style in the afternoon.

2.3 Finding Cardinal Points

Polaris (Alpha Ursae Minoris, commonly called the North(ern) Star, Pole Star, or Lodestar) is the brightest star in the constellation Ursa Minor. Today it is very close to the north celestial pole, making it the current northern pole star. Over long periods of time (hundreds of years), precession of the Earth's axis of rotation causes it to point to other regions of the sky, tracing out a circle over 25,900 years. Other stars along this circle have served as the pole star in the past and will again in the future, including Beta Ursae Minoris, Thuban, and Vega. As a result, ancient astronomers had to use other methods to precisely locate the cardinal points (N, S, E, W). Simple methods of finding cardinal points are based on observing the night sky or the Sun.

1. Solar Methods (3 variations):

– Observing the Sun's direction at noon (its highest position above the horizon). This method does not work so well closer to the equator (i.e., between the Tropic of Cancer and the Tropic of Capricorn) since, in the northern hemisphere, the sun may be directly overhead or even to the north in summer. Once or twice each year, people who live at lower latitudes (within 23.5 degrees of the equator) can see the sun reach the zenith, an imaginary point directly overhead. A vertical post would make no shadow when the sun was at its zenith. The path the sun takes on these days — from sunrise through zenith, to sunset — is called the zenith passage. Right at the equator, the zenith passage coincides with the equinoxes.
One of the simplest tools to measure the angle of the sun above the horizon is the gnomon. The gnomon at the base of Edzná's principal pyramid, "Cinco Pisos," consists of a tapered shaft of stone surmounted by a stone disk having the same diameter as the base of the shaft. At noon on the days of the zenithal sun passage, the entire shaft is in the shadow of the disk; at other times, the shaft itself casts a shadow, as in the photograph above. Photo by Professor Vincent H. Malmström. Source: http://www.dartmouth.edu/~izapa/Beyond-the-Dresden-Codex.pdf

– Observing the shadow of an obelisk and marking two points on a circle where the shadow has the same length (the line dividing in half the angle between the two points gives the true S-N direction). The image of a model shows the gridlines and the cast shadow of the obelisk. The tip of the shadow moves during a day along a hyperbola. For specific days, e.g. the solstices or equinoxes, the hyperbolic shadow path is marked on the area on which the obelisk is standing. Observing the progression of a shadow on a surface marked with a grid enables the measurement of date and local time. Image source: http://www.horizontastronomie.de/eobelisk.html

– Observing sunrise and sunset on the day of the equinox. This simulation reveals how the Sphinx is aligned with the 2nd pyramid on the September 21 equinox. Wind rose markers are the 16 elliptical plaques embedded in the ring surrounding the ancient Egyptian obelisk at the center of Vatican Square. The obelisk's shadow also works as in a sundial — it can measure daytime.

2. Stellar Method:

– Observing one of the bright circumpolar stars (e.g., Alkaid) and marking its 2 extreme positions crossing a horizontal line. For example, 4,500 years ago Alkaid appeared in the evening sky 17 degrees west of the meridian and moved to 17 degrees east of the meridian before sunrise, when it reached the same altitude. Bisecting the 34-degree angle would give a line pointing to true north.

– Observing 2 stars that pass the meridian near midnight (as described by Dr. Kate Spence). Dr. Kate Spence, a British Egyptologist, believes she may have solved two of the great mysteries of archaeology — how the ancient Egyptians aligned the pyramid with such remarkable geographical accuracy, and when the vast royal tomb was built. The Great Pyramid is extremely accurately aligned towards north: the sides deviate from true north by less than three arc minutes — that's less than a twentieth of a degree, which is extremely accurate in terms of orientation. In the past, the date of the ancient Egyptian pyramids has been a source of much debate among historians, who have put it at around the middle of the third millennium BC by tracing the chronology of kings. However, publishing her research in the scientific journal Nature, Dr. Kate Spence says her theory gives a more precise date for the beginning of construction work on the Great Pyramid. The celestial north pole was aligned exactly with Kochab and Mizar only in 2467 BC, which would put the beginning of building work about 70 years later than many archaeologists have previously thought.

2.4 Measuring the Geographic Latitude of a Given Location

The angular distance north or south of the earth's equator, measured in degrees along a meridian, as on a map or globe, is called "latitude". The angle of the sun above the horizon at noon on the day of the equinox gives the geographic latitude of that location — for example: 90 – 60 = 30 deg. Simple geometry shows that the angle between the zenith and the celestial equator (i.e.
the zenith's declination) must also be the angle between the north celestial pole and the north horizon.

Measuring latitude using the sun can only be done at noon, when the sun is at its highest point in the sky. If you measure it on an equinox, the angle phi is the latitude. On any other day you will have to compensate for the Sun's declination, which ranges over the year up to the axial tilt (23.5 deg) on either side of the celestial equator. Since the zenith's declination is equal to one's latitude, one can also determine latitude by measuring the altitude of the celestial north pole (currently well approximated by the position of the star called Polaris). (A short worked sketch of the noon-sun method follows at the end of this article.)

The study of how people in the ancient past understood and used the phenomena in the sky, and of what role the sky played in their cultures, is called "archaeoastronomy". In "Ancient Timekeepers, Part 3: Archaeoastronomy" we present ancient calendars and the evidence of a very good understanding by ancient people of the astronomical cycles of our planet. Examples of such evidence can be found in surviving ancient writings (Mesoamerica, Egypt, and India) and in monuments. Places like Stonehenge and the pyramids of Egypt and Mesoamerica have encoded in their location, orientation, shape, and dimensions an incredible understanding of astronomy.
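As the promised closing sketch of the section 2.4 method — this is our own illustration, not part of the original article — latitude follows directly from the measured noon altitude once the day's solar declination is accounted for (zero at the equinoxes):

```python
def latitude_from_noon_sun(noon_altitude_deg, declination_deg=0.0):
    """Observer's latitude in degrees north, from the Sun's noon altitude.

    On an equinox the declination is ~0, so latitude = 90 - altitude.
    On any other day the Sun's declination (between about -23.5 and
    +23.5 degrees) must be added back in, as noted above.
    """
    return 90.0 - noon_altitude_deg + declination_deg

# The worked example from section 2.4: a noon equinox altitude of 60 deg
print(latitude_from_noon_sun(60.0))        # 30.0, i.e. 90 - 60 = 30 deg
# Near the June solstice at the same spot the Sun stands ~23.4 deg higher:
print(latitude_from_noon_sun(83.4, 23.4))  # still ~30.0 deg
```

The formula assumes a northern mid-latitude observer with the noon Sun to the south; for an observer south of the Sun the signs flip.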
How Does a Heatsink Work?

Effective heatpipe design is significantly more complex than gluing a copper brick to a semiconductor, of course. Most of the action within a CPU heatsink happens inside of the copper heatpipes, which often use material phase changes and capillary action to cool microprocessors. But before we get into the specifics, let's cover the basics: a heatsink's objective is to draw heat away from the hot, underlying chip, which generates heat as a result of its (relatively) high frequency and the electrical current coursing through the cores. Improving core stability by amplifying voltage (in the form of vCore) will generate yet more heat, so in overclocking applications, aftermarket heatsinks are particularly noticeable.

Stock heatsinks are much more simplistic than the aftermarket products we review, so we'll focus almost entirely upon aftermarket cooling technology for this article. The stock sinks tend to be a composition of a top-mounted fan, aluminum fins, and a flat copper base -- a far cry from the liquid-filled, sintered or grooved copper heatpipes that are used in aftermarket sinks. Using a fusion of these heatpipes, fan design that minimizes air resistance, aluminum or copper fins to maximize surface area, and high thermal conductivity interfaces, heatsinks and coolers are able to conduct heat from the surface of the CPU and escort it out the back or top of the case. Much of this comes down to thermodynamics and sciences pertaining to thermal conductivity and materials engineering, which we'll cover at a very top level in a section below (see: Materials & Thermal Conductivity). We've put together the below image to help familiarize you with the inner workings of a CPU heatsink and its related terminology.

The primary elements of a CPU cooler are all covered in this graphic. For the most part, the action happens in the heatpipes, but we're also faced with the actual heatsink, the overall surface area, the contact technology used to transfer heat to the pipes, and fan positioning. The cooling pipeline for a heatsink is pretty straight-forward; here's what we're usually looking at:

- The CPU generates heat; this heat is absorbed through a conductive baseplate or directly-touching heatpipes on the heatsink.
- The heat causes liquid within the heatpipe to undergo a phase change, resulting in its transition to a gas. A significant amount of energy is consumed (in the form of heat) during this phase change; this is responsible for a lot of the heat reduction we experience. We then move to the dissipation stage...
- The heat (gas) travels up the pipe and eventually reaches the condenser, which condenses the gas back into liquid form and uses capillary action to transport it back to the evaporator.
- During its trip through the pipe, heat is absorbed by the adjoining (hopefully large) heatsink, where it is dissipated through the fins and cooled by the new, cool air being injected by the fan.
- The liquid is guided back down to the evaporator section of tubing (atop the CPU) through sintered, grooved, mesh, or composite tubing (explained further below), called a "wick" or "capillary structure." Capillary pressure is created by the wick, forcing coolant to return to the evaporator where it can be re-used.

Pretty cool stuff, right? Yeah, yeah.

What Makes a Good CPU Cooler / Heatsink for my purposes?

All of this information can be used in buying decisions to help weed through the ever-increasing number of heatsinks available.
Understanding the basic physics behind a heatsink's functionality helps us determine what design and engineering elements govern a quality product; as always, if you'd like more direct input from us on your system-building endeavors, feel free to comment below or post your question on our hardware forums! Let's expand on each of the previous topics.

Materials & Thermal Conductivity

Materials have everything to do with the efficiency of your heatsink. Starting with a basic chart of relevant materials makes sense (typical handbook values):

| Material | Thermal Conductivity (W/mK) at 25°C |
| --- | --- |
| Air | ~0.026 |
| Thermalpaste (avg.) | ~5.3 - 8.5 |
| Aluminum | ~205 |
| Copper | ~401 |

Given air's low thermal conductivity, it's evident why we can't just blow air past a CPU to achieve performance-grade cooling. Copper and aluminum, on the other hand, make excellent heatsink materials for our purposes: copper is objectively the best material for gaming-grade PC heatsinks, but aluminum tends to be the most cost-friendly option and can still exhibit considerable cooling capacity given solid enough design. That doesn't change the fact that copper has the best conductive heat transfer potential; it's commendable to search for heatsinks that use copper heatpipe structures and copper fins, though copper fins are not required by any means -- we do always recommend copper heatpipes, though.

Conductive heat transfer is expressed through Fourier's law as q = k A dT / s, where A = heat transfer area, k = the material's thermal conductivity, dT = temperature difference across the material, and s = material thickness. (Read more about this at Engineering Toolbox; a short numeric sketch of the formula follows below.) Despite copper and aluminum differences, we're still limited in cooling efficiency by the fan, the case airflow, the surface area of the heatsink, and the surface roughness of the contact plate.

As a sort-of side note, a lot of manufacturers use nickel plating or other aesthetic-only materials to cover up copper and aluminum, so don't just use looks to determine whether something is aluminum or copper. Cooler Master's T812 is an example -- it uses a copper base, but is coated in a way that almost makes it appear aluminum. Always check the specs for the final word.

Surface Area & Surface Roughness

Surface area was rated by our Zalman contact (Edmund Li) as one of the most important aspects of a cooler's functionality, and it makes sense: a larger chunk of grooved/finned metal provides more area for the heat to distribute itself. This is largely bolstered by fin designs that are optimized to maximize surface area, further enabling the unit's ability to cool. Luckily, this is one of those items that's pretty simple to shop for - big being better, in this case - just make sure you choose something that makes sense for your system. Grabbing the heaviest heatsink out there won't matter if it doesn't fit in the case and puts too much strain on the CPU or motherboard. Just grabbing any massive aluminum heatsink is probably not for the best, of course, given the importance of heatpipes, surface smoothness, and copper's place in the world.

Surface roughness is a measurement of the base plate's smoothness (measured in microinches) and overall ability to connect directly with the surface of the CPU. In a perfect world, there would be no thermalpaste and the copper base plates would come in direct, flush, perfectly smooth contact with the CPU... but we don't live in a perfect world, and if we did, I'd be playing games while floating in a tube of water, not writing about heatsinks.
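To make Fourier's law from the materials section above concrete, here is a minimal sketch with made-up plate dimensions (a hypothetical 40 x 40 mm, 3 mm thick base with a 10 K drop across it -- illustrative numbers, not figures from any cooler we've reviewed):

```python
def conductive_heat_transfer(k, area_m2, delta_t_k, thickness_m):
    """Fourier's law for one-dimensional conduction: q = k * A * dT / s.

    k           -- thermal conductivity of the material (W/mK)
    area_m2     -- heat transfer area A (m^2)
    delta_t_k   -- temperature difference dT across the material (K)
    thickness_m -- material thickness s (m)
    Returns the heat flow q in watts.
    """
    return k * area_m2 * delta_t_k / thickness_m

# Hypothetical 40 x 40 mm base plate, 3 mm thick, with 10 K across it:
area = 0.040 * 0.040
for material, k in [("copper", 401.0), ("aluminum", 205.0)]:
    print(f"{material:8s}: ~{conductive_heat_transfer(k, area, 10.0, 0.003):.0f} W")
```

The roughly two-to-one copper/aluminum ratio in the output is the same ratio as in the conductivity chart, which is the point: for a fixed geometry, conducted heat scales linearly with k.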
The reason we even need thermalpaste, as we explained in this previous post, is because microscopic divots in the surface of the connecting materials create air pockets. Air gets trapped in these pockets at high temperatures, causing uneven thermal distribution and resulting in hotter core temps. A thermal interface, while of significantly lower thermal conductivity than pure copper or aluminum, provides an air-tight sealant between the divots that allows heat to cleanly migrate from the CPU surface to the cooler base plate. Smoother is better.

Thermalpaste's thermal conductivity will impact the temperature moderately, but not normally enough that it's justifiable to spend lots of money on thermal compound. If you're doing serious overclocking and need every single degree you can muster, then by all means, consider a tube of MX-4. But for most of us, 5.3 W/mK - 6.x W/mK is more than enough to keep things under control. And it's affordable.

Heatpipe Exposure and Wick / Capillary Design

And now we're back to heatpipes! There are two prevailing chamber designs in the CPU heatsink market: vapor chambers and traditional capillary heatpipes. We'll cover the latter first due to their dominance.

As this image shows so well, a heatpipe contains a very small amount of coolant or liquid (normally a mix of ammonia and ethanol, or distilled water) which undergoes phase changes -- this is the catalyst for our reduced temperatures. The evaporator (CPU surface region) evaporates the liquid, which travels in gaseous form toward the condenser. The condenser then—you guessed it—condenses the gas back to liquid form, where it travels down grooved, sintered, metal mesh, or composite tubing as a result of capillary action.

The grooved wick design looks precisely like you'd think -- it's grooved cleanly down the interior of the tube -- meanwhile the sintered design carries a more foamy and porous look. Metal mesh designs are more common among consumer heatsinks and vaguely resemble a basket's woven pattern. Thermolab cut open some heatpipes to reveal their insides, which makes the explanation a bit easier. Zalman uses a fourth design—composite heatpipes—which mix copper powder inside of the pipe to help aid in thermal transfer (the steam travels faster). Composite and sintered heatpipes have much higher production costs than grooved pipes; as for which makes a "better" heatsink, it really comes down to individual product testing due to the many other variables -- but composite and sintered heatpipes are preferable, albeit rare.

Heatpipes connected directly to the surface of the CPU will cool it more efficiently for a short period of time (we were told "about an hour" by Zalman), but as heat builds and time progresses, that tends to equalize; direct-touch heatpipes are not often noticeably more effective than polished base plates when it comes to endurance cooling. What is noticeable, though, is a copper base versus an aluminum one -- you'll want copper exposed directly to the CPU for the best heat-wicking potential.

Vapor chambers are a little bit different and aren't quite as common, but are still worth a quick mention: vapor chambers are used where processing units generate disproportionately high, localized heat; a vapor chamber helps spread this additional heat more evenly across the fins within the heatsink (rather than favoring fins in close proximity to the hotspot).
Cooler Master's 812 uses both vapor chambers and heatpipes, and they created this image to help explain their usage: it's effectively the same as a heatpipe in its functionality; they just use a slightly different design to attract location-specific heat.

Fan Positioning & Noise Reduction

Noise levels are always going to be a problem with small fans, but fan positioning and cooling optimization can help reduce the requirement of high RPMs and high decibel levels. Fans generate noise within a CPU cooler for a few primary reasons: bearing type, fan size and RPM, and rattling within the cage. Of these, only rattling is unique to CPU coolers -- the rest are covered by our fan bearings overview / guide.

Rattling is normally a result of poor fan positioning and design. The Tuniq Tower 120 Extreme cooler we reviewed had rubberized screws to prevent rattling, Zalman uses a centralized fan that is detached from the fins (theoretically the quietest design), and other coolers use a mix of brackets and mounting mechanisms that may or may not vibrate under load. The centered fan design is interesting -- by placing the fan directly over the CPU and surrounding it with the fins (but not touching the two), the fan still pulls air cleanly through the entire unit without the added fun of rattling the cage. Aside from isolated fans, it's good to look for units with rubberized mounting plates/screws or otherwise stable brackets that can better withstand high RPMs.

More fans are always going to be beneficial for cooling, of course, as they'll pull more air into the system and will more evenly cool the fins, but they aren't necessary; we saw a 3°C decrease in temperature between the NZXT Respire T40 with one fan and the T40 with two fans -- so it is noticeable -- but the noise level will obviously increase as a result (though you could arguably just run them at lower RPMs). Decibels add on a logarithmic scale: x fans of the same individual level raise the total by 10*log10(x) dB, so two identical fans are about 3 dB louder than one. Adding more fans will therefore always increase noise marginally (a short sketch of this math appears near the end of the article).

Top Things to Look For in a CPU Cooler

Now that we have a thorough understanding of how coolers work, let's recap the most important design elements to look for; we're assuming a standard performance / gaming-grade build for this article's purposes:

- Surface area. The larger the heatsink, the more readily it can dissipate heat. On this note, a larger base plate surface area means better transfer of heat from the CPU to the pipes and more room for mounting error.
- Materials. Copper has about twice the thermal conductivity of aluminum and simply makes a better heatsink.
- Number of heatpipes and their diameter. As a general rule, more heatpipes means better cooling. Additional vapor chambers may aid in heat diffusion for some units, but are not as common as traditional heatpipes.
- Fan positioning and number of fans. More fans means better cooling, but potentially more noise. Find a balance between performance and noise that works for you; remember that you can always decrease the RPMs across the fans to neutralize some of the noise.

And there's one more thing: aesthetics. It's silly, but if we're honest, a lot of the mid-range to high-end heatsinks will offer almost identical cooling performance. For performance and enthusiast applications, mounting an ugly piece of copper to your otherwise beautiful rig isn't preferable.
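Returning briefly to the decibel arithmetic from the noise section above, here is a minimal sketch with hypothetical fan levels (not measured values from any cooler reviewed here):

```python
import math

def combined_noise_db(single_fan_db, fan_count):
    """Combined level of several identical noise sources.

    n identical sources raise the level by 10 * log10(n) dB over one source.
    """
    return single_fan_db + 10.0 * math.log10(fan_count)

print(combined_noise_db(20.0, 2))  # ~23.0 dBA: doubling the fans adds ~3 dB
print(combined_noise_db(20.0, 4))  # ~26.0 dBA: four fans add ~6 dB over one
```

In practice you can trade that increase away by running more fans at lower RPMs, as noted above.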
Given the negligible performance difference between coolers, pick the one that you think fits your rig's personality the best. Let us know if you are debating between two heatsinks and need some help!

- Steve "Lelldorianx" Burke, with thanks to Edmund Li of Zalman for insight. Special thanks to Tim "Space_man" Martin for his physics engineering insight.
RNAi: Crossing Platforms, Breaking Boundaries

A new lab at New York Univ. Medical Center is likely the first of many to offer open-access screening across four enhanced RNA libraries.

A fluorescence microscopy image of a typical 384-well sample using a Cellomics Arrayscan VTI automated high-throughput microscope. (Image: Chi Yun, RNAi Core Facility)

But so far these breakthroughs are just window dressing for what is taking place in the background. In order to truly understand the complex mechanics of life and disease, scientists must deal with the scale of DNA, with its billions of base pairs. One of the best ways to unlock these secrets is to observe and experiment with gene silencing. First observed in plants in 1990, the technique needed another decade of R&D before coming into its own, after the discovery that double-stranded RNA causes gene silencing. The human genome was sequenced in 2000, and by 2002, major companies such as Merck had set up RNA interference (RNAi) laboratories for basic research. The National Institutes of Health was also setting up dedicated labs, including the Chemical Genomics Center, Rockville, Md., in 2004.

In 2004, RNAi reached academia with the founding of Harvard Univ. Medical Center's Drosophila RNAi Screening Center (DRSC), which for the first time allowed academics the chance to use gene interference to determine cell functions. Meanwhile, bioscience picked up even more speed. In the last few years, the value of cross-platform (or cross-species) RNAi screening has emerged, as has interest in functionalized siRNA libraries of typically useful animal models, such as mice, C. elegans, Drosophila flies, and humans.

At the end of 2008, New York Univ. (NYU) Medical Center responded with the RNAi Core Facility, a new laboratory that offers relatively affordable RNAi screens and access to all four major gene libraries. The brainchild of Ramanuj Dasgupta, assistant professor in the Depts. of Pharmacology and Cancer Institute (Research), who studied gene function at DRSC, it is a first for RNAi screening. Open to all academics, the access extends to the screening results themselves, which eventually become available to the public.

The PerkinElmer Janus MDT automated workstation is equipped with an expanded platform to reformat 96- and 384-well plates, as well as a modular dispensing arm. (Image: Chi Yun, RNAi Core Facility)

Messenger RNA was first discovered in 1960, but another 30 years were needed before actual gene silencing was first observed in plants. In short, RNA interference occurs when small pieces of RNA silence the activity of specific genes. Genes use specialized enzymes to transcribe (or copy) a strand of a gene's DNA. The transcription forms messenger RNA (mRNA). The mRNA is moved from the nucleus to another area of the cell, where it is translated by ribosomes into a specific protein.

The actual process of gene silencing has multiple steps. First, RNA molecules bind to proteins that slice them into small fragments. The fragments then bind easily to other proteins collectively known as RNA-induced silencing complexes (RISCs). This process removes one of the RNA's two strands, leaving a chain that is able to connect to naturally occurring microRNA segments. The RISC proteins cut down the mRNA to fragments that cannot be translated. This means the gene that produced (or copied) the mRNA segment is suppressed, or silenced. Cellular processes ranging from disease protection to growth and death are regulated by RNAi activity.
This natural process is harnessed at the RNAi Core Facility; its convenience has made it a favored way to unlock the complex processes of gene networks.

RNA's value emerges

The role of RNA screening is becoming increasingly important, says Dasgupta, because working with siRNA is experimentally easy and reliable. Director of the RNAi Core Facility, Dasgupta joined NYU primarily to help boost the role of RNAi in the university's research. The school recognized the value of being able to attract potentially valuable research, as well as having the ability to perform advanced R&D on its own. His past training in both mouse and fly RNA screening helped Dasgupta when it came time to build the laboratory.

"The set up is something unique for NYU in the sense that we are the only ones who offer multi-species libraries under one roof," says Dasgupta, who recognized the value of cross-platform studies after a major validation occurred this way at the DRSC laboratory. Over time, he says, researchers have realized that whole-genome libraries are very good for developing unbiased screens. "To test for cell function you want cross-species validation. The (RNAi Core) Facility allows, for example, researchers to do a screen for Drosophila or mouse or human genome and then look for functions essentially by cherry-picking" promising samples elsewhere, says Dasgupta.

The ability to test between species has become attractive for researchers hoping to strengthen their conclusions through corroboration, and has also spurred collaboration between labs. "Everyone is interested in looking for cell function. You could also perhaps go to a fly lab to collaborate, and they could build genetics and do prior chemistry," says Dasgupta, then perform final screens at NYU.

The Arrayscan VTI uses a Catalyst Express robotic arm with 45-plate capacity to feed plates into the microscope. (Image: Chi Yun, RNAi Core Facility)

RNAi Core Facility

Located in the NYU Cancer Institute's Tisch Hospital building and open for just a few months, the RNAi Core Facility was built with the support of the Kimmel Stem Cell Center, the NYU Cancer Institute, and scientists at DRSC. The RNAi Core Facility hardly resembles an RNAi screening lab one might find at Pfizer, GlaxoSmithKline, or Merck. Yes, it features fluid-handling gear, automated high-resolution fluorescence microscopes, extensive RNA libraries, and robotic sample-handling equipment. But its automation does not approach what is available to the big pharmaceutical firms, which may have 11 or 12 of the same high-throughput microscope that performs the automated 384-well plate screens at NYU. For now, such equipment is far out of the budget for this $1.5 million lab.

But that doesn't hurt the mission, which is decidedly egalitarian. According to the lab's assistant director, Chi Yun, a cellular biologist turned RNA expert, the RNAi Core Facility represents an early attempt to bring RNA screening capacity to other academics -- researchers who, due to budget constraints and lack of access, might otherwise never conduct high-throughput screening on attractive genomic targets. "When (Dasgupta) was at Harvard, he was able to see how things are set up there," says Yun, which helped them set up an efficient workflow in a limited space. The lab is small -- just about 1,200 ft² -- and has just two full-time employees, including Yun. But it's the only open-access lab offering four RNA libraries: human, mouse, Drosophila, and C. elegans.
As she describes it, the lab is not "fee for service." Instead, researchers pay a relatively low rate -- often less than $10,000 -- to gain access to the libraries. After training and optimization of the samples, the researchers are able to use the equipment to conduct their screening. In return, the lab retains the results of their findings, and after two years that data reverts to the public domain.

The storage room of NYU Medical Center's Tisch Building is where freezers hold the genomic libraries for the four major species used for RNA research: human, mouse, Drosophila, and C. elegans. (Image: Chi Yun, RNAi Core Facility)

A brief guide to RNAi screening

The screening process is a dynamic one that involves transfer of data between the researcher's home institution or lab and the RNAi Core Facility. There are two major types of screens: one with a plate reader and one sourced from high-content imaging. The plate reader-based assay allows the user to detect fluorescence intensity or polarization and luminescence. The quantitative well information achieved can be analyzed in about a month and requires reagents including luciferase and stains. This screen produces small data files.

High-content imaging is a more sophisticated analysis process that combines high-resolution microscopy with software-enabled image analysis. Both quantitative and qualitative data are generated this way, and while the data files produced by such a method are large -- on the order of gigabytes -- they are easily analyzed by a variety of applications.

"The actual screen time is only one or two months, but the preparation time is the big time requirement. The samples often need to be optimized," says Yun. Typically, small-laboratory researchers use 96-well plate formats, which feature larger volumes than the 5 µl used by the Cellomics Arrayscan microscope. As a result, prospective screeners are given a single test plate to ensure optimization of the screen. If results confirm the feasibility of the screen, then the screener may start a pilot screen of six 384-well plates (the first three screening plates in duplicate) from the facility's library. If the study appears feasible, then the researchers must be trained on the laboratory's equipment. The process for screening is as automated as is possible with a small budget and a tight workspace, but there are certain pieces of equipment that may be new to guest researchers. Yun and research associate Shauna Katz assist screeners.

Prior to completing an online application at the facility's website, researchers are asked to prepare high-quality optimization data in a 384-well format. This can be a challenge for smaller labs accustomed to 96 wells, but 384 is necessary for maintaining reasonable throughput. Users are responsible for the cost of assay consumables and reagents. The total fee may range from $3,000 plus supplies for a whole Drosophila genome screen for an NYU researcher to more than $10,000 for a whole human genome screen for a researcher outside the NYU system. Still, it's nowhere near the $30,000 to $40,000 that many researchers think they'll have to pay, says Dasgupta.

"We maintain an open data policy," says Yun. Two years after the screen is complete, the data held by the RNAi facility is open to the public. This gives the researcher a chance to make sense of the results and publish, while at the same time helping to stimulate further studies.

The first of many to come

On the face of it, an RNAi laboratory is not a good business proposition.
Apart from the actual screening, the "business" model has certain problems. First, says Dasgupta, "it's not a retail operation. There's no money in screens. The initial investment can never be recovered." This has been a limiting factor for RNA laboratories, but the upside of discovering cellular pathways and functions outweighs the costs. The hope is that eventually this data will result in real therapies. As for paying for these screens, an oblique approach works best in the academic world. "If the screens result in one or two grants," says Dasgupta, "that's enough to recover the investment and possibly more. The only way it is economically viable is through indirect payment of the costs."

The other problems are logistical. Optimizing samples for a 384-well plate, an otherwise efficient and environmentally friendly format, can be difficult. Libraries take up space, and capital costs for new equipment can be high.

At the RNAi Core Facility, two Drosophila fly screens have been completed and four screens are in the pipeline. Dasgupta has also performed a microRNA screen using the human genome library. And he is collaborating with Mt. Sinai Medical Center to do a mouse screen. But few outside researchers have so far made use of the facility. "It's been a challenge to advertise a place like this," says Dasgupta, who remembers the Boston area as a place where "everybody and their cousin is doing screens." Part of the problem, he says, is that people aren't used to the availability of the services, especially in the New York City area. "People are sometimes intimidated and wonder if they can optimize for high-throughput, and that it's much more expensive than it actually is," says Dasgupta. "The amount of data you generate for that $10-14,000 is a really good investment for the long term because you get lots of hits and if it's a good screen you have a lot to work with."

Published in R&D magazine, Vol. 51, No. 1, February 2009, pp. 22-26.
Lucas123 writes "Engineers at Disney Research in Pittsburgh have developed an algorithm that creates the illusion of a 3D surface on touch screens. Using electrical impulses, the touch screen technology offers the sensation of ridges, edges, protrusions and bumps, and any combination of those textures. While Disney is not alone in developing tactile-response touchscreens, its researchers said the traditional approach has been to use a library of 'canned effects' that are played back when someone touches a screen. Disney's algorithm doesn't just play back one or two responses; it offers a set of controls that make it possible to tune tactile effects to a specific visual artifact on the fly. 'Our algorithm is concise, light and easily applicable on static images and video streams,' the researchers stated." This summer Disney unveiled AIREAL, a system designed to give tactile sensations to people using motion control devices.

An anonymous reader writes "A day after TEPCO workers mistakenly turned off cooling pumps serving the spent fuel pool at reactor #4 at the crippled nuclear plant comes a new accident — six workers apparently removed the wrong pipe from a primary filtration system and were doused with highly radioactive water. They were wearing protection, yet such continuing mishaps and 'small mistakes' are becoming a pattern at the facility."

ananyo writes "Fusion unleashes vast amounts of energy that might one day be used to power giant electrical grids. But the laboratory systems that seem most promising produce radiation in the form of fast-moving neutrons, and these present a health hazard that requires heavy shielding and even degrades the walls of the fusion reactor. Physicists have now produced fusion at an accelerated rate in the laboratory without generating harmful neutrons (abstract). A team led by Christine Labaune, research director of the CNRS Laboratory for the Use of Intense Lasers at the Ecole Polytechnique in Palaiseau, France, used a two-laser system to fuse protons and boron-11 nuclei. One laser created a short-lived plasma, or highly ionized gas, of boron nuclei by heating boron atoms; the other generated a beam of protons that smashed into the boron nuclei, releasing slow-moving helium particles but no neutrons. Previous laser experiments that generated boron fusion aimed the laser at a boron target to initiate the reaction. In the new experiment, the laser-generated proton beam produces a tenfold increase in boron fusion because protons and boron nuclei are instead collided together directly."

An anonymous reader writes "More than 90% of nuclear regulators are being sent home due to the Federal Government shutdown, as the agency announced today that it was out of funds. Without Congressional appropriations, the nuclear watchdog closes its doors for what appears to be the first time in U.S. history. CNN reports that while a skeleton crew remains to monitor the nation's 100 nuclear reactors, regulatory efforts to prevent a Fukushima-like incident in the United States have ceased."

iONiUM writes "Samsung today unveiled the Galaxy Round phone with a curved 5.7" display. It comes with a hefty $1,000 USD price tag. This is a follow-up to the 55" curved TVs it began selling in June, and is most likely an intermediate form in the development of foldable phones. Considering the recent LG announcement of mass OLED flexible screen production, it seems we are getting close to flexible phones. One question I wonder about: will Apple follow suit?
So far there has been no indication they are even attempting flexible/bendable screens."

First time accepted submitter eekee writes "The targets are high, but so is the goal: releasing Verilog source code for a GPU implementation. The source will be open source, LGPL-licensed, and suitable for loading onto an FPGA. The first target is a 2D GPU with a PCI interface; perhaps not terribly interesting in itself, but the first stretch goal is much more exciting: full OpenGL and Direct3D graphics." Unlike the Open Graphics Project, this effort is starting from a working 2D accelerator and a mostly working 3D accelerator that clones the features of the Number Nine Ticket to Ride hardware. If they get a meelion bucks they'll overhaul the chip to support something other than PCI (although you can bridge between PCI and PCIe) and implement a modern programmable rather than fixed-function chip. Also unlike OGP, they do not appear interested in producing hardware, instead focusing entirely on the core itself for use in FPGAs (anyone want to dust off the OGD1 design?).

beckman101 writes "Two years ago the Gameduino brought retro-style gaming to the Arduino. This week its successor launched on Kickstarter, still fully open source but with a video that shows it running some contemporary-looking demos. Plus, it has a touch screen and a pretty decent 3-axis accelerometer. Farewell to the retro?"

crookedvulture writes "The first reviews of AMD's Radeon R7 and R9 graphics cards have hit the web, revealing cards based on the same GPU technology used in the existing HD 7000 series. The R9 280X is basically a tweaked variant of the Radeon HD 7970 GHz priced at $300 instead of $400, while the R9 270X is a revised version of the Radeon HD 7870 for $200. Thanks largely to lower prices, the R9 models compare favorably to rival GeForce offerings, even if there's nothing exciting going on at the chip level. There's more intrigue with the Radeon R7 260X, which shares the same GPU silicon as the HD 7790 for only $140. It turns out that graphics chip has some secret functionality that's been exposed by the R7 260X, including advanced shaders, simplified multimonitor support, and a TrueAudio DSP block dedicated to audio processing. AMD's current drivers support the shaders and multimonitor mojo in the 7790 right now, and a future update promises to unlock the DSP. The R7 260X isn't nearly as appealing as the R9 cards, though. It's slower overall not only than GeForce 650 Ti Boost cards from Nvidia but also than AMD's own Radeon HD 7850 1GB. We're still waiting on the Radeon R9 290X, which will be the first graphics card based on AMD's next-gen Hawaii GPU." More reviews are available from AnandTech, Hexus, Hot Hardware, and PC Perspective.

TX/RX Labs is not just big — it's busy, and booked. Unlike some spaces we've highlighted here before (like Seattle's Metrix:CreateSpace and Brooklyn's GenSpace), TX/RX Labs has room and year-round sunshine enough to contemplate putting a multi-kilowatt solar array in the backyard. Besides an array of CNC machines, 3-D printers, and wood- and metal-working equipment, TX/RX has workbenches available for members to rent. (These are serious workspaces, made in-house of poured concrete and welded steel tubing.) There's also a classroom full of donated workstations, lounge space, a small collection of old (but working) military trucks, and a kitchen big enough for their Pancake Science Sunday breakfasts. Labs member Steve Cameron showed me around. You saw Part One of his tour last week. Today's video is Part Two.
dcblogs writes "Gartner says new technologies are decreasing jobs. In the industrial revolution — and revolutions since — there was an invigoration of jobs. For instance, assembly lines for cars led to a vast infrastructure that could support mass production, giving rise to everything from car dealers to road building and utility expansion into new suburban areas. But the 'digital industrial revolution' is not following the same path. 'What we're seeing is a decline in the overall number of people required to do a job,' said Daryl Plummer, a Gartner analyst, at the research firm's Symposium ITxpo. Plummer points to a company like Kodak, which once employed 130,000, versus Instagram's 13. The analyst believes social unrest movements, similar to Occupy Wall Street, will emerge again by 2014 as the job creation problem deepens." Isn't a "decline in the overall number of people required to do a job" precisely what assembly lines effect, even if some job categories as a result require fewer humans? We recently posted a contrary analysis arguing that the Luddites are wrong.

linuxwrangler writes "The NSA's new Utah data center has been suffering numerous power surges that have caused as much as $100,000 in damage per event. The root cause is 'not yet sufficiently understood' but is suspected to relate to the site's 'inability to simultaneously run computers and keep them cool.' Frustrating the analysis and repair are 'incomplete information about the design of the electrical system' and the fact that 'regular quality controls in design and construction were bypassed in an effort to fast track the Utah project.'" Ars Technica has a short article, too, as does ITworld.

judgecorp writes "The millionth Raspberry Pi microcomputer has been made in the Foundation's Welsh factory. Total sales so far are 1.75 million, including the initial stock made in China." (Do you have one? If so, what are you using it for?)

An anonymous reader writes "NVIDIA was caught removing features from their Linux driver, and days later Linux developers have caught and confirmed AMD imposing artificial limitations on which DVI-to-HDMI adapters their graphics drivers will support. Over the years AMD has quietly been adding an extra EEPROM chip to the DVI-to-HDMI adapters bundled with Radeon HD graphics cards. Only when these identified adapters are detected via checks in the Windows and Linux Catalyst driver is HDMI audio enabled. If a third-party DVI-to-HDMI adapter is used, HDMI audio support is disabled by the Catalyst driver. Open-source Linux developers have found this to be a self-imposed limitation and that the open-source AMD Linux driver will work fine with any DVI-to-HDMI adapter."

Bismillah writes "University of Bristol researchers have come up with a way to make touch screens more touchy-feely, so to speak, using ultrasound waves to produce haptic feedback. You don't even need to touch the screen, as the UltraHaptics waves can be felt mid-air. Very Minority Report, but cooler." The researchers built an ultrasonic transducer grid behind an acoustically transparent display. Using acoustic modeling of a volume above the screen, they can create multiple movable control points with varying properties. A Leap Motion controller was used to detect the hand movements.
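To sketch the idea behind those mid-air control points: a focal point is produced by firing each transducer with a delay proportional to its distance from the target, so all wavefronts arrive in phase. The Python below is a back-of-the-envelope illustration of that geometry, not the UltraHaptics team's code; the grid layout and names are invented for the example.

import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focus_delays(transducer_xy, focal_point, grid_z=0.0):
    """Per-transducer firing delays that focus a flat grid on one point.
    Elements farther from the focus fire first; the nearest fires last."""
    fx, fy, fz = focal_point
    dists = [math.dist((x, y, grid_z), (fx, fy, fz)) for x, y in transducer_xy]
    farthest = max(dists)
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# A 2 x 2 cm grid of four elements, focal point 10 cm above its centre:
grid = [(0.0, 0.0), (0.02, 0.0), (0.0, 0.02), (0.02, 0.02)]
print(focus_delays(grid, (0.01, 0.01, 0.10)))

A real system solves for the phases of many transducers against several simultaneous focal points, and modulates them to make the points feel distinct, but this single-focus delay calculation is the core geometric trick.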
mysqlbytes writes "The BBC is reporting that the National Ignition Facility (NIF), based at Livermore in California, has succeeded in breaking even — 'During an experiment in late September, the amount of energy released through the fusion reaction exceeded the amount of energy being absorbed by the fuel — the first time this had been achieved at any fusion facility in the world.'"

sciencehabit writes "A do-it-yourself neuroscience experiment that allows students to create their own 'cyborg' insects is sparking controversy among scientists and ethicists. RoboRoach #12 is a real cockroach that a company called Backyard Brains ships to school students. The students fit the insect with a tiny backpack, which contains electrodes that feed into its antennae and receive signals by remote control — via the Bluetooth signals emitted by smartphones. A simple swipe of an iPhone can turn the insect left or right. Though some scientists say the small cyborg is a good educational tool, others say it's turning kids into psychopaths." Fitting the backpack requires poking a hole in the roach's thorax and clipping its antennae to insert electrodes.

Zothecula writes "LG today announced that it is to start mass-producing flexible OLED display panels for smartphones. The company says that its technology uses plastic substrates rather than glass, and claims that a protective film on the back of the display makes it 'unbreakable' as well as bendable."

Hugh Pickens DOT Com writes "Claudia Assis writes that the US will end 2013 as the world's largest producer of petroleum and natural gas, surpassing Russia and Saudi Arabia, with the Energy Information Administration estimating that combined US petroleum and gas production this year will hit 50 quadrillion British thermal units, or 25 million barrels of oil equivalent a day, outproducing Russia by 5 quadrillion Btu. Most of the new oil is coming from the western states. Oil production in Texas has more than doubled since 2010. In North Dakota, it has tripled, and Oklahoma, New Mexico, Wyoming, Colorado and Utah have also shown steep rises in oil production over the same three years, according to EIA data. Tapping shale rock for oil and gas has fueled the US boom, while Russia has struggled to keep up its output. 'This is a remarkable turn of events,' says Adam Sieminski, head of the US Energy Information Administration. 'This is a new era of thinking about market conditions, and opportunities created by these conditions, that you wouldn't in a million years have dreamed about.' But even optimists in the US concede that the shale boom's longevity could hinge on commodity prices, government regulations and public support, the last of which could be problematic. A poll last month by the Pew Research Center for the People and the Press found that opposition to increased use of fracking rose to 49% from 38% in the previous six months. 'It is not a supply question anymore,' says Ken Hersh. 'It is about demand and the cost of production. Those are the two drivers.'"

Features of Google's next Nexus phone have finally been outed, along with confirmation that the phone will be built by LG, as a result of a leaked service manual draft; here are some of the details as described at TechCrunch: "The new Nexus will likely be available in 16 or 32GB variants, and will feature an LTE radio and an 8-megapixel rear camera with optical image stabilization (there's no mention of that crazy Nikon tech, though).
NFC, wireless charging, and that lovely little notification light are back, too, but don't expect a huge boost in longevity — it's going to pack a sealed 2,300mAh battery, up slightly from the 2,100mAh cell that powered last year's Nexus 4. That spec sheet should sound familiar to people who took notice of what happened with the Nexus 4. Just as that device was built from the foundation laid by the LG Optimus G, the Nexus 5 (or whatever it's going to be called) seems like a mildly revamped version of LG's G2."

Despite the number of companies shipping or promising them, smart watches aren't the easiest sell, and Ars Technica's review of Samsung's entry illustrates why. Despite all the processing power inside, the watch is "sluggish" even for the kind of at-a-glance convenience features that are touted as the reason to have a watch tethered to an (even smarter) phone, and for the most part it seems to weakly imitate features already found on that phone. There are a few features called out as cool, like a media control app, but for the most part reviewer Ron Amadeo finds little compelling in the Galaxy Gear.
Category — Small Space

Urban Agriculture with Job Ebenezer – Part 1: Wading Pool Gardens

Dr. Job Ebenezer, president of the organization Technology for the Poor, explains his vision for the spread of urban agriculture. In 1993, Dr. Ebenezer, former Director of Environmental Stewardship and Hunger Education at the Evangelical Lutheran Church in America (ELCA), established a container garden on the roof of the parking garage of the ELCA offices in Chicago. The hope was that the rooftop garden would serve as a role model for creative use of urban space throughout the country. Dr. Ebenezer proved the feasibility of growing vegetables in plastic wading pools, used tires and feed sacks. January 12, 2011

One man's parking garage is the same man's garden — where he's proving it's possible to grow a significant portion of his own food at home, even in a San Francisco apartment building! By Jon Brooks, August 4, 2010. It started three years ago with a single tomato plant. Today, he and his wife Ellen estimate that they grow 25-30 percent of their total food intake. Current crops include tomatoes, peas, blackberries, raspberries, basil, carrots, mushrooms and several types of lettuce, almost all cultivated in nine half-barrels of soil, tucked away in a corner of their San Francisco apartment's parking garage. He is also growing sprouts in a couple of jars on his kitchen table. August 18, 2010

New Book – Manual of Low/No-Space Agriculture – Family Business Gardens, by Dr. Thilak T. Ranasinghe, Former Director of Agriculture, Western Province, Sri Lanka. Review in the Sunday Times – Sri Lanka, April 25, 2010. It is predicted that the world population will rise to ten billion by 2050. At present, some 15 million square kilometres, or around one-tenth of the total land area of the earth, is used for farming. In October 2009, scientists at the Potsdam Institute for Climate Impact Research (PIK) in Germany, along with their colleagues from Sweden, noted that global agricultural production could increase by around one-fifth through better management practices, especially water management. April 25, 2010

2007 – Winner of the 2nd International Competition for Sustainable Housing, by Knafo Klimor Architects and Town Planners, Israel. Excerpts from Living Steel's competition design website: Agro-housing, the winning design for construction in China, blends urban and rural living by creating vertical greenhouse space within high-rise apartments. Designed by Knafo Klimor Architects, the Agro-housing concept allows tenants to produce their own food, reducing commuting needs and providing a green neighbourhood. Knafo Klimor Architects developed this concept with concern for predictions that 50% of China's one billion people will live in its cities, a trend mirrored in many developing countries. The architects observe that massive urbanisation displaces communities, dissipating existing traditions and heritage, as well as placing a strain on energy resources and infrastructure. December 23, 2009

Photo of fire escape gardener: "When I was planning my fire escape garden I planted cherry tomatoes thinking the plant would be small and perfect for the small space — not so much." By Mike Lieberman (Canarsiebk). My goal for this site is to inspire you to start gardening and growing your own food. If I'm doing it, why can't you? Don't have the space? Check out my fire escape garden.
Not much room there, but I'm getting it done. August 28, 2009

Ginza rice farm. Photo and text by Jared Braiterman, PhD. Tokyo Green Space examines the potential for micro-green spaces to transform the world's largest city into an urban forest that supports biodiversity, the environment and human community. On a side street in Ginza, I noticed a rice farm and met Ginza Farm's CEO Iimura Kazuki and his assistant, who were tending the rice and two cute ducklings. Shop clerks and construction workers stopped by to admire the rice in its mid-summer glory. August 12, 2009

Grow your own food in your apartment year round. Photo: Adrian Vecchio (http://www.adrianvecchio.com). Window Farms are vertical, hydroponic, modular, low-energy, high-yield edible window gardens built using low-impact or recycled local materials. In February 2009, through a residency at Eyebeam, Britta Riley and Rebecca Bray began to build and test the first Window Farms prototype. Growing food inside NY apartments is a challenge, but within reach. The foundational knowledge base is emerging through working with agricultural, architectural and other specialists, collecting sensor data, and reinterpreting hydroponics research conducted by NASA scientists and marijuana farmers. August 7, 2009

Japanese Government to boost indoor cultivation – Housed vegetable growing will 'create jobs, aid food security'. Tokyo, Japan: a man tends a tomato plant in Pasona O2, an artificially lit, computer-controlled greenhouse built in the basement of a high-rise building in Tokyo's business district, on February 15, 2005. Pasona Inc., a human resources service company, built the greenhouse to introduce the pleasure of agriculture and to train aspiring farmers in the city. The basement space was once used as a vault by Resona Bank Limited. Photo by Junko Kimura. From The Yomiuri Shimbun, Apr. 10, 2009: The government is set to launch full-scale efforts to promote indoor agricultural facilities to ensure stable cultivation of fruits and vegetables, government officials said. As part of a three-year plan to boost the number of indoor growing facilities about fourfold, to 150, and raise production about fivefold, the government will offer incentives including low-interest financing and a capital investment tax credit, the officials said. April 10, 2009

City Farmer's Keyhole Garden, from Michael Levenston on Vimeo. See the HD version by clicking through on the video to Vimeo, or the alternative HD version on YouTube. James Scale of Celtic Stonescaping is building our keyhole garden for us out of local basalt rock. The video shows progress by day two, after volunteers hauled six tons of rock and gravel into our back Youth Garden yesterday. What a contrast: sun and mild one day, snow and cold the next; well, it is December, and the rest of the country is minus 30 degrees. December 12, 2008

Terry Fujimoto, plant sciences professor at California State Polytechnic University, Pomona, checks his students' hydroponics projects inside a greenhouse on the campus in Pomona, Calif., on Monday, Nov. 17, 2008.
Fujimoto's program is at the forefront of an effort to use hydroponics — a method of growing plants in water instead of soil — to bring farming into the urban areas where consumers are concentrated. (AP Photo/Damian Dovarganes) By Jacob Adelman, Associated Press Writer, Nov 21, 2008. Terry Fujimoto sees the future of agriculture in the exposed roots of the leafy greens he and his students grow in thin streams of water at a campus greenhouse. The program run by the California State Polytechnic University agriculture professor is part of a growing effort to use hydroponics — a method of cultivating plants in water instead of soil — to bring farming into cities, where consumers are concentrated. November 27, 2008

PowerPoint presentation by Dr. Thilak T. Ranasinghe. (See next page.) Sri Lanka National Agriculture Policy Documents: Statement 29 (2003): Implement a special urban agriculture promotion program designed to ensure supply of home consumption needs and environmental protection. Statement 17 (2007): 17.1 Promote home-gardening and urban agriculture to enhance household nutrition and income. 17.2 Promote women's participation in home-gardening. November 15, 2008

The Urban Potato: Its Time Has Come. By Jac Smit, October 29, 2008. From the desk of Jac Smit: A few years ago I stood on the roof of a hospital in Port-au-Prince, Haiti. The surface was half straw and other organic trash, and half potato foliage. A week later I visited a friend in Washington DC. He took me out to his porch, and there was a wire-bound bale of hay with potato foliage on three sides. I soon learned that these two cases were examples of "Lazy Man Farming". Lazy Man was invented in Germany in the 19th century. Its most cited practice is roadside cultivation in Newfoundland, Canada, where farmers collect seaweed, offload it on the side of the road, and insert seedlings. October 30, 2008

How does your backyard garden grow? By David Colker, Los Angeles Times, September 14, 2008. Marta Teegen, who owns Homegrown, a Los Angeles-based garden consulting company, will come to your house and install a vegetable garden with your choice of plants. She generally puts in about four 4-by-6-foot raised beds. The average cost: $2,000. At that rate, and because this is Los Angeles, it's no surprise that several of her clients are celebrities (whom she declined to name) with private chefs. September 14, 2008

See a short documentary on guerrilla gardening starring Richard Reynolds, the author of "On Guerrilla Gardening." The piece shows the process, preparation and troops needed to go out on a gardening mission. From Current TV. August 19, 2008

Up five floors at the YWCA in downtown Vancouver, amongst skyscrapers, is a spectacular rooftop food garden. Our two videos feature an interview with Ted Cathcart, Operations Manager and Rooftop Food Gardener at the YWCA. Email contact: email@example.com August 8, 2008
Collapse of the Concert of Europe

To What Extent Can the Collapse of the Concert of Europe Be Attributed to the Crimean War (1853-1856)?

The collapse of the Concert of Europe can be attributed to the Crimean War only to a limited extent, as there were many other factors which acted to undermine the Concert, causing instability and disputes amongst the nations involved. Although the Crimean War can be identified as a major instance in which participating countries disregarded their policies of peace in pursuit of national interest, it was not as significant to the collapse as earlier factors which had essentially rendered the Concert obsolete. The rise of European nationalism and the conflicting ideologies and differing aims of the countries involved created the unstable conditions for both the deterioration of the Concert and the outbreak of the Crimean War. The Crimean War can therefore be viewed as a final trigger, but not the sole instigator, of collapse.

The nationalist movement that emerged at the end of the 18th century, and which was asserting a strong hold among many European countries, acted to undermine the Concert by threatening stability throughout Europe. In particular, the revolutionary upheavals of 1848 seriously weakened the Concert by demanding that frontiers established at the Congress of Vienna be reviewed. In the Hungarian revolution of 1848-49, riots on 15 March 1848 by Magyar nationalists in Pest-Buda (now Budapest, the capital of Hungary) demanding Hungary's political independence from Austria resulted in the resignation of the Austrian Prince Metternich, a key personality in the negotiations at the Congress of Vienna. In a letter to Tsar Nicholas I of Russia in March 1848, a primary source reporting his resignation, Metternich describes the social crises as a 'torrent... no longer within the power of man'. Revolutionary upheavals were also apparent in France, Italy, Germany, Switzerland and Poland. The balance of power maintained in Europe was shifting, and as expressed by Metternich, the Concert of Europe had little influence over it. This largely undermined the Concert's objectives: as stated in Article VI of the 1815 Quadruple Alliance between Britain, Austria, Prussia and Russia, which formed the basis of the Concert, it was the responsibility of the 'High Contracting Powers... to renew at fixed intervals... meetings consecrated to great common objects and the examination of such measures as at each one of these epochs shall be judged most salutary for the peace and prosperity of the nations and for the maintenance of the peace of Europe'. As peace was not being maintained, the Concert was, even at this point, somewhat defunct. Furthermore, this movement acted as an important impetus for the political unification of Italy in 1861 and Germany in 1871. Owing to the development of nationalism, Europe was geographically altered as countries gained their independence. Consequently, European diplomacy was also altered, weakening the Concert, especially as conflict arose between the countries involved regarding intervention against revolution.

A fundamental division amongst members of the Concert of Europe, caused by conflicting ideological perspectives regarding intervention against revolutionary movements, acted to undermine the relationship between the countries.
A foremost concern for the preservation of peace was the manner of dealing with revolutions and constitutional movements, as many statesmen feared the ideology of the French Revolution was still a powerful influence and as the settlements of the Congress of Vienna had failed to satisfy nationalist and constitutionalist ambitions. Austria and Russia maintained that it was the responsibility and right of the great powers to intervene and impose their collective will on states threatened by internal rebellion, with the Austrian diplomat Metternich stressing that revolution was a 'terrible social catastrophe' and believing that 'only order produces equilibrium'. However, Britain did not wish to intervene in internal disputes and instead pursued a less reactionary policy. Britain's foreign secretaries, Castlereagh and, later, Canning, acted to distance Britain from the policies of the continental powers, with Canning clearly stating that 'England is under no obligation to interfere, or assist in interfering, in the internal affairs of independent states'. Thus Britain disputed intervention at the Congress of Troppau in 1820, a response to revolts in Spain, Portugal, Piedmont and Naples, and at the Congress of Laibach in 1821, where Austria and Russia had prepared to mobilise soldiers against Italian revolts. The tension which resulted from these disputes led to Britain's increased isolation from Austria, Prussia and Russia, while France maintained relations with both sides of the divide. Although a final congress was held at St Petersburg in 1825 in an attempt to resolve these disputes, only Austria, Prussia and Russia actively participated, revealing the large extent to which the Concert had been weakened.

Despite the assertion that countries within the Concert were acting in the greater interest of all of Europe, the individual interests of countries revealed cracks in the system as world economies became geopolitical, with a focus on imperialism, colonialism and economic rivalry. Britain's particular opposition to intervention in the Latin American revolutions was based on the grounds that Britain would forgo trade profits if the rebellions against Spain ended there; it thus refused to cooperate out of a national interest that persisted despite the Concert. Geopolitical competition and jealousy between European nations became particularly apparent in their decision to prohibit the entry of foreign warships into the Bosporus and Dardanelles straits. In return for its military assistance against Egypt, Russia was rewarded with advantageous access to these straits by the Ottoman Empire in the Treaty of Unkiar-Skelessi in 1833, which closed the Dardanelles off to 'any foreign vessels of war' other than Russian. This allowed Russian commercial vessels free access into the Mediterranean, a significant benefit for Russian export trade, particularly considering the growing importance of ports such as Odessa in the Ukraine. The Concert was indignant at Russia's access to the straits, and so, in an attempt to inhibit Russian expansionism, the Straits Convention was concluded in 1841, in which it was declared that no country should be in an advantageous position regarding the use of the straits. Furthermore, European nations were competing for raw materials, markets and land in order to fuel growing populations.
Russia was still eager to increase its influence in the Balkans, and to gain control of the straits between the Black Sea and the Mediterranean Sea, then under Turkey's control. Britain and France viewed Russian control of the straits as a threat to their own trade interests, and Austria was uneasy about Russia's growing influence in the Balkans. These tensions regarding control of the Balkans in turn compounded the tension which already existed in the practically obsolete Concert, and ultimately led to the outbreak of the Crimean War, in which the remnants of the Concert expired.

The outbreak of the Crimean War in 1853 signified the downfall of the Concert of Europe, as the great powers engaged in war with one another over matters of national interest. In making an expansionary thrust at the Ottoman Empire, Russia disregarded any pretence of backing an altruistic balance of power. The causes of the Crimean War conflicted with the doctrine of the Concert, as an aspect of the preservation of the balance of power in Europe had been directed at preventing a single nation from gaining control of the Ottoman Empire, which Metternich had intended as a solution to the Eastern Question. As Russia sought to exploit the decaying Ottoman Empire, it in effect undermined the remnants of the Concert and the balance of power, leading France and Britain, with some assistance from Sardinia, to engage in war in order to, ironically, maintain peace in Europe. Effectively, this simply sacrificed the Concert system: the war had the highest casualty rate of any European conflict between the Congress of Vienna in 1815 and the outbreak of World War One in 1914, as more than 450,000 Russians, 95,000 French and 22,000 English lost their lives during the conflict. The renowned historian A.J.P. Taylor states that, in terms of European international relations, the Crimean War destroyed the charade of Russian military dominance in Europe, which led to Russia's diminished influence in European affairs after 1856. Through sheer numbers, the Russian army had been the largest force, and yet it was still defeated by the comparably smaller French and British armies. The internal effects of the war on countries within the Concert of Europe are also highly significant when considering the destruction of the balance of power. Having been made aware of Russia's social and industrial backwardness through its military weakness in the war, the Russian Tsar Alexander II became convinced of the need for reform. Napoleon III of France sought to adopt new foreign policies, which eventually led to conflict with Austria and Prussia in the 1860s. Austria had been isolated, as its ties with Russia were severed by Russia's expectation that, as a result of its assistance in suppressing the 1849 Magyar revolts in Hungary, Austria would remain neutral in the war.

The Treaty of Paris, reached in 1856, permanently altered the balance of power and highlighted the strain which had been placed on it by the Crimean War. At the conclusion of the war, severe penalties were placed on Russia by the other countries, restricting its influence. Russia was made to surrender Bessarabia, situated at the mouth of the Danube, had to forgo claims as protector of Orthodox Christians, and lost influence over the Romanian principalities, which, along with Serbia, were granted greater independence.
Furthermore, the Black Sea was declared neutral, closing it off to all warships, which effectively left Russia with an undefended southern border. This left Russia with little incentive to uphold the goals of the Concert, as it was now at a considerable disadvantage to the other European powers. Upon the conclusion of treaty negotiations the Concert was obsolete, with its goals abandoned and communication at a standstill. Through the Treaty of Paris it became apparent that the Crimean War had disrupted nineteenth-century diplomacy, thereby destroying the decayed Concert of Europe.

Although the Crimean War can be identified as the first major instance in which countries within the Concert of Europe clearly disregarded the policy of peace and turned against one another, it can be held responsible for the Concert's demise only to a limited extent. The rise of nationalism in Europe and the instability caused by the widespread outbreak of revolution created a strong divide amongst countries. Britain's refusal to assist in intervention in particular acted to undermine the Concert's authority and cohesion, making it practically obsolete prior to the outbreak of the Crimean War. The war can therefore be seen as the conclusion of the Concert, but it was by no means the sole cause of collapse.

Bibliography

Fisher, H.A.L., A History of Europe, Volume II, Eyre & Spottiswoode, 1935
Langhorne, Richard, The Collapse of the Concert of Europe: International Politics, 1890-1914, Macmillan, 1981
Lee, Stephen J., Aspects of European History 1789-1980, Routledge, 1982
Medlicott, William N., Bismarck, Gladstone, and the Concert of Europe, Athlone Press, University of London, 1956
Robinson, James Harvey, Readings in European History, Vol. II, Boston: Gin and Co., 1906
Schroeder, Paul W., Austria, Great Britain, and the Crimean War: The Destruction of the European Concert, Cornell University Press, Ithaca NY, 1972
Sweetman, John, The Crimean War, Osprey Publishing Limited, London, United Kingdom, 2001
Taylor, A.J.P., The Origins of the Second World War, Middlesex: Penguin Books, 1961

Footnotes

Robinson, James Harvey, Readings in European History, p. 464
Schroeder, Paul W., Austria, Great Britain, and the Crimean War, p. 211
Lee, Stephen J., Aspects of European History, p. 26
Lee, Stephen J., Aspects of European History, p. 27
Langhorne, Richard, The Collapse of the Concert of Europe: International Politics, 1890-1914, p. 38
Sweetman, John, The Crimean War, p. 42
Taylor, A.J.P., The Origins of the Second World War, Ch. 3, p. 71
Encephalization quotient (EQ), or encephalization level, is a measure of relative brain size defined as the ratio between actual brain mass and predicted brain mass for an animal of a given size, and it is hypothesized to be a rough estimate of the intelligence or cognition of the animal. This is a more refined measurement than the raw brain-to-body mass ratio, as it takes into account allometric effects. The relationship, expressed as a formula, has been developed for mammals, and may not yield relevant results when applied outside this group. In addition to volume, mass or cell count, the energy expenditure of the brain can be compared with that of the rest of the body.

Brain-body size relationship

Brain size usually increases with body size in animals (the two are positively correlated); i.e., large animals usually have larger brains than smaller animals. The relationship is not linear, however. Generally, small mammals have relatively larger brains than big ones. Mice have a brain/body size ratio similar to humans (1/40), while elephants have a comparatively small brain/body ratio (1/560), despite being quite intelligent animals. Several reasons for this trend are possible, one of which is that neural cells have a relatively constant size. Some brain functions, like the brain pathway responsible for a basic task such as drawing breath, are basically similar in a mouse and an elephant. Thus, the same amount of brain matter can govern breathing in a large or a small body. While not all control functions are independent of body size, some are, and hence large animals need comparatively less brain than small animals. This phenomenon has been described by the cephalization factor: E = C·S^(2/3), where E and S are brain and body weights respectively, and C is the cephalization factor. To compensate for this effect, a formula has been devised by plotting the brain weights of various mammals against their body weights and fitting a curve that best matches the data. The cephalization factor and the subsequent encephalization quotient were developed by H.J. Jerison in the late 1960s. The formula for the curve varies, but an empirical fitting to a sample of mammals gives EQ = E / (0.12 · S^(2/3)). As this formula is based on data from mammals, it should be applied to other animals with caution. For some of the other vertebrate classes the power of 3/4 rather than 2/3 is sometimes used, and for many groups of invertebrates the formula may give no meaningful results at all.

EQ and intelligence in mammals

Intelligence in animals is hard to establish, but the larger the brain is relative to the body, the more brain weight may be available for more complex cognitive tasks. The EQ formula, as opposed to simply measuring raw brain weight or the brain-to-body weight ratio, produces a ranking of animals that coincides better with observed complexity of behaviour. The mean EQ for mammals is around 1, with carnivorans, cetaceans and primates above 1, and insectivores and herbivores below. This reflects two major trends. One is that brain matter is extremely costly in terms of the energy needed to sustain it. Animals that live on relatively nutrient-poor diets (plants, insects) have relatively little energy to spare for a large brain, while animals living on energy-rich food (meat, fish, fruit) can grow larger brains. The other factor is the brain power needed to catch food. Carnivores generally need to find and kill their prey, which presumably requires more cognitive power than browsing or grazing.
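Given Jerison's fit above, the quotient itself is a one-line calculation. The following Python sketch is an illustration of that formula; the function name and sample figures are invented for the example.

def encephalization_quotient(brain_mass_g, body_mass_g):
    """Jerison's mammalian EQ: actual brain mass divided by the
    expected brain mass for the body size, 0.12 * body_mass ** (2/3)."""
    expected = 0.12 * body_mass_g ** (2.0 / 3.0)
    return brain_mass_g / expected

# Roughly human-scale figures (1,350 g brain, 65 kg body), for illustration:
print(round(encephalization_quotient(1350, 65000), 1))  # ~7.0

A result near 1 means a brain about the size expected for a mammal of that body mass; a value of roughly 7, as in this example, sits far above the mammalian trend line.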
Another factor affecting relative brain size is sociality and flock size. For example, dogs (a social species) have a higher EQ than cats (a mostly solitary species). Animals with very large flock sizes and/or complex social systems consistently score high EQs, with dolphins and orcas having the highest EQ of all cetaceans, and humans, with their extremely large societies and complex social life, topping the list by a good margin.

Comparisons with non-mammalian animals

Manta rays have the highest EQ among fish, and either octopuses or jumping spiders have the highest among invertebrates. Despite the jumping spider having a huge brain for its size, that brain is minuscule in absolute terms, and humans have a much higher EQ despite having a lower raw brain-to-body weight ratio. The mean EQ for reptiles is about one tenth of that for mammals. EQ in birds (and estimated EQ in dinosaurs) generally also falls below that of mammals, possibly due to lower thermoregulation and/or motor control demands. Estimation of brain size in the oldest known bird, Archaeopteryx, shows it had an EQ well above the reptilian range, and just below that of living birds.

Biologist Stephen Jay Gould has noted that if one looks at vertebrates with very low encephalization quotients, their brains are slightly less massive than their spinal cords. Theoretically, intelligence might correlate with the absolute amount of brain an animal has after subtracting the weight of the spinal cord from the brain. This formula is useless for invertebrates because they do not have spinal cords or, in some cases, central nervous systems.

EQ in paleoneurology

Behavioural complexity in living animals can to some degree be observed directly, making the predictive power of the encephalization quotient less relevant. It is, however, central in paleoneurology, where the endocast of the brain cavity and the estimated body weight of an animal are all one has to work from. The behaviour of extinct mammals and dinosaurs is typically investigated using EQ formulas.

Recent research indicates that whole brain size is a better measure of cognitive abilities than EQ, for primates at least. The brain-to-body mass ratio is not alone in influencing intelligence: other factors, such as the recent evolution of the cerebral cortex and different degrees of brain folding, which increase the surface area (and volume) of the cortex, are positively correlated with intelligence in humans.

References

- Gerhard Roth and Ursula Dicke (May 2005). "Evolution of the brain and Intelligence". TRENDS in Cognitive Sciences 9 (5): 250–7. doi:10.1016/j.tics.2005.03.005. PMID 15866152.
- William Perrin. Encyclopedia of Marine Mammals. p. 150.
- Marino, Lori (2004). "Cetacean Brain Evolution: Multiplication Generates Complexity" (PDF). International Society for Comparative Psychology (The International Society for Comparative Psychology) (17): 1–16. Retrieved 2010-08-29.
- Marino, L.; Sol, D.; Toren, K.; Lefebvre, L. (2006). "Does diving limit brain size in cetaceans?" (PDF). Marine Mammal Science 22 (2): 413–425. doi:10.1111/j.1748-7692.2006.00042.x.
- Hill, Kyle. "How science could make a chimp like Dawn of the Planet of the Apes' Caesar". The Nerdist. Retrieved 10 December 2014.
- Shoshani, Jeheskel; Kupsky, William J.; Marchant, Gary H. (30 June 2006). "Elephant brain Part I: Gross morphology, functions, comparative anatomy, and evolution". Brain Research Bulletin 70 (2): 124–157. doi:10.1016/j.brainresbull.2006.03.016. PMID 16782503.
- G. Rieke.
"Natural Sciences 102: Lecture Notes: Emergence of Intelligence". Retrieved 2011-02-12. - Moore, J. (1999): Allometry, University of California, San Diego - Hart, B. L.; Hart, L. A.; McCoy, M.; Sarath, C. R. (November 2001). "Cognitive behaviour in Asian elephants: use and modification of branches for fly switching". Animal Behaviour (Academic Press) 62 (5): 839–847. doi:10.1006/anbe.2001.1815. Retrieved 2007-10-30. - Gould (1977) Ever since Darwin, c7s1 - Jerison, H.F. (1983). Eisenberg, J.F. & Kleiman, D.G., ed. Advances in the Study of Mammalian Behavior. Pittsburgh: Special Publication of the American Society of Mammalogists, nr. 7. pp. 113–146. - Brett-Surman, Michael K.; Holtz, Thomas R.; Farlow, James O. (eds.). The complete dinosaur. Illustrated by Bob Walters (2nd ed.). Bloomington, Ind.: Indiana University Press. pp. 191–208. ISBN 978-0-253-00849-7. - Isler, K.; van Schaik; C. P (22 December 2006). "Metabolic costs of brain size evolution". Biology Letters 2 (4): 557–560. doi:10.1098/rsbl.2006.0538. PMC 1834002. PMID 17148287. - Savage, J.G. (1977). "Evolution in carnivorous mammals" (PDF). Palaentology. 20, part 2: 237–271. Retrieved 19 February 2013. - Lefebvre, Louis; Reader, Simon M.; Sol, Daniel (1 January 2004). "Brains, Innovations and Evolution in Birds and Primates". Brain, Behavior and Evolution 63 (4): 233–246. doi:10.1159/000076784. Retrieved 19 February 2013. - Susanne Shultz and R.I.M Dunbar. "Both social and ecological factors predict ungulate brain size". doi:10.1098/rspb.2005.3283. - Striedter, Georg F. (2005). Principles of brain evolution. Sunderland, Mass.: Sinauer. ISBN 0-87893-820-6. - "Jumping Spider Vision". Retrieved 2009-10-28. - Meyer, W., Schlesinger, C., Poehling, H.M. & Ruge, W. (1984): Comparative and quantitative aspects of putative neurotransmitters in the central nervous system of spiders (Arachnida: Araneida). Comparative Biochemical Physiology no 78 (C series): pp 357-62. - James K. Riling; Insel, TR (1999). "The Primate Neocortex in Comparative Perspective using Magnetic Resonance Imaging". Journal of Human Evolution 37 (2): 191–223. doi:10.1006/jhev.1999.0313. PMID 10444351. - Suzana Herculano-Houzel (2009). "The Human Brain in Numbers- A Linearly Scaled-Up Primae Brain". Frontiers in Human Neuroscience 3: 1–11 (2). doi:10.3389/neuro.09.031.2009. PMC 2776484. PMID 19915731. - Paul, Gregory S. (1988) Predatory dinosaurs of the world. Simon and Schuster. ISBN 0-671-61946-2 - Hopson J.A. (1977). "Relative Brain Size and Behavior in Archosaurian Reptiles". Annual Review of Ecology and Systematics 8: 429–448. doi:10.1146/annurev.es.08.110177.002241. - Bligh's Bounty at the Wayback Machine (archived July 9, 2001) - "Overall Brain Size, and Not Encephalization Quotient, Best Predicts Cognitive Ability across Non-Human Primates". Brain Behav Evol 70: 115–124. 2007. doi:10.1159/000102973. - "Cortical Folding and Intelligence". Retrieved 2008-09-15. - Haier, R.J., Jung, R.E., Yeo, R.C., Head, K. and Alkired, M.T. (Sep 2004). "Structural brain variation and general intelligence". NeuroImage 23 (1): 425–33. doi:10.1016/j.neuroimage.2004.04.025. PMID 15325390. - a graph of body mass vs. brain mass - "Bligh's Bounty" — Stephen Jay Gould This page uses Creative Commons Licensed content from Wikipedia. A portion of the proceeds from advertising on Digplanet goes to supporting Wikipedia.
Today is the winter solstice, which means it's also the sixth anniversary of this blog. On these anniversaries I like to write about archaeoastronomy, which is a very interesting topic and an important one for understanding Chaco and Southwestern prehistory in general. Last year I wrote about some research indicating that in the Rio Grande valley, an area generally thought to be outside the Chaco system but that was certainly occupied at the same time as Chaco, there was a long and very consistent tradition of orienting pit structures to the east-southeast, which is the direction of winter solstice sunrise. The winter solstice is very important in the cosmology and rituals of the modern Pueblos, so it makes a lot of sense that at least some Pueblo groups would orient their dwellings based on it. As I noted at the time, this orientation is very different from that in the San Juan region to the west, including Chaco and Mesa Verde. In this area there is an equally long tradition of orienting pit structures to either due south or south-southeast. I've long wondered why this might be, and an article I read recently discusses the issue and proposes some interesting potential answers. The article is by Kim Malville and Andrew Munro and was published in the journal Archaeoastronomy in 2010 as part of a special issue on archaeoastronomy in the Southwest. Malville is an astronomer who has done a lot of research on archaeoastronomy in the Southwest and identified many potential astronomical alignments, but this article is actually largely about debunking many of the alleged alignments claimed by others, particularly Anna Sofaer and her Solstice Project. Sofaer is an artist who turned her attention to archaeoastronomy after discovering the "Sun Dagger" effect, in which a spiral petroglyph on Fajada Butte on the summer solstice appears (or appeared) to be bisected by a "dagger" of light coming through a slit between large boulders in front of it. Sofaer went on to organize surveys of the major great house sites in Chaco Canyon to identify any celestial alignments in the orientation of their walls, and her team found that virtually all of them did show alignments to the positions of the sun or moon on solstices, equinoxes, or lunar standstills. Sofaer and her collaborators went on to publish these findings widely, and to make a well-known documentary that has often been shown on television and inspired a lot of interest in Chaco. As Malville and Munro show in this paper, however, the evidence for these alignments is very thin. There is little to no justification in Pueblo ethnography for the idea of celestial building alignments, and the alignments themselves are identified with a substantial margin for error that makes spurious positive identifications likely, especially when so many potential alignments are tested for. Particularly concerning is how many of the alignments are to the minor lunar standstill, which is not a very impressive or noticeable event. (The major lunar standstill is a different story, and there is strong evidence at Chimney Rock in Colorado that the Chacoans were familiar with it and considered it important.) Malville and Munro also argue that basing most of the claimed alignments on the rear walls of sites is questionable, since there is no evidence that rear wall alignments were or are important culturally to Puebloans.
Instead, they argue that the alignments of rear walls are epiphenomenal, and that they mostly result from the more solidly established concern with the orientation of the front of a site. The bulk of the article is devoted to tracing these frontal orientations across time and space, with a primary focus on Chaco itself and on the earlier Pueblo I villages in the area of Dolores, Colorado that are often seen as being partly ancestral to the Chaco system. As I noted above, there are two main orientations that persist through time in the San Juan region. One is to due south, and the other is to the south-southeast (SSE). With pit structures these axes are typically defined by a straight line of sipapu (if present), hearth, deflector, and vent shaft. There is often also a measure of bilateral symmetry between features on either side of this line, such as support posts. When there are surface rooms behind a pit structure, they often (but not always) conform to the same alignment, and when the back of a row of surface rooms is straight, it is typically perpendicular to the main orientation. Malville and Munro argue that these perpendicular back walls on many Chacoan great houses, which Sofaer has identified as having alignments to various astronomical phenomena, are really subsidiary effects of the main emphasis on frontal orientation. The authors start their survey of orientations with the Basketmaker III pithouse village of Shabik'eschee at Chaco. Of 15 pithouses for which they could find adequate information on orientation, 11 faced SSE with an average azimuth of 153.7 degrees and 4 faced south with an average azimuth of 185 degrees. Strikingly, none of the pithouses showed any other orientation. The north-south orientation isn't difficult to understand, and Malville and Munro attribute it to use of the night sky for navigation (which would have been easy enough at this time even though there wasn't actually a north star), and they also mention the widespread presence of Pueblo traditions mentioning origins in the north. While the exact reasons for adoption of this orientation may not be clear, its consistency isn't unexpected, since it's pretty obvious and easy to replicate. The SSE orientation, on the other hand, is a different matter. Note that at Shabik'eschee this was much more common than the southern orientation, from which it is offset by about 20 to 30 degrees in individual cases. There is more variation in this orientation than with the southern one (standard deviation of 7.7 degrees versus 2.4), but it's sufficiently consistent and common that it seems like there must be some specific reason for it. Unlike the southern orientation, however, it's not at all clear what that might be. Malville and Munro, sticking to their interpretation of orientations as references to places of origin, suggest that in the case of Shabik'eschee it might reflect the fact that some people might have migrated to Chaco from an area that was more to the north-northwest than due north, which seems implausible to me, but then I don't have a better explanation myself. In any case, this pattern continues through time. The next set of orientations Malville and Munro look at is those of the pit structures at the Pueblo I Dolores villages. What they find is that SSE orientations are dominant here too, even more so than at Shabik'eschee.
In fact, all of the pit structures they looked at had SSE orientations except those at Grass Mesa Village, which mostly faced south (although even here there were a few SSE orientations). This is in keeping with other evidence for differences in architecture among different villages at Dolores; Grass Mesa is known for having long, straight room blocks, as opposed to the smaller and often crescent-shaped roomblocks at McPhee Village, with which it is most often compared. The Duckfoot site, to the west of the Dolores villages but contemporaneous with them, also had an SSE orientation. Further west, however, southern orientations become more common, including at the important village sites of Yellow Jacket and Alkali Ridge, plus some of the earlier Basketmaker II sites on Cedar Mesa in Utah. There was one more orientation used during the Pueblo I period in the Northern San Juan region, however. At Sacred Ridge, in Ridges Basin near modern Durango, Colorado, the average azimuth of the pit structures is 120 degrees, the same east-southeast orientation corresponding to winter solstice sunrise so common in the Rio Grande. Malville and Munro remark on the similarity to the Rio Grande pattern and consider it "puzzling," positing some potential ways that it could have come about. They argue, however, that wherever this pattern came from it didn't last in the north, and they point to the extremely violent end to the occupation of Sacred Ridge as the end of this orientation tradition in the San Juan region (although this may not be strictly true, as discussed below). From here Malville and Munro turn back to Chaco. Specifically, they look at the great houses at Chaco during its heyday from about AD 850 to 1150. Rather than pit structures, they focus on roomblocks, and they interpret the orientation of a roomblock to be the perpendicular to its long axis (in the case of rectangular roomblocks) or the perpendicular to the ends of the crescent for roomblocks with that shape. They find that most of the great houses have an SSE orientation, in keeping with the general trend throughout the region, as do the three northern outlier great houses of Chimney Rock, Salmon, and Aztec. Since this orientation is very close to the perpendicular of the minor lunar standstill moonrise alignment that Sofaer has proposed for many of these buildings, Malville and Munro argue that this widespread orientation explains the pattern much better than the lunar alignment. Pueblo Alto and Tsin Kletzin have north-south orientations, which is unsurprising since they lie on a north-south line with each other. A few of the great houses have a more complicated situation. Peñasco Blanco appears to face east-southeast at an azimuth of approximately 115 degrees. This is intriguingly close to the Rio Grande/Sacred Ridge winter solstice orientation, which Malville and Munro do note. Although the unexcavated nature of the site makes it hard to tell for sure, it is possible that this is in fact an example of this orientation surviving much later in the San Juan region than the destruction of Sacred Ridge, although what, if any, connection there might be between the two sites is unclear. And then there's Pueblo Bonito. While the very precise north-south and east-west cardinal alignments of some of the key walls at this site are well known, it has also long been noted that there is evidence for different alignments and change over time here.
Malville and Munro interpret the early crescent shape of the building as having an SSE orientation, and like many others they relate it to the similar size, shape, and orientation of McPhee Pueblo at McPhee Village. They then describe multiple stages of drift away from this orientation toward the cardinal orientation. There is surely something to this interpretation, but a careful look at the stages of construction of the site shows that the picture is probably more complicated. The very first construction at Bonito appears to have been straight and oriented to the south, and to have been incorporated later into the SSE-facing crescent. Subsequent building stages show evidence of both orientations having been present throughout the history of the building.

The complicated situation at Pueblo Bonito provides a convenient segue to the key issue here: what was driving this long-lived but consistent variation? Why were two different orientations for buildings present in close proximity for hundreds of years, even as populations moved long distances and adjusted their cultures in profound ways? Malville and Munro suggest that these orientations may reflect longstanding cultural and ethnic diversity in the prehistoric Southwest. Given how long-lived and consistent these patterns are, they propose that they were related to deep-seated cultural identities. This is an intriguing idea that may allow tracking of specific cultural groups across the Southwest over centuries. It also provides another piece of evidence that Chaco Canyon was a multicultural community, and implies that even Pueblo Bonito itself contained groups with diverse backgrounds.

The picture is probably even more complicated than Malville and Munro suggest. They tend to assume implicitly that the orientations of pit structures are the same as those of the roomblocks with which they are associated, but at least at Chaco this is not necessarily true, particularly for small-house sites, which they also don’t address at all in this study. There are many examples of small houses where the roomblocks are oriented to the east but the pit structures are oriented to the south (and possibly also SSE, although I haven’t checked this). This eastern orientation may reflect connections to the south, which have gotten a lot less attention in the literature than connections to the north, although they appear to have been pretty important in the origins of Chaco.

In any case, I think this is fascinating stuff. It may not be archaeoastronomy per se, but it seems like a fitting way to mark the solstice.

Malville, J. M., & Munro, A. M. (2010). Cultural Identity, Continuity, and Astronomy in Chaco Canyon. Archaeoastronomy, 23, 62-81.
Fundamentals of Legal Analysis

You have been told you need to "think like a lawyer." You have been told that well-written legal analysis is the key to succeeding on exams. You have grappled with court opinions in which the analytical steps may or may not be clear, and with how to apply your own analytical skills to legal writing problems, classroom hypotheticals, and perhaps exam answers. Developing analytical skills is, for most students, the greatest challenge law school presents. But most students enter law school already possessing considerable analytical ability. The following problem is designed to help you identify the steps involved in analytical reasoning. The "solution" will walk you through the steps--steps that you undoubtedly already use to solve everyday problems (such as the incident detailed in the hypothetical below). You are perhaps already "thinking like a lawyer" without even realizing you are doing so.

The Story of Joe's Birthday Party

Joe was dissatisfied with his birthday party this year. His mother had invited guests, but instead of inviting his whole kindergarten class she just invited five children, three from the neighborhood and his two-year-old twin cousins. There was no cake; instead she served pizza and apple pie. She put a candle in the pie but nobody sang happy birthday. The guests did not really play games; instead they watched an educational video on ocean life. Some of the guests brought gifts, but they were practical items like underwear, pencils for school, and yogurt. Joe's mother did decorate the house with pictures of aquatic mammals and hung a banner that said "Happy Birthday, Joe." Does Joe have a cause of action against his mother for failing to provide a birthday party (ignore intra-family immunity issues)?

Preliminaries

Before we break down the steps you might have intuitively used to resolve this issue, let's ask a few questions.

1. Did Joe have a birthday party?

In answering a legal question it is fine to start from your intuitive sense of what the outcome should be. You may have responded intuitively that yes, Joe had a party, or no, he was robbed. Similarly, when you read an exam problem or hypothetical, you may have an instant sense of what you believe the result should be. Don't ignore your instinct. However, analytical reasoning requires that you use your instinct as a starting point and then build an argument that leads to that result. And, as a lawyer, you also need to be able to make, or at least anticipate, the arguments from the opposing side. A non-lawyer, then, might stop with the yes or no answer to question one. A lawyer moves on.

2. Why/why not?

Legal analysis--being a lawyer--requires more than just taking a stand. It requires us to articulate our reasons. You will therefore need to justify your answer. A natural starting point is to look at the facts. If you think Joe had a birthday party, is it because he got gifts, because there were decorations, because there was a candle? Or, if your answer was no, was it because there were so few guests, no cake, or because no one sang "Happy Birthday"? The facts that intuitively spring to mind will likely end up being the legally relevant facts, i.e., those facts that support your legal arguments. Looking at the facts is a good starting point for justifying your gut feeling about a problem, and therefore a good starting point for legal analysis. But it is just a starting point.

3. On what principles did you base your answer?
An argument based solely on facts may be persuasive and, in some cases, may be all you have to sway a jury to your side. But true legal analysis is based on the law. There must be rules, principles, statutes--something on which to base your claim. For a law student, learning the law is the first step. Applying the law to the legally relevant facts in order to make an argument is the foundation of legal analysis. In the next sections, we'll continue to use the example of Joe's birthday party to discuss ways in which the law is organized and ways to approach legal analysis.

Basic Steps in Analytical Reasoning

In many ways, analytical reasoning, or legal analysis, is the process of learning to think "inside the box." This is not to say that legal analysis lacks creativity or intellectual rigor; rather it is a recognition of the fact that our legal system is based on rules and precedent. Finding the right rules, or the cases that justify your argument, provides the "box" or the foundation from which to make your arguments and provides the judge with a basis for making a ruling. Once you have gotten past the preliminaries, finding the right "box," or body of law, is thus the crucial first step.

1. The threshold question

Many fields of law have a threshold question that you must always address before beginning to formulate your answer. Often this question has to do with the basic facts of the case. For example, in contracts, before you begin any problem you need to ask, is this a sale of goods (use the U.C.C.) or not (use the common law)? Sometimes the threshold question involves a pure legal matter, e.g., in criminal law you may need to know whether the Model Penal Code applies. Asking that threshold question will help you identify the correct body of law from which to begin formulating your analysis. For our hypothetical, we will assume there is only one body of law governing the adequacy of birthday parties.

2. The broad context--identifying the big question

Legal analysis is the process of moving from the broad to the more specific. You may have identified an intuitive answer to a problem. For example, if you have a client who believes he's entitled to purchase a used car at a certain price, you may have concluded that, yes, your client has a valid claim. But in order to establish that claim, the client (or you, the attorney) will need to show there was a breach of a valid contract. Your legal argument regarding the contract may end up focusing on a particular element or portion of that larger issue, but it is essential to keep in mind the overall question you need to answer. Once you have resolved any threshold issues, start your legal analysis by identifying that over-arching issue. Is there a valid contract? Is this statute a valid assertion of Federal legislative power? Was the doctor negligent? So, in your contracts case you may ultimately focus on whether an offer had been effectively revoked, but you are answering that question in order to establish the existence of a contract and therefore to establish that your client is entitled to purchase the car. As for Joe, our over-arching issue is whether or not he had a birthday party. We can now move on to the components of that issue.

3. The specific rules of law--the basis of your analysis

We are now going to leave Joe behind for a moment to examine how rules of law are organized. Undoubtedly you spend hours in class, at home, and in the library learning rules of law. Legal analysis requires you to put the rules of law to use.
Once you have identified the broad context or the over-arching question you need to answer, you then need to identify the specific rules that comprise the legal definition of that issue. Understanding how those rules are organized is crucial to effective analysis. Rules of law are organized in different ways:

a. Elements

Law that is organized by elements is set forth as a number of requirements. For example, a negligence claim requires that a plaintiff establish 1) duty; 2) breach; 3) cause (in fact and proximate); and 4) damages. Law that is organized by elements is generally the most straightforward analytically. To prevail on a claim, a plaintiff must establish proof of each element. To analyze the claim, all a student needs to do is proceed through the list of elements and determine which facts support the existence of each element. In most cases, even if you determine that your client fails to establish a case on one element, your professor will expect you to complete the analysis and address all the elements of the claim.

b. Process/steps

Here the law gets more complicated. Some problems require the attorney to go through a series of analytical steps in order to reach a legal conclusion. Imagine our over-arching question, "is there a contract?" Now, imagine we have a sale of goods (threshold question: the U.C.C. applies) and a number of documents flying back and forth. Suddenly our analytical "box" is Sec. 2-207, Battle of the Forms. Here, in order to resolve a legal problem, you will need to proceed through a series of steps. These steps may have detours or forks in the road. On an exam you may have to explore both prongs of the fork, e.g., what happens if the parties are merchants, or if they are not? You will need to recognize that there may be alternatives and address all the relevant possibilities.

c. Balancing tests/factors

Every type of legal analysis calls for you to make judgments, but this skill is perhaps most important when applying law that is organized as a balancing test. A classic example of a balancing test is the strict scrutiny test in Constitutional Law (a compelling state interest and the least restrictive means possible to address that interest). To apply a balancing test you must understand each prong and be able to evaluate one against the other, often bringing in specific examples from prior case law.

d. Hybrids

Some bodies of law are hybrids, containing a variety of organizational types. Consider, for example, personal jurisdiction. Analyzing a personal jurisdiction problem basically requires that you walk through a process. However, some steps in the process, e.g., minimum contacts, require you to analyze a set of factors (purposeful availment, foreseeability of litigation, etc.). Adverse possession in property basically sets out a list of elements, but you might have to go through the analytical process of tacking if your facts present a sequence of two or more adverse possessors.

e. Defenses

Don't forget that there are rules, and then there are defenses and exceptions to those rules. In torts, for example, the touching that occurs during a typical medical exam could generally be considered a battery, except that in the context of the exam, the patient has consented to the contact. Defenses may not be relevant to every problem, but always look for possible defenses and exceptions when you are studying your facts.

We will apply these categories to the "law" governing Joe's birthday party after we have considered the final steps in legal analysis.
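Before moving on, the elements category lends itself to a quick illustration. Below is a playful Python sketch, not legal doctrine; the negligence facts are invented for the example, and real analysis is never this mechanical. Law organized by elements behaves like a checklist in which every element must have factual support.

```python
# Elements analysis as a checklist: a claim fails unless every element
# is supported by at least one legally relevant fact.
ELEMENTS = ["duty", "breach", "cause", "damages"]  # negligence, for example

facts = {
    "duty": ["doctor-patient relationship existed"],
    "breach": ["surgical sponge left in patient"],
    "cause": ["sponge caused the infection"],
    "damages": ["medical bills", "lost wages"],
}

def claim_established(elements, supporting_facts):
    """A plaintiff prevails only if each element has factual support."""
    return all(supporting_facts.get(element) for element in elements)

print(claim_established(ELEMENTS, facts))  # True: all four elements are met
```

Remove the facts under any one element and the function returns False, which mirrors the point above; even then, a professor will usually expect you to walk through the remaining elements anyway.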
4. Take a step outside the box--policy and the broader issues

These instructions have emphasized thinking "inside" the box; however, it is important to remember that all of our laws exist as part of a larger legal system and that that legal system is just one aspect of our broader society. Not every legal problem will involve larger concerns, but keep the big picture and policy concerns in mind. What are the values at play in the body of law with which you are dealing? In contracts, for example, the law is designed to effectuate the intent of the contracting parties, but also, particularly under the U.C.C., to keep the wheels of commerce turning. Jurisdictional issues in civil procedure call into play the competing values of providing access to the Federal courts while keeping those courts from being swamped with litigation. After working through an analysis of the specific requirements of the law, it is perfectly appropriate to address whether the ultimate goal of the law was realized in your case. As for Joe's birthday party, the broader issue might be to ensure that Joe had a good time on his special day.

5. Conclude

After going through all those steps, you can return to that intuitive feeling you had about how your legal issue should be resolved--only now you have an analytical basis for your answer.

6. Back to Joe's birthday party

So where does this leave us with Joe's birthday? Let's go through our analytical steps. Any threshold questions? In the field of birthday party law, probably not. As a basic fact we may want to note that Joe is a child (we're told he's in kindergarten), so that we will be using the standard for birthday parties applicable to children. What is our over-arching question? Here, it's easy--did Joe's mother provide him with a birthday party? What are the specific rules of law? There is no Restatement of Birthday Parties, but let's assume birthday parties require a certain minimum of components. When you first answered the question, what were some of the factors you considered? You probably ran through a list that included some of the following: guests, cake, gifts, games, and decorations.

By applying your own internal standard to determine whether Joe had a birthday party, you were in essence creating and applying your own body of law. [In the real practice of law, of course, we do not create our own rules; we rely on statutes and precedent.] This law looks like a straightforward list of elements, which would require us simply to work our way down the list and determine if we have sufficient facts to satisfy each item. However, assuming we have done our studying, we might be aware that what looks like a simple list of elements may in fact be a hybrid body of law. For example, perhaps in order to determine if there were guests we need to establish a "minimum" number based on the "quantity and nature" of guests at the party (civil procedure students, these ideas should sound familiar). Suddenly, we might be dealing with a set of factors (proximity in age to Joe, gender, how much Joe likes each individual, etc.) or a balancing test in order to determine whether Joe's mother has satisfied this element of the definition. Five guests might be adequate if those guests were Joe's five best friends, but if the cousins are considered "low-quality" guests, perhaps five fails to satisfy the minimum standard. Remember, too, that looking carefully at the facts will affect your analysis. For example, we are told that Joe's mother did not make a cake but instead served pie.
Does that mean she just failed to satisfy that element? Would it make a difference if we'd been told that Joe really hates cake and loves pie (in effect, establishing an affirmative defense)? What is the larger goal? Whatever you conclude about the elements of Joe's party, keep in mind that the reason we are asking whether Joe's mother provided him with a "birthday party" under the law is to ensure that Joe felt celebrated on his special day. Your analysis and conclusion should reflect this goal. Conclude? Did Joe have a birthday party? On most law school exams, the professor will be less concerned with whether you answer "yes" or "no" to the question and far more concerned with your ability to use the process of analytical reasoning to arrive at your answer. Our legal system is an adversarial one; there are two sides to every issue. In the end, whatever you conclude about Joe and his party will be correct as long as you have based your answer on a thorough knowledge of the law and sound analytical reasoning.

Final Thoughts

Determining whether Joe had a birthday party does not involve complex legal issues, but the problem illustrates how most people use analytical reasoning every day to evaluate situations and make decisions; often they are not even aware they are doing so. For the law student, consciously going through this analytical process and articulating every step is the key to succeeding in school and eventually in practice. Legal analysis is based on a comprehensive knowledge of the law. Once that knowledge is in place, most people can improve their analytical skills by slowing down their thought processes, making sure they address each analytical step in order, and using the correct rules as a basis for answering the question.
by Scott G. Kenney

Sidney Rigdon (1793–1876) was a leading figure in founding two American religious traditions, Mormonism and the Churches of Christ. He was born ten miles south of Pittsburgh on February 19, 1793; his and his uncle’s families belonged to the denomination called Regular Baptists (three cousins would become ministers). Regulars were strict Calvinists who adhered to the tenets of the Philadelphia Confession of Faith (1742), including original sin, total depravity of human beings, predestination, irresistible grace, saving faith, and spiritual rebirth.

Ordained in 1820, Sidney became minister of the Pittsburgh Baptist Church in 1822 and embraced the movement of reformer Alexander Campbell, who called for the “restoration of the ancient order of things.” Campbell denounced creeds and all other “inventions of men” in favor of the authority of the New Testament. Instead of faith as the transformative operation of the Spirit on the elect, Campbell preached faith as a simple, reasoned response to the testimonies of the apostles in the New Testament. Instead of baptism as an emblem of spiritual rebirth—the outward sign of an inward grace—he insisted that baptism was for the remission of sins. In baptism the sins of the penitents were formally washed away. For teaching these and related heresies, Rigdon was expelled from the pulpit by the Redstone Baptist Association in 1823.

Moving to Ohio, he affiliated with the Mahoning Association of his mentor and brother-in-law, Adamson Bentley. Campbell also migrated to the Mahoning group, as did Walter Scott, Campbell’s and Rigdon’s non-denominational cohort in Pittsburgh. There was a natural affinity between Scott and Campbell, both being products of the Scottish Enlightenment—Walter a graduate of the University of Edinburgh and Campbell having attended the University of Glasgow. Engaged as the evangelist for the Mahoning group in 1827, Scott developed an innovative method of teaching. The gospel, he came to believe, came first through faith as a rational assent to Jesus as Lord; next came repentance, a determination to turn from sin; third, baptism by immersion for the remission of sins; and fourth, the gift of the Holy Spirit, leading to eternal life. At the end of an address, he called for the converted to come forward and be baptized. This emphasis on baptism for the remission of sins was tantamount to Catholic “sacramentalism” in the minds of mainstream Protestant clerics. However, the simplicity and immediacy of Scott’s approach were appealing to many people at the time. In four weeks he baptized 150 converts, most of them Regular Baptists.

As pastor of the Regular Baptist Church that served Painesville, Mentor, and Kirtland, Rigdon adopted Scott’s approach, as did his brother-in-law. In six months the three immersed eight hundred people, wreaking havoc among the Regular Baptist churches and leading the editor of one Presbyterian paper to exclaim, “Look at the Baptist Church in this part of the country. It has been thrown into the greatest confusion, torn and lacerated to the bone—divided and subdivided, and is now bleeding at every pore.” Sidney organized five “Reformed” congregations, and the moniker “Rigdonites” began to be applied to his followers. He was, Alexander Campbell acknowledged, “the great orator of the Mahoning Association.” But by early 1830, differences between Rigdon and Campbell were becoming apparent.
In his Christian Baptist, Campbell frequently referred to the “restoration” of primitive Christianity, but Rigdon went further and said the people of the Old Testament had been Christians. Contrary to Campbell, Rigdon believed that miracles and gifts of the Spirit were still needed in the world. People needed to be “called” to preach the gospel, as opposed to Campbell’s view that those who had a desire to preach should do so. The millennium, said Campbell, had already begun.

In February 1830 three members of Rigdon’s flock, Isaac Morley, Titus Billings, and Lyman Wight, combined their assets as a “common-stock company” patterned after the New Testament Church. Soon eight more families joined. At the annual meeting of the Mahoning Association in late August, Rigdon promoted the practice. Campbell was repulsed by this fanaticism and argued that the biblical franchise had been instituted in a specific time and place and was no longer relevant. The two argued, and in the end Campbell prevailed.

Enter the Mormons

Within days, one of Sidney’s protégés, Parley Pratt, met Joseph Smith in New York and was rebaptized for the remission of sins. In October he and Oliver Cowdery arrived at the Rigdon home in Mentor and pointed out that Mormons held several heterodox views in common with Reformed Baptists. Both decried sectarianism and disavowed creeds and a formal clergy. Both were restorationist and taught the formula of faith, repentance, baptism, and the Holy Ghost. Faith was considered to be an intellectual exercise. Both called on believers to come forward and have their sins immediately washed away. The similarities were so striking that one newspaper article carried the headline, “The Golden Bible, or, Campbellism Improved.”

There were differences, to be sure, but they tended to occur at points where Mormons agreed with the Rigdonite critique of Campbellism. Both Rigdon and Smith believed in a literal and far-ranging restoration that would include prophecy, priesthood authority, and gifts of the Spirit. Smith too believed that the ancient patriarchs and prophets were Christians who were called to prepare the way for Jesus, and that the current age was a short preparatory period before Christ’s millennial reign.

The Mormon missionaries baptized seventeen people in one night at the Morley farm near Kirtland. Rigdon himself abruptly reversed course, repudiated Campbellism, and was baptized into the Church of Christ (Mormon) on November 8, 1830. More than a hundred souls were added to the Mormon movement in the next two weeks. The next month a revelation to Smith identified Rigdon as the latter-day John the Baptist who would “prepare the way” for the millennium and be Smith’s scribe and spokesman. This would be reiterated in 1833 (Kirtland Revelation Book, 72; LDS D&C 100:9-11). Rigdon got to work and helped Smith produce an “inspired” new translation of Genesis and the New Testament; Rigdon also composed the doctrine portion of the Doctrine and Covenants, known as the Lectures on Faith, which, although removed from the canon in 1921, were enormously influential until then. Rigdon became the public face of Mormonism.
On January 25, 1832, he ordained Smith president of the high priesthood, and in March 1833 he became a counselor to Smith in the “presidency of the high Priesthood,” even “equal” with Smith in holding the “keys of the Kingdom.” When he offered a two-and-a-half-hour dedicatory sermon at the House of the Lord in Kirtland in March 1836, he did so “in his usual, forcible and logical manner,” while Smith was said to have delivered “a short address … in a manner calculated to instruct the understanding, rather than please the ear,” according to the Latter-day Saints’ Messenger and Advocate of March 1836. Rigdon also gave both the opening and closing prayers for the occasion, while Smith read the scripted dedicatory prayer.

Trouble in Ohio and Missouri

The cost of the House of the Lord was so burdensome that in August 1836, Smith and Rigdon, accompanied by a few others in the Church leadership, traveled to Salem, Massachusetts, to search for buried treasure there (D&C 111). When they failed to find anything, and as creditors clamored for satisfaction, Smith and Rigdon were forced to flee from Kirtland on horseback in the middle of the night on January 12, 1838, arriving three months later in a Mormon settlement in Missouri called Far West.

Rigdon soon distinguished himself there through his occasional paranoia, invective against the Church’s opponents, and religious extremism. For instance, on June 17, 1838, he gave a sermon known to historians as the Salt Sermon, in which he said that it was the duty of Mormons “to trample [dissenters] into the earth” and hang them because, like salt that “has lost its savor,” they were intended to be “cast out” and “trodden under the feet of men” (see Matt. 5:13). Smith followed Rigdon on that occasion and spoke approvingly of what his counselor had said, adding that in the New Testament, “Judas was a traitor, and instead of hanging himself, he was hung by Peter.” On July 4, Rigdon revisited the theme of violence against the Church’s enemies and said “it shall be between us and them a war of extermination, for we will follow them, till the last drop of their blood is spilled or else they will have to exterminate us; for we will carry the seat of war to their own houses, and their own families.” By October the governor insisted that the Mormons leave the state.

Smith, Rigdon, and other LDS leaders were apprehended by the Missouri state militia in October and held in a jail in Liberty, Missouri, on the charge of treason. When Rigdon’s case was reviewed on January 25, 1839, he gave his own defense so eloquently that his attorney, General Alexander Doniphan of the state militia, said: “Such a burst of eloquence it was never my fortune to listen to,” and “at its close there was not a dry eye in the room, all were moved to tears.” In response, the judge offered bail and immediate release. Even non-Mormons in the room helped raise the $100 needed for Rigdon’s freedom.

When the Latter-day Saints left Missouri for a swampy bend of the Mississippi River in southern Illinois, Rigdon was struck with malaria and was thereafter often bedridden for weeks at a time. Without Rigdon’s knowledge, Smith was beginning to experiment with plural marriage. In fact, Smith eventually proposed to Rigdon’s teenage daughter Nancy. When confronted about this, Smith first denied and then admitted his interest in Rigdon’s daughter after being shown a letter he had written to Nancy.
In that letter, Smith had famously written that “happiness is the object and design of our existence … That which is wrong under one circumstance may be, and often is, right under another” (History of the Church of Jesus Christ of Latter-day Saints, 5:134). Rigdon concluded that his friend and spiritual mentor had “contracted a whoring spirit,” but he continued to support Smith as God’s prophet. Smith repaid that loyalty in May 1844 when he chose Rigdon to be his running mate for the U.S. presidency. Needing his vice president to come from a different state, he sent Rigdon to Pennsylvania to gain residency there. The next month—just a few days after Rigdon’s departure—Joseph and Hyrum Smith were assassinated, leaving Rigdon the only surviving member of the First Presidency.

When contacted about the murders, Rigdon said he was prepared to claim the “prophetic mantle” and “take his place as the head of the church.” He told the Quorum of the Twelve to meet him in Pittsburgh. They had other plans and told him to come to Nauvoo. After arriving in August, he stood with Brigham Young on Thursday, the eighth, on a platform east of the Nauvoo temple and spoke to a crowd of 5,000 Church members. Rigdon’s oration was well received, but Young out-maneuvered him by painting a picture of a new type of Church government in which the apostles would collectively lead the Church. A month later, Rigdon was tried before the Nauvoo High Council in a raucous session overseen by Young, in which Rigdon was accused of introducing people to temple ordinations in Pittsburgh, receiving revelations without submitting them to the Twelve, and (although not stated explicitly) objecting to polygamy. Despite a spirited defense from stake president William Marks, the high council voted unanimously to excommunicate Rigdon.

Rigdon boarded a riverboat in September, never to return to Nauvoo. One of the apostles, Orson Hyde, caught up with Rigdon in St. Louis to ask him not to write publicly about the apostles’ polygamous liaisons. But once established in Pittsburgh, Rigdon started a rival newspaper called, like its LDS predecessor, the Latter Day Saints’ Messenger and Advocate and devoted its pages to revealing the sexual sins of the apostles. He even wrote in November: “Oh, Joseph! Joseph! Joseph! Where art thou? Oh, Joseph, thou wicked servant, thou hast fallen because of thy transgression!”

Rigdon organized a Grand Council of about seventy men on April 6, 1845, which elected him president of a new Church of Christ. Like the Church Rigdon was once associated with, this new Church soon had twelve apostles, a presiding bishopric, and a standing high council. However, a month after he was called as the Church’s presiding officer and prophet, a large fire broke out in Pittsburgh, and Rigdon was criticized for not having predicted it. He lost many of the Church’s most prominent members, but he nevertheless responded by leading the stalwart hangers-on south out of the city to the Cumberland Valley, which bordered West Virginia, where he said they would build a New Jerusalem. They bought a 390-acre farm from Andrew G. McLanahan on credit and discovered sympathetic neighbors, such as a nearby commune of German Seventh Day Adventists. Rigdon’s people would have done well to follow their German neighbors’ industriousness; if Rigdon had devoted more time to planting and less to planning for the messiah’s advent, they might not have been evicted from the farm the next year.
But they were, and local merchants were soon using unfolded printed sheets intended for a Church hymnal as wrapping paper (Richard S. Van Wagoner, Sidney Rigdon: A Portrait of Religious Excess, 379-393). With the failure of the Adventure Farm, as the original owner had called it, Rigdon was compelled to move in with family members in New York, accompanied by his wife, Phoebe. They spent the next twenty-five years in quiet personal study of the scriptures and occasional correspondence with distant followers. By 1865 a circle of believers calling themselves the Children of Zion began settling in Attica, Iowa; they were directed solely by letters from Rigdon, who never visited them. In 1870 Rigdon had a small stroke. His health slowly deteriorated over the next six years until his death on July 14, 1876. His wife reported that he had become completely deranged six weeks before his demise.

Rigdon’s influence continues to be felt today in several of the disparate branches of the Latter Day Saint movement—nowhere more strongly than in the Church of Jesus Christ headquartered in Monongahela, Pennsylvania, a suburb of Pittsburgh. That Church was founded by one of Rigdon’s apostles, William Bickerton. Today the Church has some 15,000 members, mostly in Pennsylvania but also throughout the world. Rigdon’s influence is also present in the canonical works of both the LDS Church and the Community of Christ and in other theological strains of thought that persist to the present.
To get a leg up on their competition, microcontroller chip vendors are rushing to market new devices with the popular 8-bit MCU cores surrounded by a wide variety of peripheral circuitry, an ever increasing amount of flash memory, popular bus structures such as CAN (Controller Area Network), and multiple A/D and D/A blocks, all housed in smaller and smaller packages. Some companies have also designed in special circuitry for power management and low-voltage operation.

There doesn't seem to be any let-up in the rate of new products surfacing in the 8-bit MCU arena as system OEMs continue to replace electromechanical components with highly integrated and very inexpensive microcontroller chips. Automotive electronics, for example, consumes huge numbers of devices for applications ranging from body controllers, immobilizer receivers, occupant detection systems, and power steering to anti-lock braking and stability sensing systems. Imaging systems, industrial controls, medical and scientific instrumentation, wireless base stations, automatic test equipment, and a wide assortment of consumer appliances are also targeted applications for the latest generation of microcontroller devices.

Among the many vendors chasing after automotive system designers is Microchip Technology (Chandler, Ariz.), which recently launched the newest members of its PIC microcontroller line. Microchip has built in features in demand by the car companies, such as the CAN2.0B interface and higher density flash memory. "Automotive engineers have a growing need for cost-effective 8-bit microcontrollers with built-in CAN functionality and flexible flash memory, balanced with a requirement to consume less power and take up less space," said Cheri Keller, product marketing manager for PIC MCUs. The MCUs also have Microchip's on-chip ECAN module, which enables multiple applications to be configured on a single node, easier implementation of a software protocol bridge from a CAN network, and re-use of a device across different applications. The units have either 48 Kbytes or 64 Kbytes of flash program memory in a tiny 28-pin package. The flash memory can be programmed "in-car." To meet the power management needs of the market, the new devices also have Microchip's nanoWatt technology, which according to Keller improves battery life; battery failure is a major cause of car failures today. Peripherals can be shut down to conserve power. In sleep mode, typical power consumption is as little as 0.1 microamp. For details on the line, visit Microchip's MCU pages.

Also focusing on its CAN features is Atmel Corp. (San Jose, Calif.). The company's new AT90CAN128 AVR flash microcontroller boasts 16 MIPS processing speed for many CAN networking and industrial applications, including factory and building automation, medical equipment, marine networking, and print media. The CAN controller can handle 15 independent message objects, programmable on the fly. With a 16 MIPS AVR RISC engine, 128-Kbyte flash program memory, 4-Kbyte RAM, and 4-Kbyte on-chip EEPROM, the AT90CAN128 can tackle the most demanding industrial control applications, according to Atmel.

Dual 16-bit, 1 Msps (mega samples per second) analog-to-digital converters are a key feature of the newest 8-bit C8051F06x family from Silicon Laboratories (Austin, Tex.), which integrates a 25 MIPS 8051 MCU with the dual A/Ds. A precision internal oscillator with two percent accuracy, also on chip, eliminates the need for an external crystal or resonator.
Silicon Laboratories is targeting applications that require high-speed data acquisition, high accuracy, low noise, and low power consumption, including imaging systems, industrial controls, medical and scientific instrumentation, wireless base stations, and automatic test equipment.

Another new 8-bit MCU line that has two A/D converters on chip is now available from Royal Philips Electronics (San Jose, Calif.). The LPC935 is the flagship chip of nine new microcontrollers in the LPC900 family for a variety of consumer devices ranging from coffeemakers and washing machines to intelligent toys. The LPC935 bridges the man-machine worlds, performing the ADC and DAC conversions between the analog and digital computing worlds, it was noted. With two A/D converters, the LPC935 can simultaneously convert and read data in two channels, giving designers the advantages of real-time data analysis, such as simultaneously reading voltage and current measurements. The LPC935 converts these signals in less than four microseconds. Each of the new LPC900 family MCUs -- the LPC904, LPC915/6/7, LPC924/5 and LPC933/4/5 -- allows customers the flexibility to select A/D conversion or high-speed digital-to-analog (DAC) output. By offering ADC/DAC functionality on chip, customers will no longer need separate ADC or DAC parts on their printed circuit boards (PCBs), since these functions are already integrated into the LPC900 family, according to Philips. The new microcontrollers also offer customers the ability to define data reading boundaries for when a response is necessary, thereby freeing up the CPU to handle other tasks. Armed with byte-erasable flash technology for enhanced flexibility and performance, the LPC900 family is based on a high-performance processor architecture that executes instructions in 167 ns at 12 MHz (a 600 percent improvement over the traditional 80C51). The LPC900 features a real-time clock and three other 16-bit counter/timers. The family also features serial communication channels such as a 400 kHz byte-wide I2C-bus, an enhanced UART, and SPI. The flexible power management features also help extend the battery life of handheld applications.

Eyeing such household electric appliance markets as refrigerators, washers, dryers, and air-conditioners, which now demand lower noise, vibration, and power consumption through intelligent motor control, NEC Electronics America Inc. has launched its uPD78F0714 and V850ES/IK1 series of single-chip MCUs for inverter-control applications. Both the uPD78F0714 and V850ES/IK1 series come with a number of safety features necessary for motor-control applications. These include an emergency shutdown pin for the inverter timer outputs to protect the motor and inverter, and a power-on clear (POC) and a low-voltage interrupt (LVI) for a controlled system shutdown and recovery in the event of power failure. The uPD78F0714 and V850ES/IK1 series are also suitable for a variety of other applications, including industrial inverter-control systems such as uninterruptible power supplies (UPS), AC servos, and general-purpose inverters, as well as electric bicycles with inverter modules, which serve as power generators for variable voltage.

The uPD78F0714 operates at 20 MHz and is the latest addition to the company's 78K0 family of 8-bit MCUs. The device packs 32 Kbytes of embedded flash memory and 1 Kbyte of RAM, along with dedicated three-phase inverter functions.
There is also an on-chip debug function that allows developers to design and debug products in circuit via a serial connection.

Earlier this year, STMicroelectronics (Lexington, Mass.) extended its ST7 family of 8-bit microcontrollers with the launch of its ST7MC family, specifically intended for the control of three-phase induction and permanent magnet brushless motors (including compressors). Packed into the MCU chip is ST's motor control peripheral, the MTC, which consists mainly of a three-phase pulse width modulator multiplexed on six high-sink outputs, with a back EMF (BEMF) zero-crossing detector and a co-processor unit for the sensorless control of permanent magnet brushless direct current (BLDC) motors. The MTC's input pins can also be configured for Hall, tachometer, or encoder sensing. In addition, comprehensive filters and settings allow the control of any star- or delta-wound motor, from 12V to 240V, in various control topologies (six-step/sine wave, current/voltage, PAM/PWM). Air conditioners, refrigerators, washing machines, automotive fans and pumps, HVAC (heating, ventilation and air conditioning) fans and actuators, office automation, electric vehicles, and low-end to medium industrial drives are the major targets for this new microcontroller family. With increasing pressure to minimize power consumption, the ST7MC will allow improvements in the energy efficiency of pumps, fans, and compressors in these and other industrial, appliance, and automotive applications. Four different power-saving modes minimize consumption in the device itself, according to STMicroelectronics.

Also packing lots of interesting peripheral functions into a new 8-bit MCU is ZiLOG, Inc. (San Jose, Calif.). Touting a high level of integration with low overall system cost, the company's Z8 Encore! XP integrates a full-range temperature sensor and a trans-impedance amplifier, which have their outputs processed by a sigma-delta A/D converter. "With the sigma-delta approach, the full 10-bit accuracy of the converter is realized, even with the microcontroller running full speed, and with no external signal filtering components required," said ZiLOG director of product marketing Michael Gershowitz.

The Z8 Encore! XP is a scalable family of MCUs covering program memory sizes from 1KB to 64KB. It uses ZiLOG's register-to-register based architecture and features an 8-channel, 10-bit sigma-delta A/D converter. Unlike A/Ds that are based on SAR implementations, the Encore A/D doesn't require halting the processor to get the full 10-bit accuracy. The MCU provides further system cost savings by fully integrating the anti-aliasing filters internally, saving on external component count as well as delivering more accurate bandpass performance. With additional integrated features such as an on-chip precision internal oscillator, non-volatile data storage memory, and large working memory, the MCU performs favorably versus competing solutions. The MCU comes in 20- or 28-pin SOIC, SSOP, and PDIP packages. The units are priced at 89 cents for a 1KB device in 20-pin SOIC and $1.44 for a 4KB device in 28-pin SOIC, each in 10K quantities.
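The oversampling principle behind that claim can be illustrated with a toy software model. What follows is a minimal Python sketch of a generic first-order sigma-delta converter, not ZiLOG's actual design; the input level, oversampling ratio, and function names are assumptions made for the example. The modulator emits a one-bit stream whose local average tracks the input, and a purely digital averaging (decimation) stage then recovers a multi-bit result, which is why the scheme needs no external analog filtering.

```python
import numpy as np

def sigma_delta_modulate(signal):
    """First-order sigma-delta modulator: map each input sample (in [-1, 1])
    to a +1/-1 bit. The integrator pushes quantization noise up in
    frequency, away from the slowly varying signal."""
    integrator = 0.0
    bits = np.empty(len(signal))
    for i, x in enumerate(signal):
        bit = 1.0 if integrator >= 0.0 else -1.0
        integrator += x - bit  # feed back the quantized output
        bits[i] = bit
    return bits

def decimate(bits, osr):
    """Average blocks of `osr` one-bit samples into one multi-bit sample."""
    n = len(bits) // osr
    return bits[: n * osr].reshape(n, osr).mean(axis=1)

osr = 64                              # oversampling ratio (assumed)
x = np.full(32 * osr, 0.3)            # a DC input at 0.3 of full scale
result = decimate(sigma_delta_modulate(x), osr)
print(result[:4])                     # each value lands close to 0.3
```

Because the averaging happens entirely in the digital domain, the processor can keep running at full speed while conversions proceed, which is the property the article attributes to the Encore's converter.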
Chapter 13: Money, Banking and the Creation of Money

I. Functions of Money
II. The Supply and Demand for Money
III. United States Private Banking System
IV. The Federal Reserve System
V. Recent Developments
VI. Fractional Reserve System/Money Creation
VII. The Monetary Multiplier
VIII. Additional Reading

II. The Supply and Demand for Money

A. Three categories of the supply of money
1. M1 = currency, coins, and demand deposits (checking accounts).
2. M2 = M1 plus near monies such as small time deposits (savings accounts) and short-term government securities.
3. M3 = M2 plus large time deposits (over $100,000).

B. What backs the dollar?
1. In 1971 President Nixon took the U.S. off a partial gold standard. See The Gold Reserve Act.
2. It is a debt of the federal government.
3. It is backed by faith in the government's ability to control inflation.
4. Its value is determined by acceptability (it is legal tender and scarce).
5. Commodity money, such as tobacco used as money in the Virginia colony, has intrinsic value of its own.
6. It's fiat (by decree of the government) money.
a. Fiat money first appeared in the U.S. when Congress issued Continental currency with no gold backing.
1) Gresham's Law (bad money chases out good) caused it to disappear after the war.
2) States issued their own money.
3) See Ben Franklin on Paper Money Economy.
b. Greenbacks (demand notes on the Treasury), green in color, were issued as legal tender during the Civil War.
c. An intermittent return to the gold standard (only gold is money) followed, although the U.S. left gold for good in 1971 as high inflation caused a run on gold that the U.S. Treasury could not satisfy.
7. Today, coins are called token money because they have little intrinsic value.
9. Types of money has more information.

C. The Demand for Money
1. Transactions demand, Dt, results because people hold money, often in a money market account, to use as a medium of exchange.
2. Asset demand, Da, results because people accumulate money, often held in an investment account, to buy assets.
3. The demand for money is Dm = Dt + Da.
4. Interest rates are set in the money market.
5. For more information, visit Demand for Money.

More Interesting History
1. A Brief History of Money: Or, how we learned to stop worrying and embrace the abstraction
2. 300 Years of Financial Crises, 1620-1920
3. Gold Timeline
4. The Continental Currency Crisis of 1779 and Today's European Debt Crisis (4/11/14)
5. Central Bank Crisis Management During Wall Street's First Crash in 1792 (5/14)
6. The Slump that Shaped Modern Finance, The Economist (4/14)
7. Infographic History of U.S. Money
8. History of the Gold Standard
9. History of Money
10. A Brief History of U.S. Banking Problems and The Essence of Money, a Medieval Tale (7:36) provide a little background.
(Chart source: The Economist, 4/5/14)

III. United States Private Banking System

A. Two kinds of banks
1. Commercial banks offer demand deposits (checking accounts).
2. Savings and loan associations used to specialize in time deposits (savings accounts) and home mortgages. Now, because of deregulation during the early 1980's, they are similar to commercial banks.
B. Federal deregulation contributed to banking difficulties in the 1980's.
C. Visit the brief history provided by the Federal Reserve Bank for a time line of the U.S. banking system. Be sure to point at each date to see what happened during that period.

IV. The Federal Reserve System

A. Organization
1. Board of Governors oversees the Federal Reserve System.
a. Seven governors.
b. Governors are appointed by the President and confirmed by the Senate.
c. The chair is appointed by the President for a four-year term.
1) To foster independence, the term does not coincide with the President's.
2) Other board members are appointed to 14-year terms on a staggered basis to ensure an experienced board.
2. Federal Open Market Committee
a. Membership consists of the Board of Governors and 5 of the 12 Federal Reserve bank presidents, with the N.Y. president always a member because N.Y. City is the financial center for U.S. international trade.
b. The Committee tries to affect interest rates by affecting the supply of money through buying and selling U.S. government bonds (see Chapter 15).
3. Federal Advisory Council: 12 prominent commercial bankers, one from each district, who advise the Board of Governors.
4. Twelve Federal Reserve Banks
a. The United States is divided into 12 homogeneous districts, and each has its own bank.
b. Bank for the federal government.
c. Bank for member banks.
(Graphic courtesy of the Board of Governors of the Federal Reserve.)
5. Member commercial banks.
6. Nonmember commercial banks and thrifts, which are regulated by other federal agencies.

B. Functions of the Federal Reserve
1. Regulate the money supply.
3. Oversee the financial system.
4. Check collection and clearing.
5. Fiscal agent for the government.
6. Supervise (audit) member banks.
7. Hold reserves (deposits) for member banks.
8. Compile economic statistics, such as the 2010 Beige Book, a quarterly summary of each district's recent economic activity.
9. Many Federal Reserve publications are free.
10. See The Federal Reserve System: Purposes and Functions.

C. History
1. The Bank that Hamilton Built
2. Andrew Jackson took on the eastern bankers and vetoed the charter extension of the Second Bank of the United States because he felt it had excessive power over farmers.
3. Theodore Roosevelt took on the corporate monopoly trusts that controlled railroad rates and routes and thus destroyed small towns and farms.
4. Nixon Shock: inflationary pressure and French President De Gaulle forced the U.S. to stop the convertibility of dollars into gold.
5. A Century of U.S. Banking, Fall 2013, Ben Bernanke
6. The Federal Reserve Was Created 100 Years Ago: This Is How It Happened (12/21/13)
7. Cooperation, Conflict and the Emergence of a Modern Federal Reserve (4/17/14)

D. Current Events Readings
1. Philadelphia Reflections: Whither, Federal Reserve? is a well done, concise history of banking in the United States.
2. The Real Threat to Fed Independence, Henry Kaufman, Wall Street Journal. The financial giant cuts the industry no slack.
3. Visual Guide to the Federal Reserve: why we have a Fed.
4. Phony Currency Wars
5. The Current Events Internet Library has many up-to-date economic articles.

Profit isn't usually listed as a function of the Fed, but $325 billion from 2010-13 isn't chicken feed.

V. Recent Developments

A. Banks and thrifts are the only institutions whose checking accounts are not restricted as to check size and number.
1. The Financial Institutions Reform, Recovery and Enforcement Act of 1989, which liberalized banking laws, caused a decline in their importance, although reforms resulting from the Great Recession of 2008 could change this.
2. Consolidation has caused their numbers to decline and their size to increase.
3. Many bank/thrift services are now performed by insurance companies, pension funds, and securities companies.
B. Globalization of financial markets.
C. Shadow banking system
1. Rise of the Shadow-Banking System
2. Long Term Capital Management collapsed in the late 1990s.
D. Some politicians think the Federal Reserve System is too independent.
1. Congress Is Politicizing the Fed (Jan 25, 2010)
2. Central bank independence versus inflation: this often-cited research published by Alesina and Summers (1993) is used to show why it is important for a nation's central bank (i.e., its monetary authority) to have a high level of independence. Their chart shows a clear trend toward a lower inflation rate as the independence of the central bank increases. The generally agreed-upon reason independence leads to lower inflation is that politicians have a tendency to create too much money if given the opportunity to do so. The Federal Reserve System in the United States is generally regarded as one of the more independent central banks.
3. The presidential elections of 1828 and 1832 were about the need for a strong central bank.

VI. Fractional Reserve System/Creation of Money

A. Commercial banks are required to keep a reserve (cash) of about 12% of their demand deposits (checking accounts) at their bank or on deposit with the Federal Reserve (required reserves). The remainder (excess reserves) may be loaned out even though they support deposits.
B. Money is created by these loans as long as the demand deposits (DD) created by them stay within the banking system, that is, as long as the money loaned is redeposited as a DD in a bank within the system. The banks owe the demand deposits created by the loans to each other, and these interbank debts are canceled with a bookkeeping entry. It should be pointed out that the demand deposits created by such loans are spent, and goods transferred, just as if the transaction involved currency.
C. Example: Bank A has $50,000 in demand deposits. A reserve requirement of 10% would yield required reserves of .10 x $50,000 = $5,000. If Bank A had $7,000 in reserve, it could loan up to $2,000 in the form of demand deposits. Suppose Bank B does exactly the same, with both banks' customers depositing their DD in the other bank. The banks would owe cashed checks to each other, would cancel the interbank debts, and money has been created.
D. The system works in reverse, with money destroyed if reserves leave the system.
E. Required reserves, reserves not loaned, and loans of cash (reserves) represent a leakage which eventually stops money supply growth.
F. Readings and video
1. The Truth is Out: Money is Just an IOU and the Banks are Rolling In It
2. Money Creation in the Modern Economy
3. Money Creation Video (13:03) is well done.
(Chart: source and prediction of coming inflation, A. Laffer, 6/11/14)
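A short simulation can make the re-lending chain in example C concrete. This is an illustrative sketch only; the $2,000 of excess reserves and the 10% requirement come from the example above, while the function name and the number of rounds are arbitrary. Each round, a loan becomes a new demand deposit at the next bank, which holds back its required reserves before lending the rest.

```python
def deposits_created(excess, reserve_ratio, rounds=200):
    """Total new demand deposits created when excess reserves are loaned,
    redeposited, and re-loaned, with no cash leaking out of the system."""
    created = 0.0
    loan = excess
    for _ in range(rounds):
        created += loan              # each loan becomes a new demand deposit
        loan *= 1 - reserve_ratio    # the next bank lends all but reserves
    return created

# Bank A's $2,000 in excess reserves under a 10% requirement:
print(round(deposits_created(2000, 0.10)))  # 20000, i.e., excess x (1/R)
```

Shrinking any round's loan, whether from cash withdrawals or banks holding extra reserves, models the leakage in item E and stops the expansion short of this upper limit.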
VII. The Monetary Multiplier

A. An infusion of reserves into the system by the Treasury, as directed by the Federal Reserve, can be loaned a number of times by the commercial banking system. For example, the Federal Reserve may buy a $100 Treasury bond from Ms. A, who deposits the Federal Reserve check (reserves) into Bank A.
B. Bank A's new DD of $100 requires it to keep $10 (10%) in reserve, leaving $90 excess to loan to Mr. B, who deposits it in Bank B.
C. Bank B needs to keep only $9 ($90 x .1) in reserve and may loan out $81.
D. This process continues, and as long as the demand deposits being created by the loans stay within the commercial banking system, interbank debts are canceled and money has been created.
E. The monetary multiplier (M) sets the upper limit of the expansion.
1. R = reserve requirement = 10% = .1
2. M = 1/R = 1/.1 = 10
F. In the above example, the total amount of DD created beginning with Bank A's $90 in excess reserves would equal excess reserves x M = $90 x 10 = $900. If the $100 infusion by the Federal Reserve is included, the increase is 10 x $100 = $1,000. (The short sketch at the end of this chapter checks this arithmetic.)
G. Videos: a 2:43-minute overview and 3:02-minute practice problems.
H. Visit The Banking System and the Money Multiplier from Jay Kaplan of the University of Colorado at Boulder for more information.

VIII. Additional Reading

A. Money creation
1. Money creation from Wiki
2. Recollections of Pine Gulch reviews money.
3. Banks Create Money
4. The (2nd) Deleveraging: What Economists Need to Know About the Money Creation Process
B. More history
1. A Brief History of U.S. Banking Problems, and Dirk Bezemer's The Post-Bubble Economy, a four-part video explaining credit evolution (7/6/13)
2. A 9-page summer read for historical context in understanding the history of monetary policy: the famous Wizard of Oz book is economic allegory/satire on who will create U.S. money, the federal government as "money" or a central bank as "debt." This issue was the foundation for the "Greenback Party," the "Populist Party," "free silver," and William Jennings Bryan's three presidential campaigns (and his "Cross of Gold" speech). A film version of this allegory, The Secret of Oz, can be viewed online (winner of two 2010 international Best Documentary awards).
3. The Nixon Shock, Business Week (8/14/11): how Nixon stopped backing the dollar with gold and changed global finance, a 40-year-old decision that still echoes in Greece, Ireland, and the U.S.
4. Buttonwood, Forty Years On, The Economist (8/11)
5. The Mississippi Bubble of 1720 and the European Debt Crisis: are we getting better at controlling inflation?
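As promised in item F above, here is a minimal check of the multiplier arithmetic in section VII; the figures are the chapter's own, and the code itself is purely illustrative.

```python
R = 0.10           # reserve requirement
M = 1 / R          # monetary multiplier: M = 1/R = 10

excess = 90        # Bank A's excess reserves after the $100 deposit
print(excess * M)  # 900.0 in new demand deposits created by lending
print(100 * M)     # 1000.0 once the initial $100 infusion is included

# The same limit is the sum of the lending rounds in items B through D:
# 90 + 81 + 72.90 + ... = 90 x (1 + 0.9 + 0.9**2 + ...) = 90 / 0.10 = 900
```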
ON A ROAD NOT FAR FROM MORGANTOWN, West Virginia, my guide pulled over to show me the peculiar color of a certain river. It was orange. The rocks and creek bed were a hue somewhat brighter than rust but duskier than the reflective vests worn by utility crews. Years of drainage from coal mining tailings, high in the acid produced during the washing of the coal, had killed everything in the watercourse, rendering the water a moving hazard and contributing to the economic decline of the area. Coal had also sickened the bodies of miners, as well as the atmosphere.

Yet the years when rivers like the one I saw became industrial sewers were some of the most prosperous in the history of the state of West Virginia, when men and women were relatively well employed, cashing their paychecks to pay for groceries and rent, televisions and cars, medicine and property taxes. Every conventional accounting of the economic significance of the coal industry includes the wages paid, the small businesses sustained, and the quarterly profits of corporations, but not the rotted lungs, or the polluted waters, or the rising oceans that inundate low-lying slums in Bangladesh. These other effects, equally direct, receive no valuation when coal’s contribution to economic growth is tallied. They are invisible to the gross product of West Virginia or the United States.

Gross Domestic Product (GDP) might seem benign enough. After all, it’s just a number. But it has emerged as the principal way the public evaluates a nation’s status and whether times are good or bad. News organizations report rising GDP as a sign of recovery, and stagnant or declining GDP as a portent. But GDP mismeasures all things. It is about as indicative of human progress as a body count is of success in war; it’s not only blunt, but also blind to the destruction behind the number. It denies that “growth” makes us poorer in the long run and in the short run benefits only a few.

The inventor of GDP, the economist Simon Kuznets, never intended it as an indicator of progress or happiness. Kuznets sent a report to Congress in 1934 that included a new way of reporting on the state of the economy, but cautioned that “the welfare of a nation can . . . scarcely be inferred from a measure of national income.” Yet advocates of economic growth seized Kuznets’s indicator and simply chose to ignore his apprehension. They reduce national welfare to national income, regardless of the social distribution or ecological effects of wealth. They look at history with the same foggy lens, missing the social relations behind the history of capitalism, as though everything preceding the Industrial Revolution was just a million-year recession. In their view, the problem with European feudalism was that it generated too little wealth, not that it was a social system built on violence. They see the steam engine as the invention that made possible the first explosive increase in worker productivity—rather than as a machine that created a poor and hostile working class in Britain and the United States. GDP soared, but the first industrial workers lived in sickness and starvation.

However, when we talk about national wealth, we tend to stress just the opposite—that it benefits everyone because a rising tide lifts all boats—when, in reality, as Robert Reich once quipped, it only lifts the yachts. Again, GDP obscures the truth. For example, divide our country’s GDP in 1790 (preindustrial) and 1890 (industrial) by the U.S.
population at those times, and the increase per person appears remarkable. But these gains weren’t distributed equally. The apparent rise in individual income during that century also hides the immense poverty and environmental destruction that came as a consequence of growth. It tells us nothing of the violence between workers and employers for livable wages, an eight-hour workday, and basic factory safety. Affluence can be shared, or hoarded. Corporate profits do not create equitable living standards; only equitable public policy does that.

Consider the sale of a two-dollar t-shirt by a big-box store. The sale instantly becomes part of GDP, but there would have been no sale were it not for the undercompensated labor of the Cambodian woman who made the shirt. A Cambodian woman who, in one year, stitches and sews $195,000 worth of goods is paid $750. That calculates to a share of roughly four-thousandths of every retail dollar. Meanwhile, many Cambodian workers aren’t paid enough to adequately feed their families.

Thoroughly globalized products present a problem for GDP as a measure. After all, what is a “domestic product” when the citizenship of product and profit are difficult to determine? The t-shirt’s costs stay in one country and its profits go to another. If the true cost of producing the t-shirt became part of its price, few households in the United States could afford to buy one. The profitability of the t-shirt and its volume of sales for the big-box store depend on below-subsistence wages and the absence of environmental laws. Economists call this externalizing—when the costs of production are dumped on the public, while the profits remain in private hands. To the extent that GDP represents millions of products shared across national economies, it is a highly subsidized number—in which other people and other places sustain the true costs of growth.

For the past three hundred years, investment capital has besieged the earth, creating orange rivers and blackened lungs, yet the idea that growth is good remains more popular than ever. It’s viewed as a social welfare policy that costs taxpayers nothing—there’s no need to redistribute income if everyone is always in the process of getting rich. The antidote to socialism, in the American experience, has been supercharged capitalism, but a quick look at the last thirty years through the lens of the financial collapse of 2008 tells a different story.

Real wages peaked in 1972, and thereafter living standards for American workers declined. Corporations undercut the modest prosperity of the working class when they realized that workers in other lands could turn the same bolt and stitch the same sleeve for a fraction of the cost. Corporate leaders demanded that domestic workers accept slashed benefits and stagnant wages or lose their jobs. Given that high wages drive consumption, GDP should have plunged during the ensuing decades. But it did the opposite. Credit cards and subprime mortgages fueled this expansion. In other words, rather than create the conditions for real growth, banks and government developed a system that encouraged people with declining incomes and savings to consume more. They favored a short-term spike in GDP over actual prosperity. The rest we know.
If we counted up all the damage done in the name of growth—the unions busted when jobs went abroad, the lower wages and depleted benefits workers accepted for the same reason, the foreclosed mortgages sold by lenders in order to boost their earnings for shareholders; if we tally the rainforests cut, the ocean floors raked over, and the drought damage in Texas due to the highest CO2 concentrations in human history—the numbers would reveal a falling line. According to Friends of the Earth, the decline would amount to a vanished $12,500 per capita.

But if GDP and the assumptions behind it are broken, what would be a better measure of human welfare? You wouldn’t know it from the terminology and tenor of this campaign season in which GDP remains the unquestioned measure of national economic health, but there are a number of methods for measuring true progress rather than mismeasuring so-called growth. The Organization for Economic Cooperation and Development has invented its own Leisure-Adjusted GDP by adding in the recreational hours of workers; in 2001, Luxembourg, Norway, and Ireland led the world. Others include the Index of Sustainable Economic Welfare, which factors in resource depletion, pollution, and income distribution, and the Genuine Progress Indicator, which tries to determine if economic growth has improved a country’s welfare. These kinds of estimates—and there are many others—value “quality of life” over “standard of living,” and they value healthy ecosystems.

One thing capitalist economies almost never measure—and therefore do not evaluate—is the benefits, or “services,” they receive from environments at no cost. In conventional economic thinking, a forest has the value of the board feet it contains, but a forest also holds billions of gallons of water, which prevents flooding and keeps rivers and streams running clear. The trees hold soil in place that would otherwise roll down the mountain, muddying streams and rivers and destroying fish and bird habitats. Since fish such as salmon travel vast distances and spend half their lives in the oceans where they are caught, forest policy can end up affecting an industry seemingly far removed from the fate of Douglas firs. The totality of these ecosystem services, and the organisms that provide them, is called natural capital.

Investing in natural capital, and coming up with a measure of its health and viability, would give us a different way of looking at progress. For instance, we tend to think of investment as a way to make money, but it’s really a restraint on spending, because it requires delaying consumption now for the possibility of future gain. Investing in natural capital means not cutting the timber, leaving the fish in the ocean, keeping the mountaintop intact. The “interest” from this leave-it-alone policy consists of the ecological services (and less tangible returns) provided by the natural capital. Natural capital, in other words, yields not when it is extracted but when it remains in place, doing what it has always done.

There is rising interest in natural capital, and some of it comes from Google, which is developing an Earth Engine that would map things like soil fertility, deforestation, and other information having to do with the monitoring and measurement of environmental change. A team from Stanford University has invented software (called InVEST or Integrated Valuation of Ecosystem Services and Trade-offs) that puts a value on the benefits of mountain meadows and lily ponds.
At the very least, these tools offer data to offset the claims of businesses focused on extraction. And yet global ecological valuation also carries the danger that its definition of natural capital will blur into the same old drive for profitability. Corporations could claim to preserve natural capital in some self-serving way to fool the public into thinking that they’re doing something that they’re not. Worse, these tools might end up valuing only those services useful to humans. To the extent that we place our interests and needs above the inherent worth of a blooming desert or undeveloped waterfront, natural capital would presumably do nothing to prevent their destruction, particularly if they’re viewed as somehow insignificant. For instance, one might ask, “What has this salt marsh done for me lately?” If the question comes down to economic value, the salt marsh might fail to justify its existence.

There are other kinds of alternative measures. These are the indices of happiness that take into consideration the available resources per person, as well as the necessary food, water, open space, and services that we all need. These include the Happy Planet Index, Gross National Happiness measure, and National Accounts of Well-Being. Happiness indices include thriving environments because they are necessary for thriving humans and place well-being above profit. This might be the most radical of measures, since it posits a standard for progress that exists outside of the economy. It says that there are behaviors other than accumulation and consumption, and systems for meeting needs other than capitalism.

A touchstone concept for every index of national happiness comes from the nineteenth-century political economist and moral philosopher John Stuart Mill and his theory of the “stationary state.” Mill pointed out that “the increase of wealth is not boundless,” and “we are always on the verge” of its end. Thus technological change runs up against diminishing returns, and so-called resource abundance can turn into acid-washed rivers, exploited forests, and depleted oceans. For Mill, the idea that life is and must always be a constant “trampling, crushing, elbowing, and treading on each other’s heels” for a dwindling bit of wealth led him to think that the cooling off of the economy into entropic stagnation might not be such a bad thing.

What might a post-growth economy look like? It would not be a utopia—an immediate effect would likely be increased unemployment and social unrest. But we are fully capable of creating equality and economic security for everyone, if we so choose. Workers might be given the choice of more leisure time rather than higher wages, with a corresponding drop in consumption for its own sake. Governments would make investments in natural capital (as they do now in national parks and wilderness areas) to maintain the ecological services provided by migrating birds and watersheds. Businesses would recycle everything, since it wouldn’t be possible to add to the total amount of matter in the economy—only change it into different forms. Schools and communities would teach sufficiency, not accumulation, as a primary value.
If we lack the imagination to see other modes of human prosperity, all of our hometowns and natural places could end up like parts of West Virginia and so many other environmentally compromised areas around the country and world—our forests and fields ripped out like entrails, our rivers deadened and stained, our working lives spent reproducing a material world that is killing us. GDP obscures the social relations that make up economic growth. It allows us to continue in the delusion that the domestic product can be isolated from the global economy, and it distracts us from looking at the planet’s degradation by upholding consumption as a standard of progress. It might seem like quite a lot for a simple number broadcast over the radio in the car on the way to work, but people can only evaluate what they can measure. Measuring differently is itself a revolutionary act.
Aside from the massive challenges with the highly diffuse nature of incoming sunlight caused by the second law of thermodynamics (previous page), renewable energy faces a number of additional challenges, any one of which could derail the entire process on its own. This page will provide some more perspective on just how difficult it will actually be to achieve a sustainable energy future.

After the second law of thermodynamics, arguably the biggest challenge with renewable energy sources is their intermittent nature. Solar and wind power only generate electricity when the sun is shining or the wind is blowing. Yes, there are ways of storing energy for use on windless nights (such as pumping water up to a reservoir using surplus electricity and letting it run down through a turbine when needed), but these methods greatly compound the lack of economic competitiveness of renewables against the awesomeness that is fossil fuels and are also greatly limited by other factors such as geography. No, the odds are that we will always need to burn some kind of fuel during the times that renewables are not producing any power.

And yes, this just brings yet another problem: even more expensive electricity. You see, due to very high capital costs, a standard power plant can only be economical if it continuously generates electricity throughout all the years in its lifetime. If power plants are reduced to backup generators for renewables and operate, say, at only 50% of their current loads, it is safe to say that the electricity they generate will become twice as expensive (probably more due to regular startups, shutdowns and sub-optimal operation). Sure, it can be reasoned that this sharp increase in the cost of fossil fuel electricity (caused exclusively by renewables of course) can actually appear to make renewables cost competitive, but the vicious energy price inflation cycle that will result from this dynamic will crash the global economy long before any meaningful increase in installed renewable energy capacity is achieved.

The alternative is to just have a truly massive electricity grid to spread electricity over a very wide area from wherever the sun happens to be shining and the wind happens to be blowing. Unfortunately though, this is perhaps the most unrealistic idea of all. A proposal for a single electricity line to transmit 3000 MW of electricity for a distance of 1600 km (which will require a clear channel 60 m in width for the entire distance) quoted a cost of $3 billion – an amount sufficient to simply build 3000 MW of fossil fuel power plants wherever they might be needed. And yes, we will need thousands of much longer electricity lines if we are to successfully construct such a super-grid. Also just imagine the levels of coordination and international collaboration required to make this happen. The way in which Western politicians are currently handling our self-imposed debt crisis should offer another stern reality check.

Finally, super-grid or large scale storage options might be theoretically feasible for smoothing out short term intermittency such as night and day variations, but they have no chance of compensating for slow seasonal variations. When looking at solar power for example, the electricity that can be produced is linked very directly to the seasonal solar influx. If you are located on the equator, this is not a problem, but the vast majority of global energy is consumed thousands of miles north of the equator.
According to these data, the minimum winter solar influx at a latitude of 40 degrees (which is typical for the USA) is only a third of the maximum summer influx. At 50 degrees north (which is typical for Europe), this summer-to-winter ratio increases to 5.7 and then quickly blows up to infinity as we approach the arctic circle. If you then also acknowledge that energy usage normally peaks in the cold winter months, the problem becomes pretty obvious.

Up-front capital costs

The next major challenge faced by renewable energy involves the massive upfront capital costs in terms of money, materials and energy. A normal fossil fuel power plant also has substantial capital costs, but a significant amount of the total cost is spread throughout its entire lifetime as fuel, operating and maintenance costs. This is not the case for renewables like solar and wind where virtually the entire cost in terms of money, materials and energy (which, on average, is probably around 5 times more than a fossil fuel plant of similar wattage) must be paid up-front. And yes, this is a major problem.

Just looking at the up-front energy costs, it can be roughly estimated that replacing all of our fossil fuel power plants with solar power plants that last for 30 years and deliver three times the energy it took to construct and install them over that period will require the total amount of electricity that we currently generate in an entire decade. So, even if we really tighten our belts and pledge 10% of the electricity we generate towards the construction of renewable energy resources, we will need an entire century to get the job done (the arithmetic is sketched below). And yes, this is of course based on a very shaky assumption that world energy consumption will remain constant.

Also, this rough estimation has not even taken into account the fact that we will probably need to revamp our entire power distribution system to accommodate the intermittent nature of renewable energy sources. This can easily require capital in the same order as that required to construct all of the renewable energy power plants. If you then also factor in the monetary and resource capital required at this time when budgets are already highly strained and a wide array of vital resources are becoming increasingly scarce, even the relatively modest penetration rate of renewable energy forecast by most leading institutions suddenly begins to look just as overoptimistic as the laughable forecasts of perpetual economic growth routinely churned out by world-leading economists. I don't want to sound overly pessimistic or anything, but the chance of the world achieving the economic expansion suggested in the graph above is essentially 0%. More on this here.
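To make the decade-and-century arithmetic above explicit, here is a minimal sketch under the stated assumptions (30-year plant life, lifetime output equal to three times the embodied energy, and 10% of current generation pledged to the build-out). The variable names are mine, and the numbers are the post's own round figures, not measured data.

```python
# Rough energy-payback arithmetic behind the estimate above (illustrative only).
plant_lifetime_years = 30     # assumed solar plant lifetime
energy_return_ratio = 3       # lifetime output / energy to construct and install

# Embodied energy of a full build-out, expressed in years of current world
# electricity generation: lifetime / return ratio = 30 / 3 = 10 years.
embodied_energy_years = plant_lifetime_years / energy_return_ratio
print(embodied_energy_years)  # 10.0 -> "an entire decade" of generation

# If 10% of annual generation is diverted to construction each year:
build_time_years = embodied_energy_years / 0.10
print(build_time_years)       # 100.0 -> "an entire century"
```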
The laws of receding horizons and diminishing returns

OK, this heading looks pretty weird, but it will all make sense in a moment. Basically, the laws of receding horizons and diminishing returns shine the bright spotlight of reality on all of those pretty graphs where the cost of renewable energy continuously goes down, the cost of fossil fuels continuously goes up and a very happy crossing of the lines happens at some point in the not-too-distant future. Those graphs might look very promising, but unfortunately they are complete nonsense.

Firstly, we have to acknowledge the typical subjective bias with which charts such as the one given above are constructed. After all, we all want solar energy to become cost competitive so that we can keep on consuming at ever increasing rates and, because we are human, this desire will often influence our reasoning. For example, reputable Solarbuzz calculations show that the current 14 cents per kWh cost shown above is a highly optimistic estimate even for full scale industrial solar plants. The chart, however, appears to be promoting residential solar power installation which, on average, would result in a levelized cost of electricity (LCOE) about three times higher than the given price (a generic LCOE calculation is sketched at the end of this section). Indeed, if that pretty yellow band were inflated by a factor of 3, it would look distinctly less pretty.

Also, the cost of fossil fuel electricity will not rise much over the coming years. This is because only about a third of the cost of fossil fuel electricity comes from the cost of the fuel itself and, since we still have lots of coal and shale gas, these prices will not rise all that much if left to the free market. Highly efficient and very economical combined cycle gas fired power plants combined with large shale gas finds might even bring down the cost of fossil fuel electricity. Even the very scary prospect of peak oil will not cause sustained fossil fuel price increases simply because economic growth cannot continue without cheap oil. Every time oil becomes too expensive (like in 2008), our economy will enter into a recession and oil prices will go back down due to decreases in demand.

On the other hand, if we get our act together and start to really push renewable energy commercialization and climate change mitigation measures such as carbon capture and storage (CCS) through stiff carbon taxation, the price of fossil energy will start rising significantly – firstly just because of the tax, but eventually also because of the part-load operation and grid infrastructure revamps enforced by an increase in solar and wind power. The first and most obvious problem with this is simply that the enforced austerity brought by this artificial increase in the energy price will be met with massive and potentially very costly resistance (see Southern Europe as a good example). In addition, countries that do this first will shoot themselves in the foot by making their own energy more expensive and thereby hurting their own competitiveness. This is one of the major reasons why nothing meaningful is being done about climate change.

But the real kicker is the fact that, as the energy price rises, so will the cost of constructing highly energy and resource intensive renewable energy plants and the highly advanced distribution grids they will require. There are also real concerns about medium-term shortages of rare earth metals, which are indispensable in many renewable energy technologies. The obvious conclusion is therefore that more installed renewable energy will provide an ever-strengthening headwind to further renewable energy installation. This is the law of receding horizons: the horizon – that magical point where renewables become cheaper than fossil fuels – is continuously pushed further away as we struggle towards it. Indeed, chasing the point where renewable energy honestly becomes cheaper than fossil fuels is much like chasing the pot of gold at the end of the rainbow – it just stays on the horizon, no matter how fast you run towards it.
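As promised above, here is a generic sketch of how an LCOE figure like the ones quoted in this section can be computed: the present value of lifetime costs divided by the present value of lifetime output. This is a textbook-style approximation with made-up inputs for a roughly 1 MW plant; it is not Solarbuzz's actual model, and every parameter value here is an assumption for illustration.

```python
# Generic levelized cost of electricity (LCOE) sketch with hypothetical inputs.
def lcoe(capex, annual_om, discount_rate, lifetime_years, annual_kwh):
    """Levelized cost = discounted lifetime costs / discounted lifetime energy."""
    pv_costs = capex          # capital is paid up-front, so it is not discounted
    pv_energy = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** year
        pv_costs += annual_om / discount
        pv_energy += annual_kwh / discount
    return pv_costs / pv_energy

# Illustrative ~1 MW plant: $2M up-front, $20k/year O&M, 6% discount rate,
# 25-year life, 1.8 GWh/year output -> roughly $0.10 per kWh.
print(lcoe(2_000_000, 20_000, 0.06, 25, 1_800_000))
```

Tripling the up-front cost per kWh of output, as the residential case in the text suggests, roughly triples the result, which is why the distinction between industrial and residential installations matters so much for those cost-crossing charts.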
And then we have the law of diminishing returns to further compound this problem. Initially, we will be able to construct renewable energy solutions in the areas where there is the most sun and wind, the most economic muscle and the most political will. In addition, we will be able to incorporate these renewable energy sources into our existing power grids simply because the intermittent power spikes they create will initially be small relative to the total fossil fuel power generation. However, as time goes by, these initial easy gains will fade away as we are forced to start constructing renewable energy plants in much less ideal areas and the intermittent nature of renewable energy starts to enforce very costly and complex changes to our existing power infrastructure. In short: the greater the relative contribution of renewable energy becomes, the greater the magnitude of the challenges facing the installation of further renewable energy infrastructure. This is the law of diminishing returns: initially, we will get relatively large returns on our renewable energy investments, but the aforementioned factors will cause these returns to gradually diminish as time goes by.

In summary, I think it is safe to say that the laws of receding horizons and diminishing returns reduce the chances of wild solar expansion like that shown below to virtually 0%. Such an expansion can only happen if more solar power makes the installation of further solar power progressively easier, but, unfortunately, the complete opposite is true.

So, there you have it – a nice summary of the challenges facing renewable energy: the second law of thermodynamics dictating that renewable energy will never be able to compete with fossil fuels on a level playing field, the intermittent nature of renewable energy bringing immense practical challenges, the tremendous up-front capital costs that will greatly hinder the rapid widespread deployment of renewable energy technology, and finally the dangerous combination of the laws of receding horizons and diminishing returns which will make our fundamentally enforced energy transition more and more difficult as time goes by.

I sincerely hope that you now understand just how difficult the construction of a sustainable energy future will be and just how incredibly amazing the age of practically unlimited fossil fuel energy really was. That being said, however, the aim of these two pages is not to send you into a depression, but rather to provide the dose of objective reality that might just get you to start preparing yourself and your family for the end of the age of energy abundance. The new age of energy realism most certainly does not need to be cataclysmic and, as discussed on the next page, there are many ways in which you can manage this new age while still maintaining a very high standard of living. Let's take a look.
For 73-year-old fisherman Ni Tingrong, who lives in a village on the northwestern shore of Taihu Lake, China's third largest freshwater lake, the idyllic scenes portrayed in the folk song "Beauty of Tai Hu" are confined to memory. The lyrics go something like this: "Green reeds at the water's edge, rich in fish and shellfish at low tide, the lake water weaves through irrigation nets and the fragrance of fruit and rice wafts up from the lake." But the modern-day reality is far from poetic.

"Just 20 years ago, I fished in the lake and the rivers nearby almost every day," said Ni, who began working as a fisherman at the age of 14. "But pollution has only left us blue-green algae and the odor of dirty water; the fish stocks are drying up."

Covering an area of 2,400 square kilometers in east China, Taihu Lake is a major source of drinking water for people living in Shanghai and Jiangsu and Zhejiang provinces. Historically a rich and fertile area, the lake region has become one of the most populous and prosperous regions in the country, with 33.5 million people living in the surrounding area. However, the lake has come under increasing environmental strain for years as untreated sewage from towns and villages, as well as the region's booming chemical and light manufacturing industries, have choked its water with pollutants.

The fine line between rapid economic growth and continuous ecological degeneration was crossed in May when a large bloom of blue-green algae was found to have swamped the lake. The combination of the low water level and the accumulation of waste and untreated sewage had triggered the algae bloom, turning the water putrid and cutting the water supply to more than two million residents. Workers collected thousands of tons of algae from the lake and residents raced for bottled water.

It was not the first time Ni had seen the lake water clogged up with waste in his hometown of Zhoutie, outside Yixing City, in Jiangsu Province, home to more than 100 chemical plants. "Actually, all the families in our village have been using water from the nearby well, instead of that from Taihu Lake, as drinking water since 1998, because the lake water had a weird smell of chemicals," said Ni.

The old man said Zhoutie Town saw its first chemical plant 15 years ago, and so many others followed in the space of one decade that the town soon won a reputation as the "hometown of chemical plants". The booming chemical industry has driven the economic growth of Zhoutie Town, but the industrial waste, along with urban sewage and chemical fertilizers from agriculture, has also brought environmental disaster to its residents. "Black water flows directly into the lake. Soon fish in the rivers nearby died and we had to fish in the large lake," said Ni. "There are still fish in the lake, but the quantity is shrinking because the lake is being polluted too."

Zhoutie Town is no exception in the Taihu Lake region. Around 20,000 chemical plants clustered in the Taihu valley have had a drastic effect on the water quality of the lake. Experts say that the lake's environmental problems include accelerated eutrophication, or aging, caused by nitrogen and phosphorus enrichment. These materials cause an overgrowth of algae and further deterioration, including oxygen depletion.
Investigations from the State Environmental Protection Administration (SEPA) show that the content of nitrogen in the lake in 2006 was three times the amount in 1996, while the content of phosphate pollutants had increased 1.5 times over the 1996-2006 period.

To mitigate the lake's environmental pressure, all towns around Taihu Lake have been ordered to establish sewage treatment plants and are forbidden from discharging untreated sewage into the lake and rivers in the Taihu valley. Existing plants must also install nitrogen and phosphorus removal facilities, and those that fail to meet the raised water emission standards risk suspension. They will be shut down permanently if they still fail to meet the standards by the end of next June. In addition, more than 1,000 small-sized chemical plants scattered around rivers and lakes have been closed since June in the cities of Wuxi, Suzhou and Changzhou in Jiangsu.

In Zhoutie alone, 93 chemical plants were closed in the past three months and more than 40 others are left, said Wu Xijun, Party chief of the town's government. "After the algae incident, voices to reform the chemical plants are coming from everywhere and we have felt more pressure than ever, so we know we have no other choice but to close the plants," said Wu.

"I hope the policies will be faithfully implemented," said Ni, "or what an irony it will be, if we have no water to drink though the lake is right before our eyes."

Plateau lake's uncertain future

While people living around Taihu Lake were haunted by algae, thousands of kilometers away in the northwestern province of Qinghai, farmers and herdsmen living around Qinghai Lake, the country's largest saltwater lake, were busy preparing to receive tourists from all over the world.

Perched more than 3,200 meters above sea level, the 4,300-square-km Qinghai Lake, located in the northeast of the Qinghai-Tibet Plateau, is not only a "Holy Lake" to Tibetans, but also home to 189 species of birds and a crucial barrier against the desert spreading from west to east. With a small population of more than 70,000, the Qinghai Lake valley has historically been a land for farming. "Generally speaking, the lake is healthy thanks to few industrial projects even till now," said Zhao Haoming, head of the provincial environment protection department.

Beautiful scenery has drawn more and more tourists to the lake in recent years. According to Dong Lizhi, deputy manager-general of the Qinghai Lake Tourism Development Co. Ltd, more than 890,000 people visited the lake in 2006; by July this year, the lake had received more than 500,000 tourists and the figure was expected to hit one million by the end of this year.

Degyi, a 19-year-old Tibetan girl who grew up in a nomadic family living by the lake, has been getting used to her new role as a tent restaurant waitress. Like many of their neighbors, Degyi's father began to set up two white tents four years ago on the grassland on the southern shore of Qinghai Lake to receive tourists. Visitors are provided with traditional Tibetan food like boiled mutton, milk tea and yogurt made of yak milk. They can also rent a horse to pose for pictures or for riding. Though the business only lasts from May to October, Degyi's family is able to earn more than 15,000 yuan every year, accounting for two thirds of the family's annual income.

However, with booming tourism comes pollution.
The waste produced by hotels and restaurants has been discharged into the lake without being properly treated, and garbage left behind by tourists, such as crisp packets and plastic drink bottles, is frequently found around the lake. In addition, the lake is threatened by global warming and encroaching desert. Statistics from the provincial environment protection administration show the lake shrank more than 380 sq km between 1959 and 2006 and the average water level dropped three meters to the present level of 18 meters. More than 111,800 hectares of land around the lake have been suffering from desertification brought about by overgrazing around the lake and global warming, according to the provincial forestry department.

To curb ecological degeneration of the lake, China has invested 470 million yuan in restoring vegetation around the lake and dealing with desertification. The local government has also banned fishing in the lake since 1982. The latest move taken by Qinghai is to ban the construction of permanent buildings around the lake. "Not only have the projects under construction been stopped; the hotels, restaurants, and shops near the lake shore will be torn down," said Jetik Majil, vice governor of the province.

According to Jetik, under a new tourism development plan for the lake, which is expected to be enacted next year, permanent buildings such as hotels, restaurants and tourist service centers will be relocated to an "accommodation zone" at least three kilometers away from the southern shore of the lake. "Grassland will be restored after the buildings are demolished. In the future, tourists can only tour around the lake by riding horses and bikes, taking shuttle buses powered by electricity or walking on a boardwalk," said the vice governor.

"For a province like Qinghai, which falls far behind many of our counterparts in terms of economic development, improving GDP growth is very important to us," said Jetik. "However, we can't afford to take the old road of developing first and cleaning up later."

New perception of development

Protecting the environment and sustainable development are now part of China's national strategy, which calls for a "scientific concept of development". The new policy, put forward by the Central Committee of the Communist Party of China in 2003, has been calling for coordinated development between urban and rural areas, among different regions, between economic and social development, between the development of man and nature, and between domestic development and opening up to the outside world.

"The new perception of development has been set out to halt the trend of local governments, in both economically developed coastal areas and underdeveloped inland areas, pursuing economic growth at the cost of ecological deterioration and many other negative social consequences," said Ma Jun, director of the Beijing Public and Environment Affairs Institute. The Taihu Lake algae incident again clearly demonstrates a conflict between China's development and environmental protection, and "the root cause of the problem is the evaluation system of Party and government officials based on GDP figures," he said.

A national investigation by the Ministry of Water Resources shows that more than 70 percent of China's waterways and 90 percent of its underground water are contaminated by pollution. Ma's comments were echoed by Wu Xijun, Party chief of the Zhoutie Town government.
"The booming chemical industry in our town is somehow related to the GDP evaluation system," said Wu, who came to his current post in 2005. "The chemical industry is helping to resolve the local employment issue and encourage economic growth and increase GDP, which will reflect leaders' achievements." The central leadership has also detected the dark side of the GDP evaluation system, and has been working on new systems of Green GDP or Happiness Index, which put public opinions into consideration. "In the past two years, the evaluation system for officials has taken on great changes. Economic growth is not the only major factor and residents' satisfaction with their living environment has become another major index," said Wu. Zhoutie Town has banned construction of chemical plants since 2005 and the existing factories have been ordered to meet water and gas emission standards. After the large scale reform of the chemical industry, the town's GDP ranking has fallen from the third in Yixing City to the sixth, according to Wu. "But I think it's worthwhile as our living environment has an opportunity to recover. A place with a better environment has more space for future development," said Wu. The town plans to import high-tech projects and develop tourism in the future. "The scientific concept of development will not be a mere political slogan or a catchphrase," Ma said. "After being in practice for four years, the Party, government and the people have realized it will be the only right way of China's future development." Government actions are already in the pipeline. The State Council, China's cabinet, has called for research on green taxes, looking at using a tax to bolster environmental protection. New research and trials on environmental tax and compensation policies are also underway. The authorities will audit the environmental records of listed companies, hold trials of compulsory environmental liability insurance, and strengthen oversight of export firms' environmental standards. And the new perception of development is also winning more and more support from the public. The Tibetan girl Degyi does not know exactly what the new concept means, even though she claimed to have heard about it on television. However, she and her family show understanding and support to government policy to end their tent restaurant business near the Qinghai Lake. "When I was young, there are many farms around the lake and people launched a campaign of what they called 'opening up the wasteland', which resulted the dropping of the water level and the expansion," said Huage, 43, Degyi's father. "I don't want to see that keep happening." Huage has applied to open new tent restaurants in the "accommodation zone". "It is the most beautiful scenery to see tourists riding horses on the green grasslands by the lake and I hope this is able to last forever," said Degyi. (Xinhua News Agency October 10, 2007)
International Registry of Sunken Ships

For those who are not afraid of a little investigative work:

I have read two different accounts of Rommel's treasure. First I should explain that Rommel's Afrika Korps, like most other armies, paid their way as they went, and as such carried various valuables with them. These valuables, I presume, consisted of cash, gold or even gems. The first account tells of Rommel being bottled up in North Africa and sending his valuables by submarine to Corsica. It would appear the submarine reached the island and the valuables were unloaded and hidden. The submarine left the island only to be caught on the surface by an American bomber, which attacked and sank the vessel. All knowledge of the hiding place seemed to now lie on the bottom of the Med'. The story continues that in later years, a scuba diver who was diving in a specific area of Corsica was found dead. His own spear gun had been used on him. The inference of course is that he got too close to the treasure, either by accident or prior knowledge, and/or that someone is still looking after it. The second account states that Rommel's treasure was taken to Corsica and then on to the Gulf of Bastia by a German vessel and dropped into 30 fathoms. In 1960 an ex-Nazi came forward claiming involvement or knowledge and was taken to the area. When at the scene he developed a loss of memory. The man later disappeared.

Sir Francis Drake died at sea January 28th 1596. Some records show 1595, but this is based on the Julian Calendar. His last journey was on the flagship Defiance and the aim was to attack Panama. Whilst on the island of Escudo de Veraguas he contracted dysentery. This overtook him and he died. Drake's body was placed in a lead-lined coffin and slid into the sea a league or two from Nombre de Dios. A copy of the captain's diagram showing "markers" for the final resting place exists. This would indeed be a historic find.

Konigsberg is believed to be the centre where looted treasures acquired by the Nazis are buried in tunnels underground. Now named Kaliningrad and in Russia, new streets, concrete and buildings have covered any trace of the old town's original design. The Germans were forced to leave and the Russians became the inhabitants, further adding to the lost knowledge of the tunnels. Some believe that some of the treasures were sunk during the German evacuation from that area.

In April 1942 the US minelayer Harrison dumped boxes of Filipino coin, to the value of 5 million, into Manila Bay at a depth of 18 fathoms to avoid them falling into Japanese hands. The Japanese did in fact recover 2.25 million. Later, an American recovered the bulk of what was left, but there still remains about 1 million to salvage.

As a result of a Doctor Watkinson's will, earlier this century a box of jewels was dropped into the sea about 2 miles north of Hartlepool, Yorkshire, UK. There is no clue as to the actual site or the contents, but it was believed the doctor was a collector of gems.

Cocos Island lies at N:05.32.57, W:88.02.10 and is owned by Costa Rica. Stories say that the Great Treasure of Lima, eleven boatloads in all, was buried there by pirate Thompson of the brigantine Mary Dear in 1821-22 in a cave with a natural door. This natural door may be part of a cliff that revolves or a door that can be wedged with rocks. Benito Bonito, the Portuguese pirate, also used this island to bury his millions in 1820.
Pirate William Dampier is also said to have excavated several caves in the sandstone in 1822 and hidden treasure valued at over 60 million. The captain of the wrecked vessel Lark is supposed to have taken 72,000 to the island and buried it. Efforts to trace these treasures include the crew of the schooner Fanny in 1871, who found nothing. A Captain Welch also tried in 1871 with the same result. The schooner Vanderbilt tried in 1879 with no luck. In all, over 450 expeditions have set out to locate the treasures; all have failed.

RESCUE (Schooner) - An attempt to break Captain Flavel's pilotage monopoly was made in 1878 by Bar Pilots Eric Johnson, Thomas Doig, M. D. Staples and Thomas Masters, who operated the schooner Rescue on the bar. Capt. George W. Wood was taken in afterward, but the competition was short-lived. The Rescue, a fast sailer of seventy-two tons burden, was built by Matthew Turner at a cost of $8,000. When she was taken off the bar, Masters, who was at that time pilot on the Great Republic, found a buyer, and, giving his place on the steamship to Doig, sailed south with her to Cocos Island in search of the treasure supposed to be buried there. Finding nothing, she departed for Costa Rica, where she was sold to the Government. (E. W. Wright, "Organization of Pacific Coast S. S. Co., Fierce Competition on Ocean Routes," Lewis & Dryden's Marine History of the Pacific Northwest. New York: Antiquarian Press, Ltd., 1961, p. 263.)

SILVER WAVE (Schooner) - The periodic urge to seek the fabled treasure of Cocos Island resulted in the charter of the Arctic Transport Company's power schooner Silver Wave to Cocos Island Treasure, Ltd., a group of Vancouver, BC hopefuls who were as successful in their endeavor as the countless previous expeditions. (Gordon Newell, "Maritime Events of 1932," H. W. McCurdy Marine History of the Pacific Northwest, p. 418.)

The SAN PEDRO: Gorda Cay, southwest of Abaco Island, is rich in artifacts, and it is thought a Spanish vessel, the San Pedro, sank here in 1660 with a great fortune on board. The depth in the area is about 20 feet, but then it descends very steeply into the Northwest Providence Channel.

In 1536 there were 800 monasteries in England and Wales. Four years later there were none. All had been appropriated by Henry VIII. One of these sites was near Lyme Bay, Devon. Henry did not get all the treasures and it is possible that ships were used to transport the treasures to a place of safety. Who knows what some of the wrecks between 1536 and 1540 may contain.

Local tales tell of an English marauder chasing a Spanish galleon; the galleon was wrecked on the Oregon coast at Yaquina Bay. Gold goblets have been recovered from the seabed at this location.

Another treasure story from Oregon tells of a Spanish ship at anchor off the Nehalem shore. A party of local Indians observed a small boat with a large black box dispatched for the shore. The party from this boat, which included a black man, took the box and buried it a little way up the beach near the southwest side of the slopes of Neah-Kah-Nie. They then killed the black man and placed him on the box before covering it up. They then left as they had arrived. Native superstition would not allow the box to be dug up. There are many stories of attempts to locate this chest and it is said the slope is full of holes from this effort. There is a rock at the base of the mountain with what resembles a cross cut into it, possibly the letters "IHS", a Catholic symbol, and an arrow pointing in a direction.
One story tells of a Thomas McKay of the Hudson's Bay Co. who seemed in later years to have an abundance of wealth. Did he find the box, or was he just a good trader?

John Delaney, an Englishman, had captained many vessels out of Halifax, Nova Scotia. The story goes that on one such voyage one of his crew became ill and, before dying, confided in Delaney that he was once a pirate. A story of buried treasure on a barren West Indian island followed. Many years later Delaney told the story to the pilots of the pilot vessel Rechab, and as a result the pilots slipped out of St John in the fall of 1850 and "fetched up" off Sandy Cay, Turks Island. Divining rods and shovels failed to produce anything; they gave up and headed for home.

Anacapa Island in the Santa Barbara group in California is hard to reach in rough weather. Local stories tell of treasure and smuggling.

In the time of whalers and clippers, Nanaimo was a busy port in British Columbia. The story goes that a Chinese girl, Sue Ann Lee, wanted to leave town and head for San Francisco. She packed her jewels and money into a small chest and hired two sailors to take her south. These sailors murdered Sue Ann and stole her chest. A police hunt was on and the sailors were seen trying to leave port in a small boat, but before they could be apprehended they were seen to cast a small chest overboard. The chest has not been recovered.

History notes that the treasure of pirate Capt. Avery is buried in the high cliff near Beagles Point, south of Black Head, Cornwall, near the Lizard. Several attempts have been made to locate it without success.

QUEEN CHARLOTTE ISLANDS, BC

In June 1859 a shipwreck occurred on the west coast of the Queen Charlotte Islands, BC. The three crewmen made it to the shore and began to search for food. During this search they found a small barren island about 50 ft offshore. Noticing a yellow glint, one of them swam out to the island and returned with the news that the island was covered in gold. They built a small cabin and stored the retrieved gold under the floor. They estimated they had stored about a ton. They were then attacked by hostile Indians and one of the sailors was killed. After hiding out for a while the remaining two managed to get safe passage to Fort Victoria. After telling their tale an expedition was set up to return. Upon returning they found the body of their friend, but no cabin, no gold and no small island. Did the Indians burn the cabin, was the gold found... who knows. Note: In 1851 many vessels had set out for the Queen Charlottes with miners on board; gold had allegedly been found there and the panic was on.

ISLAND OF DOMINICA

In 1567 six vessels of a Spanish fleet were wrecked on the northwest tip of the island of Dominica. It is said that they carried over 3 million in pesos, and that the survivors were captured and eaten by the Carib Indians. During a salvage attempt a year later it was learned that the Indians had salvaged the treasure and hidden it in caves. They would not break and say where, even under threat of death. There are no records of the treasure ever being found.

Poverty Island, Lake Michigan: Legend says that a 60 ft schooner carrying 5 chests of gold bullion, supplied by France for use by the Americans in their fight against the English, was wrecked off the island, which is east of Wisconsin. The chests are valued at 400 million. There have been many attempts to locate them.

LADY IN RED

A steamer put in to a sawmill dock at Puget Sound. The language spoken by those on the steamer was not understood.
A woman in a red cloak was seen on the bridge. The captain had come ashore and seemed under great strain; he returned to the ship, hauled in the lines and moved off. When far out into Admiralty Inlet the steamer erupted into flames. When the mill tug reached the spot only a scorched piece of wreckage was found.

The treasure of pirate Lafitte may have been buried near Jefferson Island, Louisiana, yet other rumors have it hidden at Galveston Island.

During the fall of Singapore the British tossed 1 million in gold currency into the sea to avoid capture by the Japanese. Japanese salvors recovered tens of thousands in a salvage operation.

There were rumors of a great hoard of platinum, gold and silver hidden in Tokyo Bay toward the end of WW2. It was said to have been hidden in the sea near a wharf used by USN vessels. Some platinum and silver was recovered, to the value of 35,000, not the 2 billion that was rumored.

In October 1215 a caravan of King John of England attempted to cross the sands of The Wash. The caravan carried a large amount of the King's treasure and was trapped by an incoming tide and a descending current from the River Nene. All was lost. The journey in question lies between King's Lynn and Long Sutton. Since that time the area has changed: The Wash has been pushed back and rushes grow in areas where sands were covered by high tide. Man-made drainage canals have also gained ground. The treasure is probably now lying 30 ft deep. King John had taken a longer route through Wisbech and over higher ground and was able to witness the loss. He then rode to the Abbey at Swineshead. Quicksand is in the area.

NEW YORK STRONGBOX

In 1775, a Robert Gordon left his home at Whitehall, NY, for Canada to avoid the war. His house was on the west side of Wood Creek on the Red Barn Lot. His money and plate were placed in a strongbox. During his journey, somewhere on the west shore of the Haven in the marshes, he dropped it and took a note of the location. He did not return from Canada. In 1934 a state dredger engaged in swamp clearance brought up a metal box in its jaws; it balanced a while on the debris and then fell back into the water. Attempts to relocate the box have failed.

Henry Morgan died in 1688 in Jamaica. Rumors of his buried treasure abound, and the following are just some of them. That it is buried in a cave at Old Providence Island: the cave mouth is now 75 ft below the water level and there is a 150 ft swim to the main chamber. Sharks and barracuda are in the area and there is no entrance from the land. In 1671 Morgan laid siege to Panama City and eventually scaled the wall to find little treasure; it is assumed much of it was hidden within the city. A 1927 find of jewels in San Jose church tunnels has been linked to the siege by Morgan. It is also said that after looting Panama City, Morgan took the loot and hid it in a bayou near Darien Bay, on Panama's west coast. The eight people who buried the treasure were murdered. Rumor also holds that Morgan buried treasure on an island near Tenerife, Canary Islands.

Rumor has it that a Japanese submarine sank in WW2 off the west side of Pinang Island. Its cargo is alleged to be gold bullion and precious ornaments. A combined Royal Navy and Royal Malay Navy search failed to find the submarine in 1957.

During the US Civil War, blockade runners were pursued by Federal vessels. During these chases there were times when chests full of gold bullion were thrown overboard by the runners.
Rumor had it that several of these chests lie between the US mainland and the islands of Bermuda and the Bahamas.

In 1641 the Duke Doria was being pursued by Marshal De Brese. Whilst aboard a vessel off Perpignan, France, he jettisoned gold valued at (circa) 1 million into the sea.

MISSING FABERGE EGGS

Peter Carl Faberge made a total of 57 eggs. All have been accounted for except 4: the Danish Silver Jubilee Egg, the Rosebud Egg, the Swan Egg and the Egg with Love Trophies. The name of the egg represents the design or motif which appears when the shell is opened. I was recently asked (2001) where I got the above information from and how good it was. My reply was that I read it and took notes from an author/reporter's published investigative work, and the only reason for doing so was that the missing eggs may have been transported or lost on a ship, probably in the Baltic. He advised he knew the Faberge family and would check with them. Here is the reply from the family: "That there was only proof that 50 eggs had been completed. Eight eggs are missing but two of them are probably in private collections. These eight eggs are: 1. 1886 Hen and Sapphire egg. 2. 1888 Cherub with Chariot (probably in a private collection in the US). 3. 1889 Necessaire egg (probably in a private collection in the UK). 4. 1896 Alexander III egg. 5. 1897 Outer shell of Mauve enamel egg. 6. 1902 Alexander III Nephrite egg. 7. 1903 Danish Jubilee egg. 8. 1909 Alexander III Commemorative egg. The Swan egg and Rosebud egg exist in museums. The Love Trophies egg has never been heard of by the family." (Thanks Peter L, and the Faberge family, for passing on the information.)

BRITAIN'S ART TREASURES DURING WW2

Between August 23 and September 2, 1939, Britain's art treasures and other historical artifacts were removed from the National Gallery and from Hampton Court and transported to Wales for safekeeping. They were eventually housed, 1,750 feet above sea level, in the tunnels of the slate quarry at Manod, near Ffestiniog in North Wales. The atmosphere was maintained at a steady 65 degrees F with 40 percent humidity. All were returned safely to London in 1945, but the best-kept secret of all was the destination of the Crown Jewels. To this day, the hiding place has never been revealed. (Source: George Duncan webpage)

For his 50th birthday several leading industrialists presented Hitler with a case containing the original scores of some of Richard Wagner's music. They had paid nearly a million marks for the collection. Towards the end of the war, Frau Winifred Wagner asked Hitler to transfer these manuscripts to Bayreuth. Hitler refused, saying he had placed them in a far safer place. The manuscripts involved included the scores of 'Die Feen', 'Das Liebesverbot', 'Rienzi', 'Das Rheingold', 'Die Walküre', and the orchestral sketch of 'Der Fliegende Holländer'. These lost documents have never been found. (Source: George Duncan webpage)

I had a message from a gentleman in the States saying that during WW2 his father shot down a German aircraft. When they went to check for survivors they found many suitcases and boxes of music. One of the suitcases was opened up and he found a solid silver saxophone. He mailed it home to the US as a keepsake. In later years it was sold in a garage sale for $50.00. The instrument had a Nazi insignia on it and German writing which, when later translated, said it belonged to a member of Hitler's private orchestra. The plane, when it crashed, was on fire and it is believed the sheet music was lost.
The saxophone is still in the States, and its existence is known to the German archives, I believe; whether the lost sheet music was Wagner's or merely the orchestra's remains unclear.
General Giulio Douhet (30 May 1869 – 15 February 1930) was an Italian general and air power theorist, and a key proponent of strategic bombing in aerial warfare. He was a contemporary of the 1920s air warfare advocates Walther Wever, Billy Mitchell and Sir Hugh Trenchard.

Born in Caserta, Campania, Italy, he attended the Modena Military Academy and was commissioned into the artillery of the Italian Army in 1882. Later he attended the Polytechnic Institute in Turin, where he studied science and engineering.

Assigned to the General Staff shortly after the beginning of the new century, Douhet published lectures on military mechanization. With the arrival of dirigibles and then fixed-wing aircraft in Italy, he quickly recognized the military potential of the new technology. Douhet saw the pitfalls of allowing air power to be fettered by ground commanders and began to advocate the creation of a separate air arm commanded by airmen. He teamed up with the young aircraft engineer Gianni Caproni to extol the virtues of air power in the years ahead.

In 1911, Italy went to war against the Ottoman Empire for control of Libya. During that war, aircraft operated for the first time in reconnaissance, transport, artillery spotting and even limited bombing roles. Douhet wrote a report on the aviation lessons learned, in which he suggested that high-altitude bombing should be the primary role of aircraft. In 1912 Douhet assumed command of the Italian aviation battalion at Turin, where he wrote a set of Rules for the Use of Airplanes in War, one of the first doctrine manuals of its kind. However, Douhet's preaching on air power marked him as a 'radical'. After an incident in which he ordered construction of Caproni bombers without authorization, he was exiled to the infantry.

When World War I began, Douhet called for Italy to launch a massive military buildup, particularly in aircraft. "To gain command of the air," he said, was to render an enemy "harmless." When Italy entered the war in 1915, Douhet was shocked by the army's incompetence and unpreparedness. He proposed a force of 500 bombers that could drop 125 tons of bombs daily to break the bloody stalemate with Austria, but was ignored. He corresponded with his superiors and government officials, criticizing the conduct of the war and advocating an air power solution. For criticizing Italian military leaders in a memorandum to the cabinet, Douhet was court-martialed and imprisoned for one year.

Douhet continued to write about air power from his cell, finishing a novel on air power and proposing a massive Allied fleet of aircraft in communications to ministers. He was released and returned to duty shortly after the disastrous Battle of Caporetto in 1917. Douhet was recalled to service in 1918 to serve as head of the Italian Central Aeronautic Bureau. He was exonerated in 1920 and promoted to general officer in 1921. The same year he completed a hugely influential treatise on strategic bombing titled The Command of the Air, and he retired from military service soon after. Except for a few months as the head of aviation in Mussolini's government in 1922, Douhet spent much of the rest of his life theorizing about the impact of military air power. He died in 1930.
In his book Douhet argued that air power was revolutionary because it operated in the third dimension. Aircraft could fly over surface forces, relegating them to secondary importance. The vastness of the sky made defense almost impossible, so the essence of air power was the offensive. The only defense was a good offense. The air force that could achieve command of the air by bombing the enemy air arm into extinction would doom its enemy to perpetual bombardment. Command of the air meant victory.

Douhet believed in the morale effects of bombing. Air power could break a people's will by destroying a country's "vital centers". Armies became superfluous because aircraft could overfly them and attack these centers of the government, military and industry with impunity, a principle later called "The bomber will always get through". Targeting was central to this strategy, and he believed that air commanders would prove themselves by their choice of targets. These would vary from situation to situation, but Douhet identified the five basic target types as industry, transport infrastructure, communications, government and "the will of the people". The last category was particularly important to Douhet, who believed in the principle of total war.

The chief strategy laid out in his writings, the Douhet model, is pivotal in debates regarding the use of air power and bombing campaigns. The Douhet model rests on the belief that in a conflict, the infliction of high costs from aerial bombing can shatter civilian morale. This would unravel the social basis of resistance and pressure citizens into asking their governments to surrender. The logic of the model is that exposing large portions of civilian populations to the terror of destruction, or to shortages of consumer goods, would batter civilian morale into submission. By smothering the enemy's civilian centers with bombs, Douhet argued, the war would become so terrible that the common people would rise against their government, overthrow it with revolution, and then sue for peace.

This emphasis on the strategic offensive blinded Douhet to the possibilities of air defense or tactical support of armies. In the second edition of The Command of the Air he maintained that such aviation was "useless, superfluous and harmful". He proposed an independent air force composed primarily of long-range, load-carrying bombers. He believed interception of these bombers was unlikely, but allowed for a force of escort aircraft to ward off interceptors. Attacks would not require great accuracy. On a tactical level he advocated using three types of bombs in quick succession: explosives to destroy the target, incendiaries to ignite the damaged structures, and poison gas to keep firefighters and rescue crews away. The entire population was in the front line of an air war, and it could be terrorized with urban bombing.

In his book The War of 19-- he described a fictional war between Germany and a Franco-Belgian alliance in which the Germans launched massive terror bombing raids on the populace, reducing their cities to ashes before their armies could mobilize. Because bombing would be so terrible, Douhet believed that wars would be short. As soon as one side lost command of the air it would capitulate rather than face the terrors of air attack. In other words, the enemy air force was the primary target, and a decisive victory there would hasten the end of the war. However, subsequent conflicts would largely discredit Douhet's theory.
Air Marshal Arthur "Bomber" Harris set out in 1942 to prove Douhet's theories valid during World War II. For four years under his command, RAF Bomber Command attempted to destroy the main German cities. By 1944–1945, in partial concert with the USAAF, it had largely achieved this aim; but no revolution toppled the Third Reich. The heavy bombers involved in the Combined Bomber Offensive did not win the war alone, as Harris had argued they would. Douhet's theories about bombing a population into starting a revolution, when subjected to practical application, were shown to be ineffective. In fact, there is considerable evidence that the bombings did little but antagonize the German people, galvanizing them to work harder for their country, and the final defeat of Germany was not achieved until virtually the entire country had been occupied by Allied land forces.

Though the initial response to The Command of the Air was muted, the second edition generated virulent attacks from his military peers, particularly those in the navy and army. Douhet's was an apocalyptic vision that gripped the popular imagination. But his theories would remain unproven, and therefore unchallenged, for another 20 years. In many cases he had hugely exaggerated the effects of bombing. His calculations for the amount of bombs and poison gas required to destroy a city were ludicrously optimistic. World War II would prove many of his predictions wrong, particularly on the vulnerability of public morale to bombing. In Rivista Aeronautica in July 1928 he wrote that he believed 300 tons of bombs dropped on the most important cities would end a war in less than a month. This can be compared with the fact that the Allies during World War II dropped in excess of 2.5 million tons of bombs on Europe without this being directly decisive for the war.

Outside of Italy, Douhet's reception was mixed. In Britain, The Command of the Air was not required reading at the RAF Staff College. France, Germany and America were far more receptive, and his theories were discussed and disseminated; in America, in particular, by Billy Mitchell. A supporter of Benito Mussolini, Douhet was appointed commissioner of aviation when the Fascists assumed power, but he soon gave up this bureaucrat's job to continue writing, which he did up to his death from a heart attack in 1930. More than 70 years on, many of his predictions have failed to come true, but some of his concepts (gaining command of the air, terror bombing and attacking vital centers) continue to underpin air power theory to this day.

- Giulio Douhet, The Command of the Air, 1942 translation.
- Thomas Hippler, Bombing the People: Giulio Douhet and the Foundations of Air-Power Strategy, 1884–1939 (Cambridge University Press, 2013), 294 pp.
- Louis A. Sigaud, Air Power and Unification: Douhet's Principles of Warfare and Their Application to the United States (The Military Service Publishing Co., 1949).
- Giulio Douhet, The Command of the Air (Editors' Introduction), Coward-McCann (1942); Office of Air Force History 1983 reprint; 1993 new imprint by Air Force History and Museums Program, ISBN 0-912799-10-2, pp. vii–viii.
- Col. Phillip S. Meilinger, The Paths of Heaven: The Evolution of Air Power Theory (Alabama, 1997), p. 1.
- Alf W. Johansson, Europas krig (in Swedish), Stockholm: Tidens Förlag, p. 281, ISBN 91-550-3818-2.
Volume 14 Number 10
Guy R. Schenker, D.C.

What causes heart attacks and strokes?

- High triglycerides
- Low HDL cholesterol
- Dietary deficiency of saturated fat and cholesterol
- Excess dietary polyunsaturated fats
- Excess dietary carbohydrates (particularly fructose sugar)
- Thyroid insufficiency
- Excess estrogen
- Testosterone insufficiency
- Excess catecholamines
- Excess cortisol
- Excess insulin (or dysinsulinism)
- Oxidative stress to the heart
- Oxidative stress in the arteries
- Oxidation of LDL cholesterol (with release of metalloproteinase enzymes)
- Chronic inflammation of the arteries
- Excess proliferation of cells lining the arteries
- Platelet aggregation and RBC rouleaux formation
- Excess prostaglandins (particularly thromboxane)
- Excess vasoconstriction
- Magnesium deficiency
- Excess calcium (pushing out magnesium) in the heart, blood vessels, and vasomotor nerves
- Trace mineral deficiencies

Quite an exhaustive list, isn't it? (Note that elevated serum cholesterol is not on the list, and neither is excess dietary intake of cholesterol --- which is the point we made in your two most recent NUTRI-SPEC Letters.)

How do you possibly sort through all these causative factors to construct an effective clinical protocol to serve your patients at risk for cardiovascular disease (CVD)? It is quite simple --- do NUTRI-SPEC testing on your patients, and back up your NUTRI-SPEC findings with selective use of blood work.

Here is a list of clinical indicators of CVD risk. In other words, this is a list of factors indicating the likelihood that one or more of the above listed causes of CVD are at work in a particular patient, and why those causative factors are active in that particular patient.

- Electrolyte Stress Imbalance
- Anaerobic Imbalance
- Dysaerobic Imbalance
- Sympathetic Imbalance
- Ketogenic Imbalance
- Cardiac arrhythmia
- Elevated triglycerides (particularly an elevated triglyceride to HDL cholesterol ratio)
- Elevated homocysteine
- Elevated C-reactive protein
- Thyroid functional evaluation

Do you appreciate the significance of the two lists you have just read? The knowledge you have in the first list puts you far above the vast majority of clinicians in your understanding of CVD causes. As unbelievable as it may seem, we have a condition that kills more than 50% of all people, yet most doctors are almost entirely ignorant of its causes. How unbelievably absurd is that? You (and your patients) can be quite pleased that you have risen above the standards of mediocrity that characterize the healing arts professions.

The real beauty is in the second list. Here, you have 10 clinical indicators that inform you completely about the 22 causative factors of CVD. Do you see how valuable you are to your patients? You have the ability to define and monitor the 22 causes of cardiovascular disease with 10 clinical indicators.

Now that you have developed a complete picture of the complexity of CVD, you should also begin to appreciate that you, as a NUTRI-SPEC practitioner, are uniquely positioned to actually do something about it. We have said repeatedly for more than 20 years now ... You will, with NUTRI-SPEC, save the lives of patients at severe risk for heart attacks and strokes.

Imagine a patient coming to you who has already had one heart attack, has triglycerides over 1000, blood pressure in the stratosphere, and a pulse that bounces up to over 100 at the slightest provocation.
Imagine further that within a year you've got the patient's triglycerides down below 200, the blood pressure is high normal, and the pulse is steady and strong. Furthermore, the patient has been able to eliminate four of the six medications prescribed by the cardiologist, and is feeling better than he has in years. How many cases like that does it take to build a booming nutrition practice? These people will flood your office with referrals.

Just as gratifying as saving lives is enriching lives with NUTRI-SPEC. Do you see the magnificent prophetic capacity of your NUTRI-SPEC testing system? Your NUTRI-SPEC system allows you to identify the early stages of CVD 20 years or more before the typical physician will identify a pathology, and as much as 25-30 years before the heart attack or stroke. You will find patient after patient who has taken several giant steps down the road leading to death from CVD; patients you will rescue and redirect down the road to happy-ever-after.

With what you have learned about the true nature of the pathology underlying CVD, you can clearly understand that the most direct and effective way to minimize CVD risk is to accompany your NUTRI-SPEC Fundamental Diet with NUTRI-SPEC supplementation --- particularly Taurine, and the powerful antioxidants in Diphasic A.M. and Diphasic P.M.

We finished last month's Letter by singing the praises of Taurine. A tremendous amount of research over nearly 20 years has demonstrated its protective effect against heart attacks and strokes. Its benefits are largely the result of its effect on calcium and magnesium metabolism. Taurine helps keep calcium out of the myocardium and the smooth musculature of the arterial intima, and allows magnesium to fully exercise its biological role. But beyond protecting against excess calcium and enhancing the effects of magnesium, taurine also facilitates the elimination of excess cholesterol, promotes vasodilation, and, best of all, actually decreases the size of atherosclerotic lesions.

Having celebrated the benefits of Taurine, we will now consider your other big guns against cardiovascular disease. You know that Oxygenic A-plus and Oxygenic D-plus, along with Diphasic A.M. and Diphasic P.M., are the keys to preventing and reversing pathological hyperplasia and pathological disintegration. Pathological hyperplasia includes the anabolic, atherosclerotic phase of CVD; pathological disintegration, as it relates to the heart and blood vessels, includes the catabolic oxidative damage to the heart and vascular walls.

Your Diphasic A.M. contains betaine to reverse the aberrant metabolic process that results in the buildup of homocysteine; glucosamine and chondroitin sulfate help build strong arterial walls; the chondroitin sulfate also protects the heart and blood vessels against degenerative changes, as do the carnosine, the carnitine, the Co Q-10, and the whole family of tocotrienols, tocopherols, and lipoic acid.

As decreasing elevated triglycerides is one of your most important clinical goals, you must begin to appreciate lipoic acid. Nothing compares with lipoic acid as a means to lower triglycerides, and it does so by several mechanisms. When you combine the lipoic acid in your Diphasic A.M. and Diphasic P.M. with your NUTRI-SPEC Fundamental Diet (avoidance of excess carbohydrate in general, and fructose in particular), you will offer your patients by far the most effective means to lower deadly triglycerides.
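The triglyceride-to-HDL-cholesterol ratio singled out among the clinical indicators above is trivial to compute and track over time. Here is a minimal sketch; the 3.0 cutoff for values in mg/dL is a commonly cited rule of thumb, not a figure taken from this Letter, and the HDL value in the example is hypothetical.

```python
def tg_hdl_ratio(triglycerides_mg_dl, hdl_mg_dl):
    """Return the triglyceride-to-HDL-cholesterol ratio (both in mg/dL)."""
    if hdl_mg_dl <= 0:
        raise ValueError("HDL must be a positive number")
    return triglycerides_mg_dl / hdl_mg_dl

# The patient described above: triglycerides over 1000 mg/dL,
# with an assumed (hypothetical) HDL of 35 mg/dL.
ratio = tg_hdl_ratio(1000, 35)
print(f"TG/HDL ratio: {ratio:.1f}")  # ~28.6 -- far above the ~3.0 rule of thumb
```

Re-running the same calculation on follow-up blood work gives a crude, single-number view of whether the program is moving a patient in the right direction.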
There have been many, many instances of NUTRI-SPEC practitioners lowering patients' triglycerides by more than 1000 in a period of less than 6 months. You can do so as well. Doing so is as simple as either beginning to do NUTRI-SPEC testing on all your patients, or implementing the Diphasic Nutrition Plan for your patients (and, by the way, giving up all your favorite herbal remedies, "adrenal support" supplements, and mega doses of this and that).

There seems to be no end to the flood of research highlighting the protective effects of Co Q-10. This nutrient is turning out to be one of the most valuable clinical tools you have for patients with a diversity of health problems, but particularly for those at risk for CVD.

A study published in Clinical Investigations, 1993; 71(8 Supplement): S140-4, entitled "Isolated Diastolic Dysfunction of the Myocardium, and its Response to Co Q-10 Treatment," studied patients in the early stages of congestive heart failure and found that Coenzyme Q-10 produced a decrease in high blood pressure in 80% of hypertensives; an improvement in diastolic function in all patients, based on echocardiograms; and a reduction in myocardial thickness in 53% of hypertensives and in 36% of those with combined mitral valve prolapse and fatigue syndrome.

A study published in Clinical Investigations, 1993; 71(8 Supplement): S116-23, entitled "Perspectives on Therapy of Cardiovascular Diseases with Co Q-10," showed that Co Q-10 myocardial tissue levels were significantly lower in patients with more advanced heart failure compared with those in the milder stages of heart failure. Administering Co Q-10 to these patients produced significant improvement in their capacity for physical activity and overall quality of life. The benefits were found to be far greater than those from treatment with traditional methods such as angiotensin converting enzyme inhibitors.

A study published in The International Journal of Tissue Reactions, 1990; 12(3): 163-8, entitled "Pronounced Increase of Survival of Patients with Cardiomyopathy when Treated with Co Q-10," showed that patients with all classes of cardiomyopathy accompanied by low ejection fractions experienced dramatic improvement in ejection fractions and a pronounced increase in survival, which was attributed to Co Q-10's bioenergetic activity in regard to myocardial function.

Yes, your NUTRI-SPEC supplements plus your NUTRI-SPEC Fundamental Diet are beyond compare as a means to treat and prevent CVD.

Finally, another piece of evidence illustrating one of the primary causes of elevated serum cholesterol, particularly LDL (the bad cholesterol), comes from research published in Prostaglandins, Leukotrienes, and Essential Fatty Acids, 2000; 63(4): 177-86. This research shows how eating too many sugars and carbohydrates accelerates the aging process because it results in the production of advanced glycosylation end products (AGEs). These AGEs easily undergo the pathological oxidation that results in tissue damage and thus premature aging. But note that this research additionally shows that these AGEs are associated not just with accelerated aging in general, but in particular with the oxidation of LDL cholesterol in the vascular system and the elevation of LDL levels in the serum. What happens is that the glucose from a diet high in carbohydrates and relatively low in fat and protein attaches to peptides (protein molecules), forming AGEs that end up circulating in the bloodstream and ultimately attaching themselves directly to LDL molecules.
The body can no longer recognize this new LDL, since it has extra molecules clinging to it, so the excess LDL is not removed by the liver and thus continues to circulate --- resulting in elevated serum LDL. So the high-carb, low-protein-and-fat diet not only elevates LDL into the range that alarms most physicians but, since this LDL is glycated, also makes it more sensitive to oxidation damage than normal LDL (which is no threat whatsoever), thus contributing to atherogenesis, heart attacks, and strokes.

What causes heart attacks and strokes? May I be so bold as to suggest that ... about answering that question.

Is there a way to discover and predict the risk of CVD years before the pathological process is evident to other clinicians? Yes, you have ten easily monitored prophetic indicators.

Can CVD be prevented? Yes, and no one can match your ability to do so.

Can CVD be reversed? Absolutely --- to some degree in all patients, and to a dramatic degree in many CVD patients. All it takes is your knowledge, and your NUTRI-SPEC products and procedures.

Guy R. Schenker, D.C.
Back to Basics

Taking pictures with a pinhole camera is one of the simplest forms of the photographic process. But wonderfully, making and using a pinhole camera provides the student with an understanding and appreciation for not only photography, but human physiology, chemistry, light physics, mathematics, art and, possibly, a little magic. All cameras, from the most sophisticated to the pinhole, rely on the same elementary principles, performing in similar fashion to the human eye.

Like your eyes, the camera needs light to operate. Light moves into the eye through the pupil, a hole that is made smaller or larger by the iris. Light gets into the camera through a hole called an aperture that is made larger or smaller by a diaphragm. The camera can also shut out all light with a shutter, similar to closing your eyelids, or opening them to let the light pass.

Recall what happens when you enter a movie theater on a sunny afternoon. It takes some time for your eyes to adjust to the low light. At first you cannot see anything, but soon you begin to make out objects, and within a short time you can see pretty well, even in that darkened room. This is much like a camera making a long exposure in low light. The diaphragm opens as wide as it can to allow maximum illumination. For your eyes, the iris opens wide. The eye's retina, like the camera's film, is sensitive to changes in light and sends messages to the brain about the images you see. As you leave the theater and return to the sunlight, the opposite situation occurs. The eye's iris closes down as it's flooded with light. In bright light the camera's diaphragm closes down, or stops down.

Unlike the camera, the eye is constantly and automatically reacting to various light, focusing and refocusing. The camera has to be adjusted for each situation. The camera, however, can bring into focus objects both near and far and record them on film at the same time. Your eye cannot. While modern cameras and the human eye use sophisticated systems to focus images, including color correction and lenses to improve clarity and magnification, much simpler techniques can and do work. Consequently, the pinhole camera can produce surprising results using just a light-tight box to capture the image transmitted to film through a simple hole (aperture) made with a pin or, for a better level of performance, a sewing needle.

Constructing the camera

The camera body can be made from any container that is practical to handle and can be fixed so that it will not allow stray light (all but light entering through the aperture) to enter. Common pinhole camera bodies range from small rectangular jewelry boxes and the old metal band-aid tins to shoe boxes and one-pound coffee cans. The curved shapes of the coffee can or oatmeal carton will produce a more surrealistic or panoramic image. Keep in mind that a sturdy container usually has better light-blocking properties, is easier to work with and will last longer. The best overall size is in the neighborhood of six inches square. The shape of the box can present some real creative possibilities, and the box depth, from removable cover to back, relates to the angle of view of the transmitted image. In other words: a container that is very shallow will yield a wide-angle view; a container with depth will yield a telephoto image. For purposes of explanation, let's assume that a sturdy cardboard box about six inches square and four inches deep is used for the camera body.
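That four-inch box depth is, in effect, the camera's focal length (about 100 mm). The article doesn't give a formula for the best pinhole size, but a widely quoted rule of thumb attributed to Lord Rayleigh puts the sharpest diameter near d = 1.9·sqrt(f·λ). The sketch below applies it, assuming green light (550 nm); the numbers are illustrative, not part of the original page.

```python
import math

def optimal_pinhole_diameter_mm(focal_length_mm, wavelength_mm=550e-6):
    """Rayleigh's rule of thumb: d = 1.9 * sqrt(f * wavelength), all in mm."""
    return 1.9 * math.sqrt(focal_length_mm * wavelength_mm)

focal_length = 4 * 25.4              # the four-inch box depth, in millimeters
d = optimal_pinhole_diameter_mm(focal_length)
print(f"pinhole diameter: {d:.2f} mm")        # ~0.45 mm, close to a sewing needle
print(f"aperture: f/{focal_length / d:.0f}")  # ~f/226
```

An aperture around f/226 passes several hundred times less light than a typical camera lens, which is why the exposure times suggested later run to minutes rather than fractions of a second.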
The bottom of the box will be used to hold the film (more on that later), and the box top or lid will become the light-transport system (aperture, shutter). Some would choose to line the box with black paper or paint it black to cut down on the possibility of stray light, but if the lid is properly prepared, or if the lid is equally as large as the box itself, that won't be necessary.

To convert the lid to an aperture you will need to first cut out a hole in the center of the lid about one inch square. Cover the hole with as heavy a grade of aluminum foil as you can find and tape it down with black plastic electrician's tape. Then, using a small sewing needle, carefully punch a hole in the center of the foil, taking care not to move the needle from side to side. An easy but deliberate straight-in, straight-out motion will work nicely. The foil will provide a much more accurate aperture than one made by punching the needle directly through the lid. Also, it gives you the opportunity to repeat the process if need be with a minimum of trouble. It may also be easier to punch the hole in the foil prior to taping it on the camera. Greater accuracy can be obtained by using a cushion underneath, like a phone book or a piece of cardboard. A more sophisticated aperture can be made by using thin brass shims, which can be found at a good automotive supply or hardware store. The difference between making an aperture in cardboard, aluminum foil or brass shims can be noticed in the clarity and sharpness of the final photograph.

Modern cameras use computer-designed lenses, ranging from hand-ground glass to machine-ground plastic. But the aperture itself can function as a lens. The pinhole (in this case, needle hole) transmits rays of light so that they strike the film in tight clusters. The result is an acceptably clear photo. Results are improved with better materials, but also with smaller apertures. Size makes a difference because the smaller aperture transmits only a few rays from each point reflected from the scene. The finer the rays of light, the tighter the cluster hitting the film and the better the representation of the image viewed. In other words, pin-point accuracy. Larger apertures will transmit a much softer and less focused image. Experimenting with foil apertures of various sizes will provide a dramatic illustration.

Shadow catching

With camera in hand it is time to load the film and get to making photographs. While real photographic film can be used, it is easier and a more visible learning experience to use photo paper as film. Let's review the basic structure and differences with a little, very little, chemistry.

Photographic film is made in layers. The base is clear plastic coated with light-sensitive material held in place by an emulsion layer. The light-sensitive material is actually tiny particles of silver that, when exposed to light, chemically react and etch a reflection of the image viewed. Not all film reacts the same. Some, with larger silver particles, is more light-sensitive, and color film is different from black and white, either having more layers to capture various hues or using dyes. All film, and all photographic papers, have an emulsion layer. This is actually a gelatin that holds the silver in place. The emulsion is fastened with adhesive to the plastic film base and then coated with a scratch-resistant material. Film, unless packaged in a light-tight container, must be handled in total darkness both when loading the camera and when processing.
For this reason, it is easier to use photographic paper, which can be handled under low-level photo lights, in the pinhole camera. Photo paper is also a lot less light-sensitive and therefore less likely to react to stray light. Later, you can choose to experiment with real film as your experience increases.

Photo paper can be purchased in a wide variety of sizes, with the most common being 5 x 7 and 8 x 10. The paper is coated with a plastic resin (RC) and will process quickly and dry flat. The paper will probably need to be cut to fit the inside of your pinhole camera. The cutting and loading will have to be done under safelight, a dark red or yellow photo light. Once cut to size, it can be placed in the camera so that the emulsion (slick) side faces the pinhole (aperture). Before you turn on the light and leave the darkroom, you will need to place your finger gently over the pinhole to prevent stray light from exposing the film. Incidentally, your finger now becomes the pinhole camera's shutter.

Because photo paper reacts more slowly to light than film, and because the sewing needle creates a tiny aperture, exposure times for the photograph will be lengthy. This will call for a bit of experimentation, however. Not all cameras, apertures and lighting conditions will be the same.

Suggested exposure times:
- outside, bright sun: one minute
- outside, cloudy: five minutes
- inside, sunny window: eight minutes
- inside, sunny room: 14 minutes
- inside, dim light: 30 to 40 minutes

Keep in mind that the longer the exposure, the darker the resulting negative, which will make for a light, or washed-out, print. Think of baking: the longer something is in the oven, the darker it will become. Just a little experience with the pinhole camera will greatly improve your pictures. Remember that if a negative is overexposed (too dark) and you had exposed it for six minutes, a three-minute exposure will reduce the exposure by 50 percent. This is all too logical, but beginning photographers often reduce the exposure by only a few seconds, producing a negative almost identical to the problem they are trying to solve.
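That halve-it-don't-trim-it logic is easy to put into a few lines. A minimal sketch (the function and factor names are mine, not the article's):

```python
def corrected_exposure(previous_seconds, factor):
    """Scale a pinhole exposure by a multiplicative factor:
    factor < 1.0 for an overexposed (too dark) negative,
    factor > 1.0 for an underexposed (too light) one."""
    return previous_seconds * factor

# The article's example: a six-minute exposure produced a negative that
# was too dark, so cut the exposure in half rather than by a few seconds.
print(corrected_exposure(6 * 60, 0.5) / 60, "minutes")  # -> 3.0 minutes
```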
Returning to the darkroom with your exposed negative, you will unload the camera -- again under safelight conditions -- and process the print in a series of chemical baths. Each chemical should be pre-mixed and placed in the plastic trays commonly used for the purpose. All of the following materials are readily available at any decent camera supply store.

The first processing solution is the paper developer. Dektol, a Kodak product, is common. Developer can be mixed 1:2 (one cup Dektol, two cups water), and it will turn brown when exhausted. Under the safelight, gently slide the photo paper negative into the solution and rock the tray carefully. It is a good idea to use photo tongs (not metal tongs) to handle the print. Some people may react to the developer on their hands, and there is also less chance of chemical contamination from tray to tray. Each tray, other than the water baths, should have separate tongs. The negative image will begin to appear after a few seconds in the developer. After about two minutes, development has been achieved, and the negative, if properly exposed, will not change. If the negative has been severely overexposed it will turn completely black. From the developer, the negative is placed in a water bath that removes the Dektol and arrests development. The negative is then placed in the fixing agent.

Kodak fix, or fixer, is a chemical that removes all the unexposed silver particles and hardens the paper surface. After a few seconds in the fixer, room lights may be turned on. Total time in the fixer need not exceed four minutes. The negative is then placed in the last water bath for about four minutes.

Remember to turn off the white light and handle the paper only under a photo light! If your darkroom has an enlarger -- a machine for printing prints from negatives -- you can use its light source for the printing process. Turn on the enlarger -- keep the photo paper packaged -- and adjust the size and shape of the light it projects so that it is several times larger than your intended print. Place a photo easel under the light, or mark the top edge of the light beam with tape, and turn the enlarger off. Now, with all the lights off except the photo light, remove a sheet of fresh photo paper and place it emulsion side up in the easel, or where the light will strike, as marked by the tape. Place your negative, wet or dry, upside down and directly on top of the photo paper. If the negative is wet, roll out any bubbles with your hand or a roller made for that purpose. If the negative is dry, you will probably have to sandwich it and the photo paper under a clean and rather heavy piece of glass.

You are now ready to make a second exposure, this time exposing the paper to the enlarger's light, which will pass right through the negative and reverse the image on the photo paper by creating another silver particle reaction. Now run the newly exposed piece of photo paper through the same chemical process and, there you have it, a wet but finished black-and-white positive print. Blot off the excess water, or use a photo squeegee, and set it out to dry. The resin-coated paper will dry soon and dry flat. If you are in a big hurry for a final dry print, use a hair dryer, taking care not to crackle the resin coating with too much heat. Some photographers have been known to use a microwave to dry resin-coated prints.

If you do not have an enlarger, or access to one, you can still make a positive print. Just turn on the room light. This will take some experimentation, as the light won't be as concentrated as under the enlarger's lamp, but it will work.

Additional Help

There are a number of books on pinhole photography. Some bookstores and larger camera supply houses usually have something on the subject. Also, books on the history of photography usually mention the camera obscura, which operated in similar fashion.

Books in the ASU library:
- Oakes, John Warren. Minimal Aperture Photo, TR268 .024 1986
- Smith, Lauren. Visionary Pinhole, TR268 .S65 1985
- Smith, Lauren. Pinhole Vision I, TR268 .S64x 1981
- Smith, Lauren. Pinhole Vision II, TR268 .S642x 1981
- Eastman Kodak. How to make..., TR268 .E38x 1976

Page authored by Robert Alber and the ACEPT W3 Group
Department of Physics and Astronomy, Arizona State University, Tempe, AZ 85287-1504
Copyright © 1995-2000 Arizona Board of Regents. All rights reserved.
At a time when kids are maturing emotionally and physically, it's important to set up good nutrition habits for the future.

The issue of child weight loss has gotten a lot of attention recently. When Dara-Lynn Weiss wrote in Vogue magazine about the dramatic (some might say Draconian) methods she used to help her seven-year-old daughter lose weight, the media and the public jumped on her. Reproachfully denying her daughter dinner one night after hearing what she'd eaten during a school celebration was one of the admissions that sparked the backlash.

When her daughter's physician told Weiss that her daughter, at 4'4" and 93 pounds, was clinically obese at six years old, she knew she had to take action. Few readers were outraged that a mother would step in to help her daughter become a healthier weight; what sparked controversy were the methods Weiss used, such as snatching hot chocolate from her daughter and pouring it out after a barista was unable to give a calorie count for the beverage. (For the record, her daughter did achieve a healthy weight by age 8.)

The article struck a chord, and not just because of the controversy it sparked. It raised an important issue. Parents find themselves in a difficult and confusing position when they are told their child needs to lose weight because he or she is clinically overweight or obese. The health risks to kids, especially when considered over the course of their lives, are enormous. Serious overweight in children contributes to the development of type 2 diabetes and high cholesterol.

Child Weight Loss Is a Delicate Issue

Weight loss in children is tricky for a number of reasons, not the least of which is that they are still growing and need to have a solid nutritional foundation to maintain that growth. Being overweight is a psychologically loaded issue for a child (as for anyone else): self-esteem, self-worth, and popularity can be wrapped up in it, so it's especially important to come at the weight loss endeavor as productively and positively as possible. Here is some of the best-supported advice for parents who are trying to help their children lose weight. The bottom line: the focus should always be on health, and on making the experience as positive and rewarding - and as anxiety-free - as possible.

Be Sure Weight Loss is Necessary

The National Institutes of Health (NIH) warns that "limiting what children eat may interfere with their growth." So determining the right way to go about it - and whether it's really warranted - is important. There is no one single calculation to determine if a child needs to lose weight. Some use body mass index (BMI) to determine whether a child is overweight or obese, but "BMI is tricky because children haven't reached peak bone mass, and this can affect the measure," says Rebecca Solomon, Registered Dietitian (RD) and Nutrition Coordinator at Mount Sinai Medical Center in New York City, who adds that "the decision is really a multi-factorial one." When it comes to child weight loss, it's always best to get the go-ahead from your pediatrician. In addition to making sure you are doing the right thing, having his or her authority behind you can only help.
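Since BMI keeps coming up as the screening number, here is what the calculation itself looks like, as a minimal sketch. The caveat in the quote above still applies: for children, the raw number only means something against age- and sex-specific growth-chart percentiles, which are not reproduced here.

```python
def bmi(weight_lb, height_in):
    """Body mass index from pounds and inches; the 703 factor
    converts to the metric kg/m^2 definition."""
    return 703 * weight_lb / (height_in ** 2)

# The child described above: 4'4" (52 inches) and 93 pounds.
print(f"BMI: {bmi(93, 52):.1f}")  # ~24.2 -- but for a child, only a
# percentile chart and a pediatrician can say what that number means.
```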
Since some kids' BMI may be on the borderline, the first step is to talk to your child's pediatrician to determine if a weight loss plan is advised. Your child's doctor will look at all the factors - body weight, age, height, eating habits, activity level - and tell you whether it's time to work on developing a plan or whether watching and waiting is enough.

The Best Plan Is The One That's Tailored to Your Child

Once it has been decided that your child would benefit from a weight loss plan, it's important to develop one that's specific to his or her needs. Solomon says that since our society - parents and children alike - is becoming more and more sedentary, kids are less likely than they once were to "outgrow" the baby fat as they age. That is why a specific plan to lose weight is often needed. One recent study found that kids had an easier time sticking to weight loss plans that included more low-glycemic foods (those that raise your blood sugar slowly over time, like fruits, veggies, and whole grains), and a harder time staying with high-protein, low-carbohydrate plans like the Atkins diet. Consulting with an expert to develop the best game plan is a good place to start, but don't forget to ask your child what strategy he or she feels will be best and most successful.

Shift the Focus, Change the Language

Regardless of whether you are overweight or normal weight yourself (more on this below), it is incredibly important to keep the discussion in a positive light, and frame the challenge in such a way that health is the goal, rather than losing weight to "fix" a problem. There are a lot of factors tied up in weight for a child (and for adults, for that matter). For many children, there is often a perceived value judgment associated with body weight, says Solomon. She urges parents to "avoid using negative words like 'fat' or 'heavy,' because there are too many negative connotations." Instead, say something like, "This just means that you weigh more than you should for how tall you are." Then focus on the health-related implications, both physical and mental. This may mean talking to your child about the positive outcomes, like how much better they'll feel physically, how their clothes will fit differently, even how interactions with others may change as a result.

Showing that you empathize with a child who is struggling with his or her weight is really what it's all about. Amy Jamieson-Petonic, MEd, RD, of the Academy of Nutrition and Dietetics and Director of Coaching at Cleveland Clinic adds that "Being overweight carries quite a stigma about it... Physicians, psychologists, registered dietitians, other healthcare providers and family members need to help the child become more comfortable with who they are as a person, and let them know that they are cared for and loved."

Help Your Kid Stay Motivated

Part of a parent's job in weight loss and life is keeping your child motivated - without nagging or pressuring. Suggest activities that put you in it together: try out rollerblading, bike riding, swimming, hiking, or any other physical activity that strikes your child's fancy, always making sure your child has a voice in what activities you do together. It's also important to celebrate your child's success along the way.
For example, a new outfit every so often, to reflect the weight loss your child is experiencing, can help her feel the results in a new way. Food treats are also fine. They underscore that eating isn't the enemy, just eating too much. The goal is always to help your child understand that getting healthy and being active should be fun, not work, and that there will be lots of satisfying rewards - external and internal - throughout the process.

Parents' Weight - and Example - Can Be a Problem or an Advantage

Though it may sound funny, overweight parents who have an overweight child may actually have somewhat of an advantage (more on normal-weight parents below). This is because children pick up the habits of their parents - both the bad and the good. Jamieson-Petonic says, "Families have a HUGE impact on whether or not the child will be successful. When I work with kids, I tell the parents that eating healthy is a family affair, and that everyone needs to be on board, and that everyone will benefit from these strategies."

One of the most effective methods for helping children lose weight is when the parents change their own habits. Kids learn best through observing others' behavior. They also have excellent antennae for picking up habits and moods in the household. This is why it is so important to have healthy habits and a good relationship with food yourself. Your behavior will rub off on your child more than you may want to believe.

When parents are normal weight and their child is overweight, it's a little bit harder. Solomon says that there's an "extra component of perceived judgment" in these situations - and it can be so hard on struggling kids, since it can damage self-esteem and self-worth, which often exacerbates the problem. Kids may often ask themselves, "How come I can't maintain normal body weight, when mom and dad can?" Kids may act out or rebel in these situations by binging when mom or dad isn't around. Solomon likes to meet with children alone, in addition to parent-child meetings, precisely because "a lot of kids just don't speak freely about what they're going through in the company of parents." And helping kids speak freely about what's going on inside them is a big part of finding a solution to the problem. Weight loss programs often actually work better when peers play the leading role and the parents are less involved. Knowing what kind of role to play - and when to step back - is important, and talking to your child to learn what would make him or her feel most comfortable in the weight loss endeavor can be a good place to begin.

Be Firm, Not Strict

Being firm about certain habits is a good, even necessary, tack to take. There are certain ways in which we can set up guidelines to encourage healthier habits in our kids. For example, the NIH recommends limiting the number of hours of TV or video games a child can play every day. In this way, at least some of the unhealthy variables that can contribute to weight gain can be reduced. (You may find yourself benefitting from this concept, too.) On the other hand, too much restriction can backfire. For example, outlawing certain foods in the house is not likely to be a successful method, says Solomon: "Nothing should be restricted. The 'No Cookie' rules almost never work, because I can guarantee you that children are finding the foods elsewhere."
The NIH also urges parents to avoid being too strict, and says that there can even be a place for a little fast food or sweets in a healthy diet. The key is for these foods to be the exception rather than the rule.

It's About Setting Your Child Up for the Future

The pull of the media and McDonald's is awfully hard to compete with. The clever marketing tactics that fast food outlets use, and even the kid-friendly packaging on unhealthy grocery store items, are designed to attract. But it is possible to overcome the pull of the media and help your child make better choices. Here, too, studies show that parents are more effective than they realize when it comes to helping their kids make better food choices. The NIH suggests that parents help their kids be attuned to peer and media pressures by talking about making smart personal choices, rather than letting those around them or the media influence them to make poor ones.

Helping set up a good relationship with food early on is the very best thing you can do for your child. We can't be around our kids 24 hours a day, and that's the way it should be, as Solomon reminds us. But this is why we need to give them the tools to make their own good decisions. (It's also why too much restriction is less likely to be successful.) Teaching healthy habits - by example - will set them up for a lifetime of healthy eating and living. And again, the ultimate focus should be on gaining something positive, not on correcting a negative. Jamieson-Petonic underlines that it's best to "focus on the health benefits of developing a healthier lifestyle, not on weight... The rewards of becoming healthier are tremendous, and if you help kids develop healthy habits today, they will carry them through a lifetime."

This article originally appeared on TheDoctorWillSeeYouNow.com, an Atlantic partner site.
Severe weather, destruction of nesting habitat, and heavy competition with sparrows and starlings have caused a decline in the nation's bluebird population. Although the pressure is not as drastic in Texas as in the North and East, this beautiful member of the thrush family needs a helping hand.

Three species of bluebirds – eastern, western, and mountain – make their homes in Texas during various times of the year. All of them are close in size, 6-1/2 to 7-1/2 inches, and weigh about one ounce.

Most common and widespread is the eastern bluebird, Sialia sialis. Although it is considered a partial migrant, it winters throughout most of Texas, except the Trans-Pecos. Almost anywhere, except the treeless prairies and heavily wooded forests, is suitable to this particular bird's needs. The male has a bright blue back, rusty breast and throat, and white belly and undertail area. Its beautiful coloring caused the famous American writer Henry David Thoreau to say that the bird carries the sky on its back. The American naturalist John Burroughs observed that it also has the warm reddish-brown of the earth on its breast. Coloration of the female is much duller and paler. The young, unlike adults, have mouse-gray backs and the white-speckled breasts so characteristic of thrushes. Only while they are young do these birds display their relationship to the thrush family in their coloration.

When perching, this species appears dumpy and round-shouldered. Flight is considered more or less irregular unless the bird is traveling long distances. Short flights usually are not at a great height. During courtship the male ascends fifty to one hundred feet and then floats down to flutter around the female. He may even offer her food as he woos her with songs and tries to convince her to examine the nest site he has chosen. Finally she flies into the cavity and accepts it and the male. After lining it with grass, she lays four to six light blue eggs. Most, if not all, of the incubation during the required twelve-day period is done by the female. Both parents feed the nestlings, but again, the female does the larger share. However, when the young become fledglings and are able to leave the nest, the male takes over so the female can prepare the nest for a second brood. The male continues to feed the fledglings while teaching them to feed themselves. Sometimes young from the first brood help the parents feed the second brood.

About three-fourths of bluebirds' diet consists of insects such as beetles, grasshoppers, and caterpillars. Berries and other fruit make up the rest of their menu. Food preferences make the bluebird one of those species considered beneficial to people.

The western bluebird, Sialia mexicana, is very similar to the eastern except that the male's throat is blue and he has a rusty patch on his back. Females are duller than the males and have a whitish throat. This species winters in the Trans-Pecos and breeds in the Guadalupe Mountains.

Except for its whitish belly, the mountain bluebird, Sialia currucoides, is a beautiful turquoise blue. No red appears on either the male or the female. In fall and winter the male's plumage shows touches of dull brown, which is the predominant year-round color of the female. Her drab coloring is relieved only by bluish markings on her rump, tail, and wings.
The mountain bluebird, which winters in the western two-thirds of Texas, has a straighter, less hunched posture than the other bluebirds.

All species of bluebirds are cavity nesters, which means they nest in holes in trees, shrubs, fence posts, and birdhouses. With a bit of interior remodeling, they can convert abandoned woodpecker holes into comfortable nests. Chip-strewn floors may be all right for hardy woodpeckers, but a soft grass lining must be added for the more delicate young bluebirds.

At one time there were plenty of natural nesting sites for the "blue robin," a name given the bird by early settlers because of its reddish breast. Its preference for sites bordering open areas was met as the pioneers cleared forest lands for farming. The holes in the posts and rails of the wooden fences they built provided additional nesting places, and the bluebird's population grew. The settlers' first efforts benefited the bird, but later actions were not so kind. When early settlers imported the English house sparrow and the European starling, both cavity-nesting birds, they brought to America two species in direct competition with the bluebird for available nesting sites. Since sparrows and starlings are extremely aggressive, the gentle bluebird often lost out to its foreign competitors. Non-migrating sparrows contested the bluebird's right to live in cities and towns in its northern range by being well established in all available housing by the time the bluebirds returned from their southern migration. There was nothing the bluebirds could do but move to the country. Fortunately for them, sparrows seldom use abandoned woodpecker holes or natural cavities in decaying trees as homes.

Changing lifestyles also brought problems for the bluebird. As small farms were consolidated into larger, more profitable agricultural complexes, thousands of miles of hole-riddled wooden fences were eliminated. Metal fence posts often replaced wooden ones that had provided nest sites along our roadsides. The invention of the chainsaw did not help the bluebird either. These efficient machines made it possible for landowners to cut down old, unsightly, cavity-filled trees from pastures and fencerows, thereby removing natural bluebird housing.

Severe weather also takes its toll on the brightly colored birds. Although the bluebird is an early migrant, it is not a hardy bird. Prematurely warm weather may draw flocks of them north too soon, and then they freeze when cold weather returns.

With everything working against them, it is a wonder there are any bluebirds left at all. Noticing a decline in the birds' numbers, concerned conservationists launched several campaigns to provide artificial housing for the birds. Results have been very good, especially when the houses have been placed outside the city limits or in parks. In some areas, bluebird trails have been established on rural roads. The bluebird houses are attached to fence posts or trees and spaced no closer than 200 feet nor more than a half-mile apart along the roads for miles. One man in Illinois in one season put 102 houses along 43 miles of road near his home. The world's longest bluebird trail stretches through Manitoba and Saskatchewan in Canada. Its 7,000 nesting boxes cover about 2,000 miles of roadways. More than 8,000 young bluebirds and 15,000 tree swallows, a species which also finds bluebird houses to its liking, were raised in these Canadian nests in one year.
When bluebirds are present, they adapt quickly to the artificial nesting cavities and even seem to prefer them to natural ones. For those of you who would like to help the bluebirds, here are some instructions for building their houses. Whether the house design is plain or fancy makes no difference to the birds, but there are some basic requirements that must be met.

First, and very important, is the size of the entrance hole. It should be no larger than 1-1/2 inches in diameter and should be located so the lower edge of the hole is between 4 and 5-1/2 inches from the bottom of the house. If the hole is smaller than the prescribed size, the bluebird cannot enter. If the hole is placed too low, there isn't enough space below it for nesting material; however, a hole placed too high could prevent the nestlings from reaching the opening to the world of flight. No perch or landing platform should be attached beneath the entrance hole. Such accessories attract sparrows and discourage bluebirds.

Floor space may vary from an eight-inch square to a less spacious four-inch square. Trim off the four corners slightly, or drill a half-inch hole in each one, to provide floor drainage. Recommended side height is eight inches, but it can be taller as long as the entrance hole spacing is correct. For ventilation, drill four one-fourth-inch holes in each side about an inch below the roofline, or allow the sides to be one-fourth inch shorter than the front and back to create a crack between the roof and sides. The front, roof, or bottom should be hinged in some manner so the house can be cleaned before each nesting season. The house should not be cleaned between the first and second brood in one season.

Color has little to do with acceptance or rejection by nesters, but if paint or stain is applied, it should be confined to the outside. Hot sun and treated interiors can combine to create noxious fumes capable of killing nestlings.

Bluebird houses should be hung so they will not swing in the breeze. For best results, attach them firmly to a post or tree at least five feet from the ground in open areas. Bluebirds nest successfully in old fence posts at heights of two or three feet, but they are not as likely to attract predators in these natural cavities as in man-made houses, because their fence posts look like hundreds of other unoccupied fence posts. To prevent climbing predators from reaching the nest, it may be necessary to add a metal shield below the house. Greased metal poles also help to discourage predators. Wherever you put your birdhouse, make sure no overhanging branches or foliage prevent the birds from flying directly to the entrance. Some birders insist that the entrance face south, but others claim the house may face any point on the compass.

Although the 1-1/2-inch entrance hole excludes starlings, sparrows have no trouble entering. If a sparrow lays claim to your bluebird house before a bluebird is attracted, remove the sparrow's nest as quickly as it is built. This may have to be repeated several times before the nesting sparrow gives up and moves to another location. Only with your help will the mild-mannered bluebird be able to compete with the sparrow. Your efforts, whether you build one or a dozen bluebird houses, will help this bird compete for nesting space. Wouldn't it be tragic if the lack of housing wiped this beautiful songbird from the face of the earth?
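The dimensions in the instructions above are the ones builders most often get wrong by a fraction of an inch, so they are worth double-checking on paper before cutting. Here is a minimal sketch of such a check (all values in inches; the function is illustrative, not part of the original guide):

```python
def check_bluebird_box(hole_diameter, hole_bottom_height, floor_side, side_height):
    """Check a planned nest box (inches) against the specs given above."""
    problems = []
    if hole_diameter > 1.5:
        problems.append("hole over 1-1/2 inches will admit starlings")
    elif hole_diameter < 1.5:
        problems.append("hole under 1-1/2 inches keeps bluebirds out")
    if not 4 <= hole_bottom_height <= 5.5:
        problems.append("hole's lower edge should be 4 to 5-1/2 inches above the floor")
    if not 4 <= floor_side <= 8:
        problems.append("floor should be between a four- and eight-inch square")
    if side_height < 8:
        problems.append("sides should be at least eight inches tall")
    return problems or ["box meets the specifications above"]

print(check_bluebird_box(1.5, 5.0, 4.0, 8.0))  # -> ['box meets the specifications above']
```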
Build a Bluebird House
- 1 x 10-inch lumber, 33 inches long.
- 6-1/2 inches of 1/2-inch wood dowel, or a metal hinge.
- One 1-1/2-inch wood screw with washer.
- 20 to 25 nails, 1-1/2 to 1-3/4 inches long.
- Wire or ring-shank nails to attach the box to a post.
- Dimensions given are for 3/4-inch-thick lumber.
- Make the entrance hole precisely 1-1/2 inches in diameter and 1-1/4 inches from the top.
- Provide space between the top and sides for ventilation.
- If possible, use 1-3/4-inch galvanized siding nails or aluminum nails.
- Round the corners on the bottom of the box for drainage, and recess the bottom 1/4 inch.
- Roughen the inside of the front board by making notches with a saw or holes with an awl or drill, to assist the young in climbing to the entrance hole.
- The top of the box should be attached at the back by a 1/2-inch wooden dowel or metal hinge, and in front by a 1-1/2-inch wood screw, to facilitate easy opening for inspection and cleaning.
- Drill two or three holes in the back panel of the box, above and below the enclosure, to aid in quick, easy attachment to a pole or post.
- Do not add any type of perch to the box; it will only serve to attract sparrows.

Site selection is the single most important step in having a successful bluebird program. Bluebirds utilize only a very specific type of habitat for nesting and only rarely will deviate from it. In general, bluebirds prefer open areas with scattered trees where the ground is not covered with tall undergrowth. There are three general areas that should be avoided when selecting a nest site:
- Avoid placing nest boxes in towns or within the immediate area of farm yards. House sparrows invariably will occupy every such nest box.
- Do not place boxes in heavy timber. Bluebirds prefer sites associated with timber, but at the edge of a clearing rather than in the timber stand itself.
- Do not place boxes in or near areas of widespread insecticide use. Bluebirds feed almost entirely on insects during the nesting season.

Installation and Maintenance
- Place boxes at 150- to 200-yard intervals.
- Mount boxes about five to seven feet above ground level. Fence posts make excellent mounting sites.
- Clean boxes as soon as possible after a successful hatch. Bluebirds will not utilize the same nest box unless it is cleaned.

1989 – Bluebirds: Introducing Birds to Young Naturalists. The Louise Lindsey Merrick Texas Environment Series, No. 9, pp. 24-27. Texas A&M University Press, College Station.
Strikes are suspensions of work against an employer, factory, or industry, undertaken by workers and maintained until the entity being struck meets some demand. Most strikes are undertaken by labor unions during collective bargaining, in attempts to improve workplace conditions, increase wages, or obtain better contracts between the union and the company. Strikes are sometimes used to put pressure on governments to change policies, and occasionally they destabilize the rule of a particular political party.

Strikes are generally initiated out of good intentions on the part of the workers, as a way of pressuring employers or the government to treat them more fairly for the benefit of all. However, self-centered thinking can cloud the issue. When unions do not take into account the needs of society as a whole but seek benefits only for themselves, the results may be harmful to all. In such cases, government must intervene for the good of all its citizens. On the other hand, when government abuses its power, a general strike is an effective, non-violent way of forcing those in power to rethink their position.

The strike tactic has a very long history. Towards the end of the twentieth dynasty, under Pharaoh Ramses III in ancient Egypt in the twelfth century B.C.E., the workers of the royal necropolis organized the first known strike or workers' uprising in history. Much later, in 1768, in support of demonstrations in London, sailors "struck," or removed, the top-gallant sails of merchant ships at port, thus crippling the ships.

A famous American example occurred during the economic panic of 1893, when the Pullman Palace Car Company cut wages 28 percent as demand for its train cars plummeted and the company's revenue dropped. When Pullman refused to negotiate, 4,000 Pullman Palace Car Company workers reacted by going on a wildcat strike in Illinois on May 11, 1894, bringing traffic west of Chicago to a halt. The strike was broken up by United States Marshals and some 2,000 United States Army troops, commanded by Nelson Miles, sent in by President Grover Cleveland on the premise that the strike interfered with the delivery of U.S. Mail, ignored a federal injunction, and represented a threat to public safety.

In most countries, strikes were long illegal, as factory owners had far more political power than workers. However, most Western countries partially legalized striking in the late nineteenth or early twentieth centuries.

Strikes occur for a number of reasons, the most common being economic issues (wages and hours of work) and workplace conditions. United States labor law draws a distinction, in the case of private sector employers covered by the National Labor Relations Act, between "economic" and "unfair labor practice" strikes. An employer may not fire, but may permanently replace, workers who engage in a strike over economic issues. On the other hand, employers charged with committing unfair labor practices (ULPs) may not replace employees who strike over ULPs, and must fire any strikebreakers they have hired as replacements in order to reinstate the striking workers.

Most strikes are undertaken by labor unions during collective bargaining. The object of collective bargaining is to obtain a contract (an agreement between the union and the company), and the contract may include a no-strike clause which prevents strikes, or penalizes the union and/or the workers if they walk out while the contract is in force.
The strike is typically reserved as a threat of last resort during negotiations between the company and the union, which may occur just before, or immediately after, the contract expires. Strikes may be specific to a particular workplace, employer, or unit within a workplace, or they may encompass an entire industry, or every worker within a city or nation. Strikes that involve all workers, or a number of large and important groups of workers, in a particular community or region are known as general strikes.

Under some circumstances, strikes may take place in order to put pressure on the state or other authorities. A notable example is the Gdańsk Shipyard strike led by Lech Wałęsa. This strike was significant in the struggle for political change in Poland, and was an important mobilization that contributed to the fall of communist governments in Eastern Europe.

A strike may consist of workers refusing to attend work or picketing outside the workplace to prevent or dissuade people from working in their place or conducting business with their employer. Less frequently, workers may occupy the workplace but refuse either to do their jobs or to leave. This is known as a "sit-down strike." Generally, strikes are rare: according to the News Media Guild, 98 percent of union contracts in the United States are settled each year without a strike.

Occasionally, workers decide to strike without the sanction of a labor union, either because the union refuses to endorse such a tactic, or because the workers concerned are not unionized. Such strikes are often described as "unofficial." Strikes without formal union authorization are also known as "wildcat strikes." In many countries, wildcat strikes do not enjoy the same legal protections as recognized union strikes, and may result in penalties for the union members who participate or for their union. The same often applies to strikes conducted without an official ballot of the union membership, as is required in some countries such as the United Kingdom.

Another unconventional tactic is work-to-rule (also known as an "Italian strike," in Italian sciopero bianco), in which workers perform their tasks exactly as they are required to but no better. For example, workers might follow all safety regulations in such a way that it impedes their productivity, or they might refuse to work overtime. Such strikes may in some cases be a form of "partial strike" or "slowdown." Italian law permits the tactic, since no one can be sanctioned for following safety or security rules, but it is "unprotected" in some circumstances under United States labor law, meaning that while the tactic itself is not unlawful, the employer may fire the employees who engage in it.

During the development boom of the 1970s in Australia, the "green ban" was developed by certain more socially conscious unions. This is a form of strike action taken by a trade union or other organized labor group for environmentalist or conservationist purposes. It developed from the "black ban," strike action taken against a particular job or employer in order to protect the economic interests of the strikers.

A sympathy strike is, in a way, a small-scale version of a general strike, in which one group of workers refuses to cross a picket line established by another as a means of supporting the striking workers.
Sympathy strikes, once the norm in the construction industry in the United States, have been made much more difficult to conduct by decisions of the National Labor Relations Board permitting employers to establish separate or "reserved" gates for particular trades, making it an unlawful secondary boycott for a union to establish a picket line at any gate other than the one reserved for the employer it is picketing. Sympathy strikes may be undertaken by a union as an organization or by individual union members choosing not to cross a picket line. In Britain, sympathy strikes were banned by the Thatcher government in 1980.

A "jurisdictional strike" in United States labor law refers to a concerted refusal to work undertaken by a union to assert its members' right to particular job assignments and to protest the assignment of disputed work to members of another union or to unorganized workers.

Employers can also go on strike, either through a lockout of workers (blocking workers from working normally, resulting in loss of wages) or through an investment strike (refusing to commit funds to maintaining or expanding production).

In a "student strike," students (sometimes supported by faculty) refuse to attend school. Unlike other strikes, the target of the protest (the educational institution or the government) does not suffer a direct economic loss but rather a loss of public image. A hunger strike is the voluntary refusal to eat. Hunger strikes are often used in prisons as a form of political protest. Like student strikes, a hunger strike aims to worsen the public image of the target. A "sickout," also known as the "blue flu," is a quasi-legal way for police, firefighters, and air traffic controllers to strike: they call in sick en masse. A "Japanese strike," by contrast, has workers maximizing their output: they are nominally working as usual, but the surplus can disrupt planning, especially in just-in-time systems.

The Railway Labor Act bans strikes by United States airline and railroad employees except in narrowly defined circumstances. The National Labor Relations Act generally permits strikes, but provides a mechanism to enjoin strikes in industries in which a strike would create a national emergency. The federal government invoked these statutory provisions to obtain an injunction against a slowdown by the International Longshore and Warehouse Union in 2002.

Some jurisdictions prohibit all strikes by public employees (under such laws as the "Taylor Law" in New York). Other jurisdictions limit strikes only by certain categories of workers, particularly those regarded as critical to society: police and firefighters are among the groups commonly barred from striking in these jurisdictions. Some states, such as Iowa and Florida, do not allow teachers in public schools to strike. Workers have sometimes circumvented these restrictions by falsely claiming inability to work due to illness—this is sometimes called a "sickout" or "blue flu." The term "red flu" has sometimes been used to describe this action when undertaken by firefighters.

It is also illegal for an employee of the United States federal government to strike. Prospective federal employees must sign Standard Form 61, an affidavit not to strike. President Ronald Reagan terminated air traffic controllers after their refusal to return to work from an illegal strike in 1981.

In Communist regimes, such as the former USSR or the People's Republic of China, striking is illegal and viewed as counter-revolutionary.
Since the government in such systems claims to represent the working class, it has been argued that unions and strikes are not necessary there. Most totalitarian systems of the left and right also ban strikes. In some democratic countries, such as Mexico, strikes are legal but subject to close regulation by the state.

The term "scab" is a highly derogatory term most frequently used to refer to people who continue to work when trade unionists go on strike. This is also known as "crossing the picket line" and often results in such workers being shunned or even assaulted. The terms "strike-breaker," "blackleg," and "scab labor" are also used. Trade unionists also use the epithet "scab" to refer to workers who are willing to accept terms that union workers have rejected, thereby interfering with the strike action. Some say the word comes from the idea that the "scabs" are covering a wound; however, "scab" was an old-fashioned English insult. An older word is "blackleg," found in the old folk song "Blackleg Miner," which has been sung by many groups.

The classic example from United Kingdom industrial history is that of the miners from Nottinghamshire, who during the 1984-1985 miners' strike did not support strike action by fellow mineworkers in other parts of the country. Those who supported the strike claimed that this was because the Nottinghamshire miners enjoyed more favorable mining conditions and, thus, better wages. However, the Nottinghamshire miners argued that they did not participate because the law required a ballot for a national strike, and around 75 percent of their area vote had gone against a strike. During "economic" strikes in the U.S., scabs may be hired as permanent replacements.

The concept of "union scabbing" refers to any circumstance in which union workers, who normally might be expected to honor picket lines established by fellow workers during a strike, are inclined or compelled to cross those picket lines or otherwise engage in workplace activity that may prove injurious to the strike. Unionized workers are sometimes required to cross picket lines established by other unions because their organizations have signed contracts that include no-strike clauses. The no-strike clause typically requires that members of the union not conduct any strike action for the duration of the contract. Members who honor the picket line in spite of such a clause frequently face discipline, because their action may be viewed as a violation of the contract. Therefore, any union conducting a strike typically seeks to include in the agreement that settles the strike a provision of amnesty for all who honored the picket line.

No-strike clauses may also prevent unionized workers from engaging in solidarity actions for other workers even when no picket line is crossed. For example, striking workers in manufacturing or mining produce a product which must be transported. In a situation where the factory or mine owners have replaced the strikers, unionized transport workers may feel inclined to refuse to haul any product produced by strikebreakers, yet their own contract obligates them to do so.

Historically, the practice of union scabbing has been a contentious issue in the union movement and a point of contention between adherents of different union philosophies.
For example, supporters of industrial unions, which have sought to organize entire workplaces without regard to individual skills, have criticized craft unions for organizing workplaces into separate unions according to skill, a circumstance that makes union scabbing more common. Union scabbing is not, however, unique to craft unions.

Most strikes called by unions are somewhat predictable; they typically occur after the contract has expired. However, not all strikes are called by union organizations—some strikes have been called in an effort to pressure employers to recognize unions. Other strikes may be spontaneous actions by working people. Whatever the cause of the strike, employers are generally motivated to take measures to prevent them, mitigate the impact, or undermine strikes when they do occur.

Companies which produce products for sale will frequently increase inventories prior to a strike. Salaried employees may be called upon to take the place of strikers, which may entail advance training. If the company has multiple locations, personnel may be redeployed to meet the needs of reduced staff. Some companies negotiate with the union during a strike; other companies may see a strike as an opportunity to eliminate the union. This is sometimes accomplished by the importation of replacement workers, or strikebreakers. Historically, strike breaking has often coincided with union busting.

One method of inhibiting a strike is elimination of the union that may launch it, which is sometimes accomplished through union busting. Union-busting campaigns may be orchestrated by labor relations consultants, and may utilize the services of agencies that engage in intelligence gathering or that provide asset-protection services. Similar services may be engaged during attempts to defeat organizing drives.

Another counter to a strike is a lockout, the form of work stoppage in which an employer refuses to allow employees to work. Two of the three employers involved in the southern California grocery workers' strike of 2003-2004 locked out their employees in response to a strike against the third member of the employer bargaining group. Lockouts are, with certain exceptions, lawful under United States labor law.
Generally you must have 5-15 apneas per hour to be defined as mild. However, there is another breathing disorder newly being recognized: UARS (upper airway resistance syndrome). Many doctors will not treat this; they usually just say it's due to "depression" or "stress." Perhaps your doctor can prescribe a trial of bi-pap just to see if it helps. Informational sites I enjoy reading are sleepapnea.org and talkaboutsleep.com. Sleep disorders are commonly treated as psychiatric disorders; do your research and find another doctor. Please keep us posted as to your progress.

Thanks, scarlet37. What you describe is the measurement for obstructive sleep apnea (mild: AHI of 5-15). Complex sleep-disordered breathing is a distinct form of sleep apnea. It has recognizable characteristics that are present without, and often worsened during, positive airway pressure treatment. Both sleep state stability and the behavior of the respiratory control system contribute to this complexity. It is only with a clear understanding of the factors contributing to complex sleep-disordered breathing that truly effective clinical therapy can be implemented for this disorder, which to date is poorly controlled (see Recognition and Management of Complex Sleep-Disordered Breathing).

Anyone who has ever gone without a good night's sleep knows that doing so can make a person emotionally irrational. While past studies have revealed that sleep loss can impair the immune system and brain processes such as learning and memory, there has been surprisingly little research into why sleep deprivation affects emotions. When we're sleep deprived, it's really as if the brain is reverting to more primitive behavior, regressing in terms of the control humans normally have over their emotions. Without sleep, the emotional centers of our brains dramatically overreact to bad experiences. Too frequently, though, in my experience some physicians tend to ignore this common sense; fortunately, we now have research that shows otherwise (see the study of the amygdala in the Oct. 23, 2007 issue of the journal Current Biology). Acceptance by the physician is another obstacle the patient has to overcome. The amygdala, in the temporal lobe, affects emotion and controls anxiety; when a person is sleep deprived, the amygdala works overtime.

When physicians see someone with anxiety or even depression (and who wouldn't be depressed from not getting decent sleep?) who also has problems with sleep, or even worse, suffers from sleep apnea, they prescribe meds without focusing on the cause or source of the anxiety or depression. This is a band-aid solution, at best. At worst, anti-depressants have nasty side effects; many of them don't work better than a placebo, while others just "zone you out."

It's very frustrating that I am not able to find any information on the parameters (numbers) for diagnosing Complex Sleep Apnea (CompSAS), like the mild, moderate, and severe values posted above for OSA. It seems that diagnosing this is very subjective. If the patient is not responding to xPAP, physicians typically blame it on a psychiatric issue. It's no wonder that my research has shown that few physicians worldwide know much about Complex Sleep-Disordered Breathing or Complex Sleep Apnea. Some say it's solely to do with constant positive pressure, and others don't know.
I just came across this: http://www.resmed.com/en-au/clinicians/about_sleep_and_breathing/documents/complex_sleep_apnea_education_factsheet_1011213.pdf

A diagnosis of central sleep apnea (CSA) requires all of the following:
• An apnea index > 5
• Central apneas/hypopneas > 50% of total apneas/hypopneas
• Central apneas or hypopneas occurring at least 5 times per hour
• Symptoms of either excessive sleepiness or disrupted sleep

Wouldn't complex be treated the same as central or obstructive? As far as the docs not knowing much, the same could be said in regards to narcolepsy. It has taken me 2 years and a thyroid surgery to get the true dx to go with my problems. Although I am certain I do have breathing issues in addition to narcolepsy, I cannot get an MD to treat me properly. Having said that, I DO wholeheartedly understand sleep deprivation and its emotional effects on the brain. I am tired of being treated with medications that don't work effectively, and when they don't, I am thrown on yet another anti-depressant. My new psychiatrist heard me clearly last week, agrees that this is NOT a mental disorder, and has referred me to a neurologist he works closely with. I just hope he is as good as the psych. I wish I could give more specific info for you, Sam; please check the above sites I mentioned, as their forums are a wealth of info.

Glad to hear your psych is seeing the light and has referred you to a neurologist. That's very encouraging. Yes and no; there is really no way to treat central apnea effectively. One interpretation (from Resmed) is that complex sleep apnea (CompSA) is a form of sleep apnea in which central apneas persist or emerge during attempts to treat obstructive events with a continuous positive airway pressure (CPAP) or bilevel device. Yet other studies, from Geoffrey S. Gilmartin, Robert W. Daly, and Robert J. Thomas, indicate that that's just one way. Complex sleep apnea is still poorly understood, and the reasons it occurs are not clear. It's very "challenging," as the scientists write, to treat CompSA. Patients with CompSA cannot be adequately treated with a CPAP or bilevel device. The clinical consequences are residual symptoms (fatigue, sleepiness, depressed mood) and intolerance to therapy.

The criteria used to identify CompSA, which I found at the Resmed link above, are:
• The persistence or emergence of central apneas or hypopneas upon exposure to CPAP or bilevel when obstructive events have disappeared.
• CompSA patients have predominantly obstructive or mixed apneas during the diagnostic sleep study, occurring at least 5 times per hour.
• With use of a CPAP or bilevel device, they show a pattern of central apneas and hypopneas that meets the Centers for Medicare Services (CMS) definition of CSA.

(A short sketch at the end of this post shows how these numeric cutoffs fit together.)

Patients with CompSA may be seen as those who cannot tolerate conventional CPAP or bilevel therapy, both during lab titration and at home. Neither CPAP nor bilevel therapy seems to alleviate their sleep disorders. For CompSA patients, treatment with CPAP or bilevel therapy will leave them with a somewhat elevated AHI, and their disorder will not be completely resolved. Bilevel therapy has traditionally been used to treat central apneas, along with oxygen therapy; oxygen therapy is not without its complications, either. There is a new (noisy) machine to help with CompSA, an SV machine, but its effectiveness is equivocal and it is very costly, at around $9,000.
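To make the numbers quoted above concrete, here is a minimal sketch of how they combine. This is only an illustration for this thread, not a clinical tool; the function names are made up, only the mild OSA band is actually quoted in the posts above, and the moderate and severe cutoffs follow the standard published scale.

```python
def osa_severity(ahi):
    """Label an apnea-hypopnea index (events per hour of sleep).

    Only the mild band (AHI 5-15) is quoted in this thread; the
    moderate and severe cutoffs follow the standard published scale.
    """
    if ahi < 5:
        return "below diagnostic threshold"
    if ahi <= 15:
        return "mild"
    if ahi <= 30:
        return "moderate"
    return "severe"


def meets_csa_definition(apnea_index, central_events, total_events,
                         central_per_hour, sleepy_or_disrupted):
    """Apply the four CSA criteria from the Resmed factsheet above.

    A diagnosis requires ALL of them: apnea index > 5; central events
    more than 50% of all apneas/hypopneas; central apneas or hypopneas
    occurring at least 5 times per hour; and symptoms of excessive
    sleepiness or disrupted sleep.
    """
    return (apnea_index > 5
            and central_events > 0.5 * total_events
            and central_per_hour >= 5
            and sleepy_or_disrupted)


# Example: 60 events in an 8-hour study, 40 of them central.
print(osa_severity(60 / 8))  # -> "mild"
print(meets_csa_definition(apnea_index=7.5, central_events=40,
                           total_events=60, central_per_hour=5,
                           sleepy_or_disrupted=True))  # -> True
```

On the CompSA definition itself, the same style of check would simply require predominantly obstructive or mixed events (at least 5 per hour) on the diagnostic study, with the central pattern above emerging or persisting once CPAP or bilevel removes the obstructive events.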
It's further troubling for me now because I am having major, intense headaches that are waking me up in the middle of the night. These headaches linger all day. I can't find a physician who will acknowledge the sleep data from my xPAP machine, how I feel, or how it's affecting my quality of life. They look at the results from my PSG, and although those are not the best, they're not the worst either; they're just looking at the numbers rather than analyzing them or my symptoms... in fact, ignoring most of them.

I have recently been diagnosed with complex sleep apnea. About ten years ago I was diagnosed with obstructive sleep apnea and had a UPPP. It is not a pleasant surgery and is extremely painful for about 10 days afterward. I was much better for a long time... recently (the past 6 months) I have had a return of the daytime sleepiness and increased depression. Do not knock the anti-depressants; none that I tried ever zoned me out. The right one in the right dose helps tremendously. I am alert, much happier, and much more pleasant to those around me. I have just had the sleep study that gave me the complex sleep apnea diagnosis and have not seen my doc yet. But if you do your research you will see that the VPAP with the proper settings has tremendous success with complex sleep apnea.

VPAP itself is actually not very effective in treating CompSA. You may be thinking of the Resmed SV, which is the first and only recognized machine/treatment for CompSA. Respironics has its own version of the SV. Basically, the SV is a non-invasive ventilator (it breathes for you).

Suri123, I am not using any aids to ease my symptoms. I have tried many in the past, including prescription sleeping pills and anti-depressants, but they have not worked.

Hi Sam888, how are you doing? Hope you feel better now. Thanks for your reply and the update on the Resmed SV, which is far better compared to a CPAP machine, as it has many more advantages, like delivering different levels of air pressure (increased when the patient inhales and decreased when he exhales), making it easier for sleep apnea patients who have difficulty breathing spontaneously at their own rate and depth. It can also be useful for those with a neuromuscular disorder, as it does not require an active process. More information about the product can be seen at this link: http://www.2p-cpap.com/products.asp?category_id=4. Consult your doctor for advice. Take care and post your progress.

I have many medical conditions that I have to have treated. Now I have recently been diagnosed with complex sleep apnea and severe insomnia. During the sleep test, they said I sleep 2 hours a night, and that is not all at one time. I watch the clock beside my bed. I have seven people living in my house, so there is no other room for me to go to. I don't want to wake up the kids. The insurance will only pay for CPAP. I will have to stop working because my health concerns and problems will bother others. So I will have no insurance, no money, and not much time to live. Now I have heart problems. The doctors have advised me that I will only live for a few more years. Too much is wrong, and other things are showing up. So try to find a way to treat your medical problem. Don't stop. Life is too short. For those who have sleep problems, keep trying to find ways to help you live a better life. Or you won't have a life.
Schools and communities increasingly understand the importance of getting kids to run. Programs proliferate around the country, but few address the key question: how do you get kids to want to run, for a lifetime?

Getting kids to run starts with learning what makes them want to run. T-shirts, water bottles, or entering some big kids' events may hold their interest for a while, but they don't make kids want to run. The triggers that start kids running, and will keep them running, are having fun, being with friends, setting and accomplishing simple goals, enjoying success, and having ownership of their running. For parents and coaches, it is helping children discover that running isn't a program with a starting and ending date, nor is it something they are expected to do because mom or dad does it. It is understanding what motivates children and applying those principles to running.

Making Running Fun

When children are having fun, they will likely continue doing whatever they are doing. For parents and coaches, this means if you want children to run, you have to make running fun. Forget about drills and warm-up routines, distances to be run each day, or preparing for an end-of-the-program race. These will be dealt with later. The only thing that matters is making every run a fun run. If you want kids to have fun, you have to be having fun yourself. Be ready to laugh and enjoy the moment, whatever the moment brings.

Start with running in different places as often as possible, including a few unusual places where kids wouldn't think of running. Keep changing the focus by running on a trail one day and going to a track to learn starts and relay exchanges the next; have special holiday runs, and periodically have a team picnic or go to a local pool or lake on a hot day. Instead of just running laps, set up a playground obstacle course; plan a Scavenger Hunt Run in a local park; do continuous relays to set team records for how many miles or laps completed; have a pajama run or a flashlight run; host a kids pentathlon; do a Prediction Run or a running version of King-on-the-Mountain. And don't forget simple games like an Indian run or a running version of Red Light, Green Light or Simon Says. They will get the kids laughing, even the older ones.

If the kids greet each other after a run by giving each other high-fives and chest bumps, are cheering for each other, or staying around after each run is finished, then whatever effort is being made to make running fun is probably working. Keep it up!

Friendships and Belonging

Being with friends and enjoying a sense of belonging are far more important for getting young children running, and keeping them running, than any incentives you may use. With friends, kids are anchored, secure, comfortable, and ready to be engaged. Without them, kids are uncertain and less ready to take on new challenges. If all the kids come from one school or neighborhood, friendships and belonging can happen naturally. If not, coaches should make the process of building friendships and creating a sense of belonging a top priority. Form new groups or pairings with each run so the children get to know each other, design team challenges that make the children work together to accomplish some task, form relay teams made up of kids of different abilities, and prompt each runner to cheer for his or her teammates. Belonging is triggered by creating a team atmosphere, even if the team doesn't compete.
Teach kids to run together, as a team, rather than everyone running at a different pace. Build a team identity with a name, team colors, a team slogan, and a logo for T-shirts. Keep team records, like how far the team ran on a special night, and plan team parties or trips to the local swimming pool. For special runs, like a kids fun run attached to a local road race, have the runners wear their team T-shirt. It will make the kids feel special and allow others to see these kids are part of something special.

Engineering for Success

Experiencing success is a very powerful motivator for anyone, and especially for children. Cunningham and Allington, in Classrooms that Work, a book about fourth- and fifth-grade reading programs, say, "Success precedes motivation and once children see they can be successful, they will participate; thus teachers must engineer success." Their message applies equally well to kids and running.

Engineering for success starts with designing a running program built on incremental steps, with opportunities for each runner to succeed at each step. When kids do it right, tell them they did it right. Do not just tell kids, "You did great today," but tell them what they did that was great, like running an even pace, running as a team, or doing an obstacle course faster the second time than the first. Make a big deal out of it and make it sincere.

Another aspect of engineering for success is creating runs where the same children are not always bringing up the rear. You can find many ways to do this. One is to imagine the spokes of a wheel. Each spoke is a different route, with the coach standing in the center. One route may be out around a tree and back to the start. Others can be up a short hill and back, or to the playground to do the monkey bars and back. Send a different child off on each spoke and rotate so each child does them all. If there are more kids than spokes, send them off in waves 30 seconds apart.

Another is the cloverleaf run. In this, the coach sets up three loops, all coming together where the coach is stationed. Each loop is a different distance, with the shortest one being for new, young, or slower runners. Let the kids run the same loop four to five times with a rest between runs. The kids who are running the short loop may do it in, say, 1:24. The other runners, those on the longer loops, also may be hitting the 1:24 range. Although the kids aren't racing per se, they generally respond to the challenge of running as fast as the others, even if their loop is shorter.

Setting Goals

It is never too early for children to start setting goals and working to accomplish them, providing they are appropriate goals — ones that can be accomplished in each run or at the next run. Big goals, like running a 5K even before they can run a half-mile, are too hard for kids to visualize and too far in the future to have meaning. For young runners, running the same pace each time for a short trail run, doing one more run up and down the bleachers, or finishing together in a group run are all goals that can be accomplished within a single run. The runners adopt a goal and they master it, all in one practice. Doing a circuit three times on one night and setting a goal of doing it four times a week later is also a doable goal, one that most young children can understand and relate to. Earning a medal in a kids run several weeks away is not as relevant.
Kids need to experience success today, and some goals, especially those looming down the road, may produce stress in children. Just forget about the medals until the day of the run and give the children only the intermediate goals that will work toward it. Setting and meeting goals are integral to experiencing success and need to include both team and individual goals. Some children may not be ready to run as far as others, but together they can set team records for how many laps they completed on a favorite course. Congratulate them, give high-fives, shake hands, or do whatever else conveys your support for a job well done, for doing what they were trying to do.

If kids are going to start running, it should be their decision to run — not their mom's or dad's, but theirs. For adults, this means instead of encouraging (i.e., pushing) children to run, start creating opportunities for children to discover running. Take them to places where they can see runners or to places where other kids are running. Play running games with them, walk/jog to the store together rather than driving, or take them to watch a kids fun run. Let them make the decision to enter the next one.

For a parent who runs with his or her young child, make it the child's run, not yours. Run with your child either before, after, or completely separate from your own run. You can't make the child the center of attention when you are doing your own workout. And be sure to make the run simple and fun. Improvise an obstacle course that the child will likely do better than the adult, or do a walk/run on a trail through the woods, stopping to explore nature. Just make whatever you do theirs.

For parents running with their child in a kids fun run, first be sure the child really wants you to run. If they want you, then get them settled down so they don't go out too fast, run beside your child or even slightly behind, and let them set the pace. This is their run. Let them own it. If your child has left you in a cloud of dust, your job is done. Just get over to the side, jog to the finish, and discreetly step off the course.

When children run because of the feeling of satisfaction and the simple enjoyment they get out of it, they are intrinsically motivated. Bravo! This is as it should be. But we do recognize that the promise of a water bottle, T-shirt, or Feelin' Good Mileage Club Toe Tokens — extrinsic motivators — has gotten millions of children running. The question is not whether extrinsic motivators are good for running. The question is what extrinsic motivators are appropriate for young children and what role such rewards play in motivating them.

I have three simple rules here. First, anything and everything has to be earned. No one gets anything just for showing up and participating. Second, always keep the motivator fairly simple, at the very low end of what is a tangible reward. Third, when using extrinsic motivators, even the very low-end kind, recognize they tend to lose their value if overused.

It is amazing what joy kids get out of earning a Toe Token, colored shoelaces, a sweatband, or some zany prize from the local dollar store drawn from a grab bag. Forget the fancy medals or trophies. Keeping it simple and fun ensures that the kids don't focus more on the prize than the run, which would distract from their opportunity to discover the intrinsic values that running offers. The real danger is when such prizes become more important than running itself.
When this happens, children will stay motivated only as long as they continue to enjoy the perks. When the perks stop, or no longer seem important to the child, running will stop.

For some children, especially younger children, running around cones on a playground day after day may be a necessity. If so, this is all the more reason to incorporate games, challenges, obstacle courses, relays, and anything else the coach can think of to make every run unique. Otherwise, kids will tire quickly and soon be looking for something else to do. And who can blame them? Running in little circles day after day is boring.

For running programs where children are transported by parents, building in variety is easy: change not only what they run but also where they run. Allow the children to discover running on grass, on dirt trails and asphalt, on a synthetic track, and on flat courses and hilly ones. Let them experience running in the early morning or in the evening, and never let a little rain cause a run to be canceled. Mix it up; never repeat the run from yesterday or even one from the week before.

That said, repeating some runs is important in order to allow the kids to measure progress. Find a few runs that kids like most and repeat them every few weeks. Encourage the kids to run farther or faster or with less recovery between efforts than last time, and make a big deal out of it when they succeed. The variety is created by changing the challenge. For these runs, keep simple records to help the kids identify what they need to do to improve. To add importance to these runs, give them special names and speak of them in reference to the challenge they offer.

Although most everyone agrees that it is not OK for adults to push kids into running, it is certainly OK for adults to organize kids running programs: to make running fun, to offer small challenges that allow children to experience success, and to make running a social experience, something every child needs. It also is OK to let them see and experience that running goes far beyond the playground or neighborhood park, and to help them discover that running isn't a program with a starting and ending date nor something they are expected to do.

For parents who run, be sure not to let your running be the only running your children see. If you run on the roads, great, but find someone willing to work with your kids on shorter track distances. If you were a sprinter at one time, let a trail runner work with them for a while. If a road race in your area is looking for volunteers, consider having your kids staff an aid station along the course. Let them hand out water cups and, of course, clean up after. The kids will love it, plus it gives them a unique exposure to running, far better than just watching the action from the side of the road.

Take them to a track or cross country meet. If there is one for middle school runners, that works best. Why? Because the kids will be able to identify better with those runners than they will at a high school or college meet. Watching kids who are just a little older than they are triggers "that could be me" motivation.

Keeping Kids Running

When planning a running program for children, first think along the lines of letting running become a simple pattern in the life of the child, not some form of commitment or expectation. Allow running to come in and out of their life alongside all the other things that children experience in a similar way.
The kids come, run, have fun, renew friendships, and then go off and do something else for a while before running again. Consider organizing a series of runs tied to each season of the year, each designed to keep kids excited about running and each offering a new challenge, such as preparing them for a special team-only event, a destination run, a field trip somewhere, or entry in a kids run attached to a local road race. Go watch a cross country race and then run on the same course, or take advantage of the holidays by having the kids get ready to run as a group in a local parade.

Another possibility is to create a theme built around running at places of unique local interest or places that let the kids discover different natural habitats, or maybe a series of runs, each in a different neighborhood, to see where their teammates live. Create certificates of completion for each season, but even before completion, start building interest in the next season or series of runs. At a summer run, tell them about the nighttime flashlight run they can do in October or the evening run through holiday lights to see the lighting of the town Christmas tree in December. Let the kids know what is coming, pump them up a little, but then give them some time away from running so when they come back, they will be energized and ready to go.

For more information on children's running, visit the Center for Children's Running website at childrensrunning.org.

A lifelong runner, Douglas Finley has served as director of parks and recreation in Lansing, Mich., as adjunct faculty at Michigan State University, and as an administrator with the Michigan Department of Natural Resources. He has coached middle school and high school cross country and track, and now coaches the Eagles Running Team, a noncompetitive running program for children in the Lansing area.
issue 212 - October 1990

Why men hate women

'...And when the woman saw that the tree was good for food, and that it was a delight to the eyes, and that the tree was to be desired to make one wise, she took of the fruit thereof, and did eat; and she gave also unto her husband with her, and he did eat....' And we all know what happened then - or do we? Celia Kitzinger throws new light on an old story.

It almost certainly wasn't an apple in the Garden of Eden. The Genesis story refers simply to 'forbidden fruit,' and biblical scholars argue that a quince or a fig was more likely. But in mediaeval woodcuts, on stained-glass windows, and in classical Christian art, the apple symbolizes the first sin and Eve is portrayed as the first sinner. Eve the temptress, created by a male deity, formed from the rib of Adam, later to cause the fall of 'man' from grace and innocence - this patriarchal myth of woman underpins Western culture.

In casting woman and serpent as evildoers, Judaic writers overturned a powerful earlier tradition which associated both with wisdom and fertility. In the ancient goddess religions, snakes were the special companions of women, symbols of sexuality, linked through the shedding of their skins - which was seen as a form of rebirth - with women's creative and reproductive powers. Early Mediterranean statues and reliefs depict fecund goddesses with great nourishing breasts, generous hips, and bellies ripe with pregnancy, often with serpents entwined sensuously about their bodies.

Appalled by this pagan tradition, the authors of Genesis converted the sensual, fertile goddess into a shameful sinner. They covered her nakedness with an apron of fig leaves, and punished her sexuality with pain and oppression: 'In sorrow thou shalt bring forth children; and thy desire shall be to thy husband, and he shall rule over thee.'

The myth was used by early churchmen as a vehicle for expressing their horror and disgust at women's bodies: 'What is the difference whether it is in a wife or in a mother, it is still Eve the temptress that we must beware of in any other woman,'1 wrote St Augustine in the late fourth century. Projecting all guilt upon women, branding them as lustful allies of the Devil who wean men from God and lead them from the path of virtue, the Genesis story enshrines the myth of feminine evil as a justification for female oppression.

The history of Western men's attitudes to women is a history of woman-hatred, often with terrifying consequences. During the European witch hunts, thousands of women were tortured and murdered when woman-as-Eve was transformed into woman-as-witch. The infamous Malleus Maleficarum - a document produced by two Dominican monks who were appointed by Pope Innocent VIII in 1484 to investigate and stamp out witchcraft - states that 'all witchcraft comes from carnal lust, which is in women insatiable.'2 The charges levelled against witches included every misogynistic sexual fantasy harboured by the monks and priests who officiated over the witch hunts: witches copulated with the devil, devoured new-born babies, and rendered men impotent. A whole chapter of the Malleus is entitled 'How, as it were, they Deprive Man of his Virile Member.'2 Witches were also accused of using herbs to ease the pain of labour at a time when the Church held that pain in childbirth was the Lord's punishment for Eve's original sin.
The Inquisitors concluded: 'Blessed be the Most High who has so far preserved the Male sex from so great a crime.'2

This virulent loathing of women's bodies continued during the late nineteenth and early twentieth centuries in the West: any expression of sexual desire by women was considered filthy, corrupt, and sinful, and marked them as whores, the daughters of Eve. Upper- and middle-class Victorian men relied on working-class female prostitutes to satisfy their sexual appetites, while demanding the purity of their wives and inflicting upon them the impossibly sentimentalized and saintly ideal of the Virgin Mother. Middle-class women and girls who expressed sexual feelings - with men or through masturbation - were often diagnosed as 'morally insane' and imprisoned in mental asylums. Others were 'cured' through sexual surgery, including clitoridectomy or 'female circumcision,' which doctors first practised on indigent American women and black female slaves. These same physicians continued a long tradition of viewing menstruation as dirty and dangerous, 'the curse' inflicted upon women because of Eve's sin. The new professions of gynaecology and psychology denounced women's bodies and minds as seriously defective, and used 'scientific' discoveries to justify excluding women from higher education and from political life.

The first feminists struggled against ideas like these - often with remarkable humour. Weary of quotations from the Bible being used to lend God's authority to the subjection of women, the leading US feminist Elizabeth Cady Stanton published The Woman's Bible in 1895,3 a caustic and entertaining commentary on the scriptural passages most favoured by misogynists. As a comment on Eve's lofty nature she notes that the serpent did not try to tempt her from the path of duty by 'brilliant jewels, rich dresses, worldly luxuries or pleasures, but with the promise of knowledge... and he found in the woman that intense thirst for knowledge that the simple pleasures of picking flowers and talking with Adam did not satisfy.' Compared with Adam, she says, Eve appears to great advantage throughout the entire drama.

Few contemporary feminists would consider the Bible sufficiently central to our oppression to be worthy of this sort of attack. Yet the underlying woman-hating motif of the Genesis story is reiterated throughout Western culture, permeating language, law, medicine, psychology, art, and literature. At last count the English language had 220 words (almost all derogatory) for a sexually promiscuous female and only 20 for a sexually promiscuous male (most of these complimentary). Words associated with women are sexualized, so that apparently equivalent terms acquire very different meanings. A 'master' exercises authority whereas a 'mistress' is the so-called kept woman. The term 'sir' retains respect while 'madam' refers to someone who keeps a brothel. A 'lord of all he surveys' is quite different from a 'lady of the streets,' and 'he's a professional' is generally understood differently from 'she's a professional.' Even the word 'woman' is used as a term of abuse. Likewise, the words available to describe female genitals - 'cunt,' 'slit,' 'crack,' 'slot' - reflect centuries of sadistic male use.

Too often even modern obstetric medicine treats women's genitals with brutality. A pregnant woman faces cold metal instruments being shoved carelessly into her by clumsy doctors.
She endures unnecessary episiotomies - the cutting of skin between anus and vagina - to speed delivery and is then sewn up again 'tight as a virgin for your husband.'

This hostility towards the female body is expressed through pornography - most sickeningly in 'snuff' movies, in which the actress is literally murdered on screen. In the pornographic portrayal of the sexual act women are overpowered and silenced, strapped helpless to tables, bound spreadeagled on beds, humiliated, degraded, gagged, handcuffed, beaten, assaulted with gun, knife, whip, penis. The pornography of pregnancy - in which pregnant women are depicted as whores, huge bellies fetishized, cunts displayed for the camera - is the ultimate 'triumph of the phallus over the death-dealing vagina.'4

Men frequently fear women's sexual power and feel justified in blaming them for acts of male violence. A 12-year-old girl who was raped after visiting her attacker's bedsit for coffee 'behaved foolishly,' commented one British judge in 1988. A few years earlier the judge observed of a 17-year-old girl who was raped by a motorist with whom she hitched a lift after being stranded following a party that 'the victim was guilty of a great deal of contributory negligence.'1 Another rapist was sentenced to only three months in prison because his five-year-old victim was, said the judge, 'an unusually sexually promiscuous young lady.' He added, 'I do not put blame on the child exactly, but I do believe she was the aggressor.'5 Female sexuality causes men to lose self-control so that they cease to be responsible for their actions - or so runs the accepted wisdom. And in the US one woman continues to be raped every three minutes, one wife battered every 18 seconds.

Roots of hatred

Why do men express such hatred of women? Psychoanalysts suggest that men's gender identity is very fragile because, within typical child-rearing practices, girls can identify with their primary caretaker while boys have to separate themselves from their mother in order to achieve and assert their masculinity. 'The whole process of becoming masculine is at risk in the little boy from the date of his birth on; his still-to-be-created masculinity is endangered by the primary, profound, primeval oneness with the mother.'6

It is only by setting woman apart as Other, by resisting intimacy with her, by treating her with contempt and aggression, that men assert their own independent and fragile masculinity. And because men have distanced themselves from 'the weaker sex' over the ages, setting themselves up as superior, it must be unbearably humiliating to need and desire women so much. Sex with women can re-evoke in men 'the unqualified, boundless, helpless passion of infancy. If he lets her, she can shatter his adult sense of power and control; she can bring out the soft, wild, naked baby in him.'7 In heterosexual intercourse men risk discovering in women an unsettling power which contradicts and undermines their own more obvious social, political, and physical power. No wonder male sexual desire is so desperately tormented and full of conflict.

Because women know men to be vulnerable and fragile, they are often tempted to excuse them as 'just little boys' who need to over-compensate for their sense of inadequacy or 'womb-envy' with acts of spiteful misogyny. Female nurturing is presented as the solution to male violence - as though women haven't been doing that for centuries. Germaine Greer once commented that 'women have very little idea of how much men hate them.'
For it is painful to confront the extent of men's hatred. But only when both men and women acknowledge its existence, its extent and its pervasiveness, can we act to end it. Celia Kitzinger teaches psychology in London. 1 Misogynies, Joan Smith (Faber and Faber 1989). 2 For Her Own Good, Barbara Ehrenreich and Deirdre English (Anchor and Doubleday 1978) and Beyond Power: Women, Men and Morals, Marilyn French (Jonathan Cape 1985). 3 The Woman's Bible Part One, Elizabeth Cady Stanton (1895, reprinted Polygon, Edinburgh, 1985). 4 Pornography: Men Possessing Women, Andrea Dworkin (Women's Press, London, 1981). 5 Spare Rib, Issue 118, 1982. 6 The Sexuality of Men, ed. Andy Metcalf and Martin Humphries (Pluto Press, London). 7 The Mermaid and the Minotaur, Dorothy Dinnerstein (Harper and Row 1977).
The Cathedral-Basilica of the Immaculate Conception in Mobile has been the focal point of Catholic worship, community, education, and evangelization since its completion in 1850. The name honors Mary, the mother of Jesus, and the structure is listed on the National Register of Historic Places. Mobile has been the center of Alabama's Catholic life since the first decades of the colonial era, when French Catholics first settled the region. Several churches preceded construction of the present cathedral, including the 1793 structure on the corner of present-day Royal and Conti Streets. When this Spanish-era church burned down in 1827, it was replaced by a wooden church and then a small brick church on the south side of Conti Street. In 1825, the Vicariate Apostolic of Alabama and the Floridas (the term used to describe the ecclesiastical jurisdiction of the region) was established, with Father Michael Portier, a French-born priest serving in New Orleans, being installed to head the vast territory. When the Diocese of Mobile was established on May 15, 1829, Portier was named the founding bishop, with the small brick church as his initial cathedral (a church headed by a bishop). The original diocese included all of Alabama and Florida. Bishop Portier recognized the need for a larger, more solid structure that would better provide for the area's Catholics, serve as the diocesan center of worship and mission outreach, and be a source of pride for Catholics in their faith. Local Catholics, with support from non-Catholics, raised funds for the new edifice, and additional financial help was secured from the Society for the Propagation of the Faith branch in France. On November 29, 1835, the cornerstone for the cathedral was laid and blessed in the square bounded by Claiborne, Conti, Franklin and Dauphin Streets. Former seminarian Claude Beroujon was hired as architect for the new cathedral, having previously designed the first building constructed at Spring Hill College (1831) and the Visitation Convent (1833; now the Visitation Monastery). The foundations of the 162-by-90-foot structure were in place by 1837, but the economic crisis known as the Panic of 1837 and a yellow fever epidemic in 1839 delayed progress. By the mid-1840s, the economy had improved and construction resumed, supported in part by generous contributions from the people of Mobile. By late 1845, the brickwork was finished, but financial concerns delayed the roof's completion. By 1850, Mobile's Catholic population was estimated at about 5,000. In addition to the cathedral, Catholics worshiped at St. Francis de Sales Chapel at Visitation Convent, St. Vincent de Paul Parish, and St. John Nepomucene Chapel served by the Jesuits at Spring Hill College. The impressive structure, dedicated as the Cathedral of the Immaculate Conception on December 8, 1850, was embraced as a source of civic pride by many of the city's citizens, Catholic and non-Catholic alike. A rector, appointed by the bishop, serves as the immediate administrator, usually assisted by one or two other priests who also served outlying missions during the nineteenth and early twentieth centuries. The cathedral structure is of Roman design. The brick foundation of the church's main body has walls ten feet thick and reverse brick arches beneath all exterior walls and beneath the double row of columns in the interior. In the church's main body, two rows of Doric columns support the ceiling and divide the interior space into three naves with barrel-vaulted ceilings. 
The two side ceiling vaults presently have paintings of North American saints. The center vault ceiling bears gold-leaf fleurs-de-lis and shamrocks symbolizing the French and Irish cultural heritage of the diocese. The floor of the main aisle is Italian marble bearing water-engraved and brass-inlaid coats-of-arms of all of Mobile's bishops. The sanctuary originally included an altar of Alabama marble, but it was replaced with an Italian marble altar shortly after 1927. The apse is a hemispherical semi-dome. In the rear of the church, a choir loft houses the Wicks organ with more than 3,000 pipes, installed in 1957 and restored in 2000. The exterior front façade has three sets of doors, each of which opens onto one of the three aisles, and a 20-foot-square tower on either side with a 60-foot portico between. The towers and the portico, with its ten fluted Doric columns, were added as funds became available. The portico was added during the tenure of Bishop John Quinlan (1859-1883), who is buried beneath it. Sometime after 1890, during the tenure of Bishop Jeremiah O'Sullivan (1885-1896), the towers were completed. Bishop Edward P. Allen (1897-1926) completed the installation of the stained glass windows, designed and built by the Franz Mayer Company of Munich, Germany; the last window was installed in 1910. The 12 main windows portray New Testament themes highlighting Mary's role in the life of Jesus and the life of the Church. The cathedral has undergone periodic enhancements and repairs. In 1865, the explosion of an ammunition magazine in the harbor area of Mobile, 14 blocks from the cathedral, blew out all the plain glass windows on the north wall (before the stained glass had been installed), and they had to be repaired. In the 1870s, a cast-iron fence was added. The bell, produced by the McShane Bell Foundry in Baltimore, Maryland, was consecrated on December 27, 1876. The bell, originally in the church nave, was later moved to the north tower. On notable occasions, Catholics and non-Catholics alike gather at the cathedral, as they did on D-Day in 1944, during World War II. On March 19, 1954, a large fire caused the floor in the sanctuary to collapse, and the basement was flooded during efforts to extinguish the fire. There was also extensive smoke damage, and a major restoration was required. The renovated sanctuary included a bronze canopy supported by four marble columns over the altar. In 1962, Pope John XXIII designated the Cathedral of the Immaculate Conception as a Minor Basilica. Such designations are awarded to selected Catholic churches because of their antiquity or their historical, cultural, or artistic importance and significance as a place of worship. After receiving this designation, the cathedral was permitted to display two symbols of the status: an umbrellino, a small silk umbrella mounted on a wooden frame and covered with the papal colors of alternating red and yellow, and a small bell known as a tintinnabulum, a replica of the bell used in Rome's major basilicas to announce the pope's arrival. In 1964, during Bishop Thomas J. Toolen's tenure (1927-1969), the crypt chapel below the main church was completed; the tombs of all the bishops except John Quinlan (under the Portico) and John May (St. Louis) are located in the chapel. The sanctuary was again modified in 1971 in accordance with the liturgical reforms of Vatican Council II, which promoted more lay participation and replaced Latin with the vernacular for most services.
Since its early days, the Diocese of Mobile has owned a large section of the block directly opposite the cathedral's entrance. In the late 1970s, Bishop John May (1969-1980) had the buildings that were there demolished in order to build a park. The city of Mobile partnered with the diocese to develop the entire block as a park, now owned by the City of Mobile and known as Cathedral Square. It is a venue for many public events, including special archdiocesan events several times during the year, and a popular place for Mobilians seeking quiet. In 1980, Mobile became an Archdiocese with its Province encompassing the states of Alabama and Mississippi, including the Dioceses of Birmingham, Jackson, and Biloxi, with the Cathedral-Basilica of the Immaculate Conception as the archbishop's church. Oscar H. Lipscomb, a Mobile native, was named the first archbishop (1980-2008). Archbishop Thomas J. Rodi, formerly the Bishop of Biloxi, became Mobile's archbishop in 2008, with the Cathedral-Basilica of the Immaculate Conception as his principal church. On December 8, 2004, the Cathedral-Basilica of the Immaculate Conception celebrated the tercentennial of the establishment of the Catholic Church in Mobile. This day also marked the completion of the three-year Cathedral Restoration Project, which included exterior stabilization and restoration, roof repair, a restored interior ceiling and side aisles, and a new Carrara marble floor. The sacramental books of the present cathedral and its predecessors bear witness to the great ethnic diversity of the parish community, including the Native Americans, European colonists, and Africans and African Americans of the colonial period, the large Irish and German immigrant population of the mid-nineteenth century, service men and women and industrial workers from throughout the United States during World War II and its aftermath, and the Hispanic and Asian immigrants of more recent times. The parish's influence has always reached beyond the cathedral itself, including early schools for both boys and girls. These institutions closed as Mobile's population moved out of the downtown area and the diocese opened schools that were closer to the new centers of population. The cathedral parish has also supported orphanages, associations to help the poor, the sick, the elderly, and the needy, and efforts to provide financial assistance in times of natural disasters and special need. The Cathedral-Basilica of the Immaculate Conception is a popular church for weddings of Catholics from all over Mobile. It offers a concert venue for Mobile's Musica Sacra Choir and Orchestra and Mobile's Singing Children. The cathedral is an active participant in historic Mobile events, with self-guided tours or docent-led group tours by request and in conjunction with the nearby home of the first bishop of Mobile, the Portier House. The main function of the Cathedral is, as it has been since its dedication, to provide for the spiritual needs of the downtown Mobile area. It offers daily Mass and the opportunity for confession and is open to all during the day for private prayer.
I have been hearing or reading about it so many times that it's beginning to get on my nerves. No doubt, I love Bukit Timah (the dipterocarps, the colugos, pangolins!!!), and I think being so highly urbanised, Singapore still has a rather high plant diversity and that is really something. However, I think it is really embarrassing that so many publications and websites, including those by NParks and unfortunately, even NUS, gave the wrong information that Bukit Timah has more tree or plant species than the whole of North America. In some websites and publications, it was said that a 2ha plot in Bukit Timah has more than 350 tree species, and that this is more than all the tree species ever recorded in the whole of North America. However, a search on the Internet shows that according to Elbert L. Little Jr.'s Checklist of United States Trees (Native and Naturalized), published in 1979 as Agricultural Handbook 541 by the United States Department of Agriculture, the USA alone has 747 species of native trees (including Alaska but excluding Hawaii)! That means North America certainly has a lot more tree species than the 2ha plot. Some other websites claimed that Bukit Timah as a whole (not just the 2ha plot) has more tree species than the whole of North America. According to NParks, Bukit Timah has more than 840 species of flowering plants, including the various shrubs and herbs. Even if we were to outrageously assume that there are only about 100 shrubs and herbs, and that the rest are all trees, the USA alone, with its 747 native tree species, would still have at least as many as, if not more than, Bukit Timah. Turning to the total number of plant species: according to some websites, Dr David Bellamy, a renowned conservationist, once pointed out that the number of plant species growing in the Bukit Timah Nature Reserve is more than that in the whole of North America. We already know that Bukit Timah has 840 species of flowering plants (which excludes the gymnosperms, ferns, etc.), but finding the total number of plant species in the USA proved to be a challenge. I managed to find the USDA website, and just looking at the lists of threatened plants in Florida, North Carolina and California, there are already 1,085 species! Obviously if you include the non-threatened ones, the list will be a lot longer. Update: Thanks to Pat, who highlighted that discounting Hawaii, continental USA (including Alaska) has ~17,000 species of vascular plants. See comments below for details. So, Bukit Timah certainly does not have more plant species than the whole of North America. In fact, what we have is a lot less than what North America has. However, what we can say is that we certainly have more tree species than the United Kingdom, or Canada, and possibly many other temperate countries. And if we conserve well, future Singaporeans may still be able to say that proudly in say a few hundred years' time. Hopefully this clears things up and the embarrassingly wrong information will not be passed on to more people. Thursday, December 23, 2010 The tide at Changi was not really very low today, but I was glad that we still managed to see a number of interesting things. Found this Pink Sand Dollar (Peronella lesueuri) just beneath the sand among some seagrass. There were several of the usual plain Sand Dollars (Arachnoides placenta) too. The Salmacis Sea Urchin (Salmacis sp.)
did not appear to be in season, as I only found one of them, and a few tests. These interesting sea urchins carry all kinds of marine debris to help them camouflage. A little Pencil Sea Urchin (Prionocidaris sp.) was spotted among the seagrass. The thick spines resemble pencils sticking out of a pencil holder, hence the common name. We found only one little orange Cake Sea Star (Anthenea aspera). Guess the tide was not low enough for us to find bigger ones. As it turned a little darker, the Sand Stars (Astropecten sp.) started emerging from the sand. There were lots of Pink Thorny Sea Cucumbers (Colochirus quadrangularis) and a few Pink Warty Sea Cucumbers (Cercodemas anceps). The latter has shorter spines and yellow markings. This poor yet unidentified sea cucumber was washed ashore by the strong waves. I was pleasantly surprised to find this rather big Haddon's Carpet Anemone (Stichodactyla haddoni) so near the upper shore. These big sea anemones are unfortunately collected by poachers for the aquarium trade. Not sure what this Moon Snail (Polinices didyma) was doing getting all bloated and kind of twisted. Could it be feeding? This animal feeds on smaller snails and clams by holding them in its huge foot, secreting an acid to soften the shell, and using its radula (something like a tongue) to drill a hole through the shell to reach the meat inside. As per our other trips to this shore, several Orange Striped Hermit Crabs (Clibanarius infraspinatus) were spotted. I also saw this unknown hermit crab in a pretty murex shell. Unlike true crabs, which have a hard exoskeleton protecting the whole body, hermit crabs have a soft abdomen, and hence need to hide in the shells of dead snails for protection. The tide started rising rather quickly as we were leaving, and we saw two Mantis Shrimps (Harpiosquilla sp.) get washed ashore. These are spearers which hunt for small fishes and other small animals with their powerful, spiny claws. Anyway, nice to be back in Singapore exploring our shores! :) Monday, December 06, 2010 Initially, I was hoping to reach Ubin earlier and explore a bit, but the queue for the bumboat was surprisingly long, and I ended up arriving just half an hour before the meeting time, and hence only had time to look around the volunteers' hub. As usual, the Kemunting (Rhodomyrtus tomentosa) planted near the volunteers' hub was blooming with pretty pink flowers. This shrub is rather uncommon in Singapore, but interestingly is an invasive species in some countries where it was introduced. Here are the fruits. After the award presentation and lunch, we moved on to Chek Jawa for a walk. At the entrance, we were greeted by a rather tame Wild Boar (Sus scrofa). Guess the visitors were really feeding it too much, and it was obviously sniffing around us for food! I managed to take a look at the Pemphis (Pemphis acidula). There were lots of fruits. Most of the flowers looked like they were withering though. I also took a look at the Lenggadai (Bruguiera parviflora) near the mangrove boardwalk, which I noticed was flowering when I was guiding here two weeks back, but didn't have the time to take photos. There was even a crab spider which climbed onto one of the flowers. And I also spotted an interesting-looking caterpillar. Also near the mangrove boardwalk, I noticed a Dungun (Heritiera littoralis) fruiting! Wow! My first time having decent photos of the fruits! Previously when I saw them fruiting, they were always too high up, or I did not have my camera with me.
Some of my friends call it the ultraman fruit, as the fruit has a ridge-like structure running across the middle, making it appear like the head of the TV character Ultraman. The Sea Almond (Terminalia catappa) trees appeared to have just shed their leaves and were growing new ones. When we reached the intertidal area, many volunteers were already down on the sandflat! The top find of the day must be this Honeycomb Stingray (Himantura uarnak). Adults are reported to be able to grow to more than 2m long! The one we found was probably just about 1m long. There were lots of Noble Volutes (Cymbiola nobilis) laying eggs that day! I saw at least 15 of them, and at one spot, 3 of them within a 1m radius! There were lots of Glassy Bubble Shells (Haminoea sp.) on the soft sand. Like other headshield slugs (Order Cephalaspidea), they have a well-developed headshield, a broadening at the head used to plow beneath the sand surface which helps prevent sand from entering the mantle cavity. We also found another headshield slug, a Philinopsis sp. Members of this genus generally feed on other headshield slugs, such as the Glassy Bubble Shells! There were several Sandfish Sea Cucumbers (Holothuria scabra). These sea cucumbers are the same species that is usually served in restaurants, but they must be treated to remove the toxins in them before they can be eaten. There were also several of this smooth and slimy sea cucumber which I am not sure of the exact ID. Our hunter-seekers also found us a few Sand-sifting Sea Stars (Archaster typicus)! They feed on tiny organic particles among the sand, and digestion is done externally. And to do that, they have to push out their stomachs through their mouths located on their undersides, and lay the stomachs over the sand to digest the edible bits. The Sand Star (Astropecten sp.), on the other hand, swallows its food and digests it internally. It feeds on small snails and clams. We saw a Pencil Sea Urchin (Prionocidaris sp.) too! Unlike those of most other sea urchins, the spines of this sea urchin are thick like pencils (with some imagination), hence giving it its common name. The animal above is a Pygmy Squid (Idiosepius sp.), and is only about 1cm long! Several juvenile Mangrove Horseshoe Crabs (Carcinoscorpius rotundicauda) were spotted too, burrowing just beneath the sand surface, probably searching for little worms to feed on? All too soon, the tide was rising and we had to end the walk. We were really lucky that the weather was fine throughout the entire event. Thanks to NParks for organising this! :) Sunday, December 05, 2010 Is it finally opening? The above photo was taken earlier today at about 7.30pm. It appeared that it was finally opening up, as we could see more of the maroon spathe! For the past few weeks, I have been regularly visiting the Singapore Botanic Gardens, hoping to catch the matured inflorescence of the Titan Arum (Amorphophallus titanum) - the biggest unbranched inflorescence in the world! It was initially estimated to mature between 17 and 20 Nov 2010, but it appeared that the estimate was way off. As such, I ended up with lots of photos of the immature inflorescence instead during my various visits. Here's how it looked during my first visit on 17 Nov 2010. This one was taken on 19 Nov 2010. One of the bracts had withered. During my visit on 23 Nov 2010, all the bracts covering the inflorescence had withered. I read somewhere that it should start maturing soon after the last bract has fallen off, and so I visited it again on 24 Nov 2010.
However, the spathe remained closed. For the past week, I decided to just call the garden's hotline to check if it had opened up, but there was no good news. It was only earlier today that Angie told me that she had visited the plant in the morning, and it appeared to be opening up. We visited it around 7 plus in the evening, and while it's still a long way from opening up fully, it had certainly made a lot of progress! Will be visiting it again tomorrow morning to check if it has opened up fully. During my trips to the Singapore Botanic Gardens over the past few weeks, I saw quite a lot of other interesting stuff. At one corner, a Pink Mempat (Cratoxylum formosum) was also blooming spectacularly! From a distance, it looked like cherry blossom! Here's a look at a flowering branch. When I was there on 19 Nov 2010, the weather was great and I managed to get quite a few nice close-ups. Lots of bees were attracted to the flowers. Some of the non-flowering plants were pretty even without flowers, like the Spikemoss (Selaginella sp.). So were the mosses (Division Bryophyta). Apart from the plants, there were a few "sure-can-see" animals in the Singapore Botanic Gardens too! Among them were the Lesser Whistling Ducks (Dendrocygna javanica). Facing different directions... And the Black Swans (Cygnus atratus). I was really lucky that they had a few young cygnets with them. Once in a while, the parents would tuck their heads into the water and pick up some freshwater plants. Black Swans are actually native to Australia. I remember seeing wild ones when I went to Western Australia a few years ago. I also saw an Oriental Pied Hornbill (Anthracoceros albirostris) near the entrance on one of the days. And of course, there were the super cute Spotted Wood-owls (Strix seloputo) that I blogged about earlier. So I guess even if there was no blooming Titan Arum, there was actually still plenty to see! :)
Sir Lancelot The Great Knight Both the English and French cycles of Arthurian Legend are dominated by three inter-related themes: • The fellowship of the knights of the Round Table • The quests for the Holy Grail (the Sangreal) • The Arthur/Guinevere/Lancelot love-triangle Throughout, Lancelot is arguably as important a figure as Arthur himself. In French versions of the legend more attention is focused on Sir Lancelot than on King Arthur, and the French – compared to their English counterparts – appeared to be more interested in the balance between the spiritual dimension and the earthly. The character of Lancelot fitted the bill more readily than did the King, but ultimately, for all his 'noble chevalry', Lancelot remains a figure of tragic failure. In summary: Sir Lancelot is regarded as the first and greatest of King Arthur's legendary knights. Son of King Ban of Benoic (anglicized as Benwick) and Queen Elaine, he is known as Lancelot of the Lake (or Lancelot du Lac) because he was raised by Vivien, the Lady of the Lake. His knightly adventures include the rescue of Queen Guinevere from the evil Méléagant, a failed quest for the Holy Grail, and a further rescue of Guinevere after she is condemned to be burned at the stake for adultery (with him). Lancelot is also loved by Elaine of Astolat, who dies of grief because her love is unrequited. Another Elaine (Elaine of Corbenic, the daughter of King Pelles) tricks him – apparently he thought she was Guinevere – into sleeping with her (and begetting Galahad). His long relationship with the real Guinevere ultimately brings about the destruction of King Arthur's realm. Le Chevalier de la Charrette Sir Lancelot first appears in Arthurian legend in 'Le Chevalier de la Charrette', one of a set of five Arthurian verse romances written by the French poet Chrétien de Troyes (and completed by Godefroy de Lagny), c.1180. Lancelot is characterised alongside other knights, notably Gawain, Kay, and Méléagant (or Meliagaunce) – a consistent rival and parallel anti-hero to Lancelot – and is already heavily involved in his legendary romance with Guinevere, King Arthur's queen. The dual role of (i) superb knight-at-arms and (ii) enduring, courtly lover defines Lancelot's legendary gallantry. The incongruous notion of the super-hero resorting to a 'charrette' (cart) arose when Guinevere was abducted by Méléagant (the son of King Bagdemagus). Lancelot – hesitatingly at first, to Guinevere's later disgust – pursued him in a cart driven by a dwarf. The episode culminates in Lancelot's 'crossing of the Sword Bridge': a bridge consisting from end to end of a sharply honed blade. Ultimately it is Lancelot's character – the epitome of constancy and obedience to love – which is the key to his defeat of Méléagant and the self-love, treachery, and cruelty which he personified. During the ensuing combat between Lancelot and Méléagant (which Lancelot came close to losing because he could not stop gazing upon Guinevere – he collected himself just in time) King Bagdemagus successfully pleaded with Guinevere to stop the fight so his son's life could be spared. Lancelot was forced to defend her honour a second time, when Méléagant later accused her of an affair with Kay, and once again Bagdemagus successfully pleaded for his son. Lancelot finally slew Méléagant in combat at King Arthur's court, and his literary reputation as chivalric hero and arch-exemplar of 'saver-of-damsels-from-distress' was sealed.
The origin of the affair between Lancelot and Guinevere Chrétien de Troyes composed 'Le Chevalier de la Charrette' at the request of the Countess Marie de Champagne, daughter of Louis VII of France and of Eleanor of Aquitaine (who later became the wife of Henry II of England). It was apparently written to foster the notion of the 'Courts of Love' as the principal settings for (adulterous) social relations rather than the spontaneous passion typified by the story of Tristan and Iseult. Like other courtly ladies of the day, Guinevere required a lover, and the literary Lancelot – a convenient and suitable hero – was pressed into service. Lancelot in the Vulgate Cycle 'Lancelot en Prose' – The Vulgate Cycle – is a comprehensive trilogy ('Lancelot Propre', 'La Queste del Saint Graal', and 'La Mort de Roi Artu'), believed to have been compiled by Cistercian monks between 1215 and 1235, which marks the transition between verse and prose versions of the Arthurian legend. The authors contrasted earthly chivalry with the spiritual chivalry idealized in the Quest for the Sangreal. Sir Lancelot is 'the best knight in the world' but cannot succeed in that quest, which is eventually achieved by his son, the virgin knight Sir Galahad. The blame for the destruction of the Round Table is placed firmly on Lancelot and his affair with Guinevere – which started with a kiss and is supposedly the story which, in Dante's 'Inferno', Francesca tells Dante that she and her lover Paolo were reading when they exchanged their first kiss: "That day we read no further". In 'Lancelot en Prose' the affair between Lancelot and Guinevere began through a series of stories culminating in his knighting at Arthur's court and his falling secretly in love with the queen. Guinevere knows of his love but the affair is not consummated until Galehaut, King of the Long Isles and Lord of Surluse, makes war on Arthur – who would have lost his kingdom but for the feats of arms of an unknown knight in black armour who comes to Arthur's aid at the last moment. Galehaut is so impressed by the Black Knight that he befriends him and at the knight's request agrees to make peace with Arthur. Because the knight is often red-eyed from sadness, Galehaut discovers the secret of his love for Arthur's queen, and out of friendship for the (still un-named) knight he arranges a meeting between him and Guinevere. According to a translation by Carleton W. Carroll, Galehaut says "My lady, I ask that you give (the knight) your love, and that you take him as your knight forevermore, and become his loyal lady for all the days of your life, and you will have made him richer than if you had given him the whole world." The Queen replied, "In that case, I grant that he should be entirely mine and I entirely his…" and at Galehaut's behest she gave Lancelot a prolonged kiss. Galehaut then asked her for the Black Knight's companionship. "Indeed," she replied, "if you didn't have that, then you would have profited little by the great sacrifice you made for him." Then she took the knight by the right hand and said, "Galehaut, I give you this knight forevermore, except for what I have previously had of him. And you," she said to the knight, "give your solemn word on this." And the knight did so.
"Now do you know," she said to Galehaut, "whom I have given you?" "My lady, I do not." "I have given you Lancelot of the Lake, the son of King Ban of Benoic." Guinevere had finally revealed Lancelot's identity to Galehaut, whose joy was "the greatest he had ever known", for he had heard many rumours that this was Lancelot of the Lake and that he was the finest knight in the world, though landless, and he knew that King Ban had been a very noble man. In the Vulgate Cycle's 'La Mort de Roi Artu' Arthur's army lays siege to Lancelot in his castle Joyous Garde, inspired by Gawain's desire for revenge for the slaying of his brothers in Lancelot's rescue of Guinevere. The subsequent combat between Lancelot and Gawain is one of the most dramatic in Arthurian Legend and signifies pure blood revenge rather than the notion of the romantic duel. In contrast, Lancelot's reluctance to dispatch his old friend remains firmly in the chivalric tradition. The Vulgate Cycle was an important source for Sir Thomas Malory in his Le Morte d'Arthur (1485), which he referred to as "the French book". Lancelot in Sir Thomas Malory's Le Morte d'Arthur Malory frees Lancelot (which he spells as "Launcelot") from much of the spiritual passion seen in the Vulgate Cycle – instead he emphasizes Lancelot's relative success, not his ultimate failure, and the passion between the two erstwhile lovers is restrained. Le Morte d'Arthur was published by William Caxton as 21 books; Sir Lancelot first appears, briefly, in Book II, when the wizard Merlin prophesies that "Here in this place (editor's note: a church near Camelot) shall be the greatest battle between two knights that there ever was or ever shall be, and yet the truest lovers, neither shall slay the other" and (editor's note: written by Merlin on the pommel of the dead Balin's sword) "No man shall handle this sword except the best knight in the world, and that will be Sir Launcelot or else Galahad his son, and with it Launcelot shall slay the man he loved best in the world, and that will be Sir Gawain." Lancelot is gradually aggrandised by Malory up to 'The Noble Tale of Sir Launcelot du Lake' (Book VI) in which he declares his love for Guinevere (spelt by Malory as "Gwenyvere"). Thereafter (very briefly): he dubs Gareth knight (Book VII – 'The Tale of Sir Gareth of Orkney'). In 'The Tale of Sir Tristram de Liones' (Book VIII) Lancelot suffers calumny from King Mark because of his friendship with Tristram, and rescues Gawain. He befriends La Cote Male Taile, rescues him, and establishes him Lord of Pendragon (Book IX), then jousts with Tristram and Palomides. Later (Book XI), Lancelot is tricked and drugged into sleeping with Elaine (de Corbenic), thinking her Guinevere, and begets Galahad. Guinevere is angry, but he finds himself with Elaine again; she is sent away and he goes mad. A now insane Lancelot (Book XII) attacks a knight and scares his lady in the pavilion, but the knight, Bliant, takes the sleeping Lancelot to his castle to cure him. Healed by the Sangreal, Lancelot returns with Elaine to her father's castle. Later he is persuaded by Ector to return to Arthur's court. Lancelot dubs his son Galahad knight (Book XIII). The knights go on the quest of the Sangreal, but Lancelot confesses his sin. He has a vision (Book XV) in which he joins the black (sinful) knights against the white (pure) knights. He falls back into his old adulterous ways with Guinevere (Book XVIII), who is accused of poisoning a knight at a feast.
Lancelot returns to defend her, wearing the sleeve of Elaine of Astolat (much to Guinevere's annoyance). He is wounded, and Elaine dies for love of him. Meliagaunt (Méléagant) abducts Guinevere (Book XIX); Lancelot gives succour, lies with her, and is trapped. He cures King Urre. Then he and Guinevere are discovered "in flagrante" (Book XX), after which he slays a number of knights (including Agravain, who betrayed him). Lancelot and friends rescue the queen from the stake, in the course of which he slays Gareth; Gawain and his knights then make war on Lancelot. Finally (Book XXI) he and Guinevere part for the very last time, then he goes to Glastonbury and becomes a monk. The Lancelot who occupies Malory's stage is "the fyrste knyght that the Frey[n]sh booke makyth me[n]cion of aftir kynge Arthure com from Rome." He is no longer the romantic hero characterised in foregoing French versions of Arthurian Legend – his excellence springs from his fighting prowess and noble deeds. Far from needing to prove himself to a Guinevere whom he already loves, he reveres her above all others only in response to her admiration and honouring of his matchless proficiency as a knight. Throughout most of Malory's tale Lancelot consistently denies that he and she are lovers: not exactly the stuff of high romance. Tournaments, battles, and adventures remain at the forefront of Lancelot's priorities, necessitating a single state rather than the married one which would be bound to thwart the pursuit of an adventurous knighthood. Through the persona of Lancelot (and indeed through the foundation and eventual decline of the noble fellowship of the Round Table, not to mention the metaphorical passing of the seasons) Malory contrasts the prized medieval virtues of constancy and steadfastness with the inevitable rise and fall of the stable order of things. Lancelot, in particular, appears to symbolise on the one hand – in his innocence – the achievement of a certain kind of order, and on the other – in his ultimate sufferings – the tragic real-world truth that all good things come to an end.
Since the 18th century Splendour or Convenience? The Somerset House that we see today stands on the site of an earlier Tudor palace that was demolished in 1775. The demise of the old Somerset House coincided with a move to house many of the government's offices and the principal learned societies under one roof, and led to the site being chosen for a new building to solve this pressing problem. This approach was a radical departure from the established practice of using separate buildings for different departments of state and was seen as a means to promote greater efficiency among the government bureaucracies. Sir William Chambers, one of the leading architects of the day and Comptroller in the Office of Works, might have expected to be first choice for the Somerset House commission when it was awarded in 1774. Instead, he was overlooked in favour of William Robinson, Secretary of the Board of Works, but a man who had recent experience of designing major government buildings. There was much argument in Parliament as to whether the designs for the new building should favour splendour or economy. Joseph Baretti, a close friend of Chambers, described Robinson's initial plans as being in a plain manner, "rather with a view to convenience than ornament". However, following a parliamentary debate, Robinson was instructed to revise his concept and produce "an ornament to the Metropolis and a monument of the taste and elegance of His Majesty's Reign". Meanwhile, Chambers expressed his displeasure at the choice of architect for such a prestigious project when he wrote that it was "strange that such an undertaking should be trusted to a Clerk in our office... while the King has six architects in his service ready and able to obey his commands". The matter of Robinson's ability to produce an appropriate design was unexpectedly resolved by his sudden death in 1775, and the appointment of Chambers as his replacement. Chambers, who had lamented the destruction of the old Somerset House and been critical of Robinson's designs, was appointed to design and supervise the construction of 'a great public building... an object of national splendour'. It had to accommodate the three principal learned societies - the Royal Academy of Arts, the Royal Society, and the Society of Antiquaries - as well as various government offices. In particular, he had to provide the Navy Board with quarters that would reflect the rising importance of the Navy at a time when Britain was almost constantly at war. To further complicate his task, the King's Bargemaster was also to be based at Somerset House. This required direct access to the Thames, enabling officers of the Navy Board to travel back and forth to the warehouses and dockyards at Deptford and Greenwich. The new building also had to provide living accommodation for the heads of the various departments housed there, including space for cooks, housekeepers, secretaries and many others. Chambers solved this problem by treating the offices as a series of town houses arranged in a quadrangular layout, extending across the whole site of the old palace and its gardens and out into the Thames, some six acres in all. Each department, regardless of size, was allocated a vertical slice of accommodation: six storeys comprising cellar, basement, ground floor, first floor, attic and garret.
By seeking to conceal two storeys below ground and one in the roof, Chambers reduced the visual impact to that of a building only three storeys high, while providing each of the various departments with a large set of rooms and its own separate entrance. A Great Feat of Engineering The construction of this huge project was to be phased. The Strand Block to the north was built first, its foundations laid in 1776 and the building generally completed by 1780. At the same time, piles were driven into the river-bed to form the foundations of the Embankment Building, the front of which was of Aberdeen granite and concealed the massive brick piers supporting the Navy Office block above. This was completed in 1786 and the east and west ranges some two years later. The steeply sloping site with its poor soil quality presented Chambers with numerous difficulties. Although the structure suffered some partial early failures, the subsequent stability of this vast and complex building, its foundations rising from the treacherous soil of the river bank, must be regarded as one of his greatest achievements. Sadly, Chambers did not live to see the completion of Somerset House. He tendered his resignation in March 1795 due to "infirmities incident to old age" and died less than a year later. In the summer of 1795 James Wyatt was appointed to complete the building. After many delays, it was finally declared finished in 1801, even though a part of Chambers' design still remained un-built. Longer and Higher In 1775 William Robinson had conservatively estimated that his "plain substantial structure" could be built for £135,700. By comparison, five years later Chambers reported to the House of Commons that the final cost of his building "will certainly not exceed the sum of £250,000" and, extremely optimistically, that it would "require six years and a half to complete the whole design". As the process of erecting this vast and complex building dragged on, the cost continued to spiral upwards, reaching £306,000 in 1788, £353,000 in 1790, and £462,323 when the account finally closed in 1801, some 25 years after the first foundations had been laid. The Strand Block contained the entrance vestibule described by Chambers as "a general passage to every part of the whole design", and rooms for the Learned Societies, intended for "the reception of useful learning and polite arts". These were the parts of his design where he considered "specimens of elegance should at least be attempted". Accordingly, they were afforded a more decorative treatment than the rest of the building, including sculptural decoration by Wilton, Bacon and Carlini and, in the Royal Academy, a library ceiling painted by Joshua Reynolds. The remaining three sides of the quad were home to various administrative departments. Of these, the Navy Office was given the most prominent position, occupying large parts of the South Wing and having the only elaborate interiors in the office ranges. Although most of the Somerset House interiors were relatively plain, numerous carvings and reliefs decorated the exterior of the building. Their main themes were patriotic - many had a marine motif, celebrating the naval power and imperial ambitions of Britain at a time when both were growing. Additional decoration was provided by John Bacon, who made two sculptures of George III: a bust for the Society of Antiquaries, of which the King was patron, and a full-length bronze for the Courtyard.
One area where Chambers gave free rein to his imagination and flair for interior design was in the construction of two of the staircases. The Royal Academy Exhibition Room situated at the top of the Strand Building was served by a long and winding stair. Chambers included a series of decorated landings, or "stations of repose", from which spectators "might find entertainment, to compensate for the labour past, and be encouraged to proceed". In designing the Navy Staircase (later renamed the Nelson Stair) in the southern part of the building, Chambers had far more space at his disposal. Here he created a sweeping staircase, soaring dramatically into space over the drop to the ground floor. The Navy Staircase suffered severely from bomb damage in 1940 and was carefully restored by Sir Albert Richardson. A Matter of Opinion As with any great public work, there were critics... and eulogists... Self-styled architectural critic Anthony Pasquin made an unflattering reference to the basement offices when he wrote, "In these damp, black and comfortless recesses, the clerks of the nation grope about like moles, immersed in Tartarean gloom, where they stamp, sign, examine, indite, doze, and swear..." An anonymous contributor to the Somerset House Gazette in 1825, however, defended the building and described Pasquin's satire as "malignant criticism" intended to "...expose to general derision the bad taste of the King, the government, and the country..." Later, Henri Taine, peering through the autumnal gloom of 1850 at a building adorned with five decades of accumulated urban grime, was moved to comment, "A frightful thing this huge palace in the Strand which is called Somerset House. Massive and heavy piece of architecture of which the hollows are inked and the porticos blackened with soot..." The final words on Chambers' achievement should, we feel, go to this contributor to the Somerset House Gazette, who wrote, "This then is Somerset Place, the work of an architect, who has manifested in its erection, a vast extent of intellect, as a mathematician, as an engineer, as an artist, and as a philosopher... an upright man." The Eastern Part of the Site Through the 1820s, a furious debate raged in educational and political circles between those who advocated the teaching of the Protestant religion, those who supported the sensitive issue of Catholic emancipation, and others, who favoured a radical departure and wanted to exclude the teaching of theological subjects altogether. A committee was formed, which included the Duke of Wellington, then Prime Minister, to raise funds for a new educational foundation that would put the Protestant religion firmly back into the curriculum. King George IV promised his patronage of what would be known as King's College (London), while the government granted the use of the vacant site to the east of Somerset House on which to build it, with but one condition: that the College should be erected "on a plan which would complete the river front of Somerset House at its eastern extremity in accordance with the original design of Sir William Chambers". The architect Sir Robert Smirke was selected to design the new college. A student at the Royal Academy Schools at Somerset House, Smirke later studied architecture in Greece and Italy and already had the design of the British Museum, a magnificent Greek Revival building, to his credit.
Smirke had been appointed Attached Architect to Somerset House in 1815, and his design for the Legacy Duty Office completed the north west corner of the Courtyard in a style that sympathetically reproduced Chambers' existing elevations. In 1829, Smirke started work on King's College and, with it, the completion of the eastern part of Chambers' river front, which had for so long offended "every eye of taste for its incomplete appearance". Although confronted with similar problems to Chambers before him - the steeply sloping site, unstable ground, river frontage and so on - Smirke ensured that the construction of King's College proceeded apace. The College formally opened in 1831, and by 1835 the whole of the eastern section of the river facade, with its Palladian Bridge and six bays to the east, was mostly completed. When the access road to the new Waterloo Bridge opened to the west of Somerset House in 1813, it afforded passers-by an unsightly view of the rear elevations of the Admiralty houses that formed the western side of the Courtyard. For years, this arrangement continued to attract criticism - in 1841 a writer in the Westminster Review described the back of the west wing as a "rude, unsightly surface of brick wall, patched over with windows placed at random". At the same time, the expansion of the Inland Revenue brought a pressing need for more office space. The government decided to solve both of these problems by building a new wing to Somerset House on the undeveloped westernmost part of the site. James Pennethorne, who had trained under John Nash, was the architect entrusted with the design of the New Wing in 1849, and with it, the completion of Chambers' great scheme. The New Wing was not, in fact, a new building, but a substantial extension and remodelling of the Admiralty residences. Spine corridors were cut to join the terrace as one building, new rooms were built facing on to Lancaster Place to provide office space for the Inland Revenue, including a Court of Appeal, and beneath this, an extensive substructure of vaults and basements provided premises for the Stamping Department. A new front elevation, including principal entrance and forecourt, was constructed in Portland stone and closely followed Chambers' architectural style. At its southern end, the New Wing terminated behind the river facade, leaving Chambers' original design intact. The completion of the New Wing in 1856 was the last phase of a plan conceived some eighty years earlier and finally allowed Somerset House to be appreciated as a three-dimensional building, rather than as one having facades only onto the Strand and the river. Seventy-five architects officially congratulated Pennethorne in a letter published on 1st July of that year: "Your professional brethren are anxious to congratulate you on the successful completion of your design... a striking architectural feature in the entrance to London by Waterloo Bridge". By the second half of the nineteenth century, London's roads were becoming increasingly congested and its sewers unable to cope with the needs of a rapidly growing population. As a part of its activities to modernise the infrastructure of the city, the Metropolitan Board of Works undertook to make new roads, construct the Victoria, Albert and Chelsea embankments, and lay a vast, new drainage system.
The Embankment was intended to carry a new road along the edge of the Thames from Westminster to the City of London and, below ground, to accommodate large sewers and a line for the Metropolitan and District Railway. Construction of the Victoria Embankment to the designs of Sir Joseph Bazalgette began in 1864 and was completed in July 1870. The introduction of the Embankment had the effect of distancing the river from the buildings along its north bank, which was particularly significant for Somerset House, designed as it was to rise directly from the water. The new embankment truncated the elevation of Chambers' masterpiece; the Aberdeen granite base of the Embankment Building was concealed by the substructure for the road, the two Watergates were demoted to being entrances from the new raised carriageway, and the Great Arch with its two adjacent barge-houses became landlocked. The dramatic waterfront design of Sir William Chambers' Somerset House had effectively been destroyed little more than a decade after the building of the New Wing had seen its completion. Reallocation of Space By the 1870s, bowing to pressure from government departments, the learned societies had vacated the North Wing of Somerset House and moved to Burlington House, thus releasing more space for office use. By 1900 the Registry of Births, Marriages and Deaths had taken their place. In 1873 the Admiralty moved from Somerset House to Spring Gardens. Their space in the West Wing was allocated to the Inland Revenue, and a cast iron bridge was built in 1896, spanning between this and their existing accommodation in the New Wing. The South Wing was shared between the Inland Revenue and the Principal Probate Registry, the vaults in the eastern end being used to store original copies of wills.
Following the vacation of the South Wing and Embankment Building by government departments in the last few years, a comprehensive restoration programme has seen galleries and other cultural spaces introduced here. The Embankment Terrace has been reopened and Chambers' great Courtyard has been transformed from a hidden car park into one of the most vibrant public spaces in the capital. These changes have been overseen by architects Inskip and Jenkins, under the direction of the Somerset House Trust and with financial support from the Heritage Lottery Fund.
Related Resources to May Broadcast Online resources and organizations and publications for parents and schools No Child Left Behind Links and publications on the new education law A list of guest panelists on the May 2003 Education News Parents Can Use The U.S. Department of Education's Office of Special Education and Rehabilitative Services (OSERS) is committed to improving results and outcomes for people with disabilities of all ages. OSERS provides a wide array of supports to parents and individuals, school districts and states in three main areas: special education, vocational rehabilitation and research. Technical Assistance and Dissemination Network--U.S. Department of Education This U.S. Department of Education website lists numerous resources and organizations that cover a range of concerns for parents, teachers, administrators and community members interested in special education. Includes information on IDEA 1997, inclusion, transition, early childhood, and much more. The Council for Exceptional Children (CEC) is dedicated to improving educational outcomes for individuals with exceptionalities, students with disabilities, and/or the gifted. CEC advocates for appropriate governmental policies, sets professional standards, provides continual professional development, advocates for newly and historically underserved individuals with exceptionalities, and helps professionals obtain resources necessary for effective professional practice. The National Association of State Directors of Special Education (NASDSE) is dedicated to supporting states in carrying out their mission of ensuring a quality education for students with disabilities. NASDSE provides support to states through training, technical assistance documents, research, policy development, and partnering with other organizations. The National Center to Improve Practice (NCIP) works to improve educational outcomes for students with disabilities by promoting the effective use of instructional technologies among educators. In order to accomplish this goal, NCIP created a national community of educators who play a leading role in promoting and implementing instructional technologies. The National Collaborative on Workforce and Disability for Youth (NCWD/Youth) assists state and local workforce development systems to better serve youth with disabilities. The NCWD/Youth is composed of partners with expertise in disability, education, employment, and workforce development issues. The National Institute for Literacy (NIFL) is a federal organization that shares information about literacy and supports the development of high-quality literacy services so all Americans can develop essential basic skills. The National Institute of Child Health and Human Development (NICHD) is an institution within the National Institutes of Health and the U.S. Department of Health & Human Services. The NICHD seeks to assure that every individual is born healthy and wanted, and that all children have the opportunity to fulfill their potential for a healthy and productive life unhampered by disease or disability. The Opelika City School System in Opelika, Alabama joined together with community leaders to provide opportunities for students to develop appropriate skills and behaviors for their lives after graduation.
Students gain real-world experiences that help make the transition from school an appropriate and meaningful process.

The Parent Advocacy Coalition for Education Rights (PACER Center) was created by parents of children with disabilities to help other parents and families facing similar challenges. Its mission is to expand opportunities and enhance the quality of life of children and young adults with disabilities and their families, based on the concept of parents helping parents.

Parent to Parent of Miami works with community members and parents to build and sustain active networks of families who have children with disabilities. Parent to Parent of Miami supports those families by providing information, educational training, support, emergency assistance and advocacy to help them better approach their child's disability.

The website and television program Reading Rockets looks at different reading strategies that help young children learn to read, features practical advice for parents, and includes the personal stories of children, families, and teachers. Reading Rockets is a national service of public television station WETA in Washington, D.C., and is funded by a grant from the U.S. Department of Education.

Recording for the Blind & Dyslexic (RFB&D) is the nation's educational library for those with print disabilities. For 54 years, RFB&D has been an invaluable educational resource, enabling those with print disabilities to complete their educations, advance their careers, and gain self-esteem.

Located near Tucson, Arizona, the Vail School District adopted the Screening to Enhance Equitable Education Placement (STEEP) program to identify and address learning problems in classrooms and with individual students. The STEEP program uses comprehensive screening to determine how to provide the right type of assistance. With the STEEP model, fewer children need special education, and special education becomes an option only after every effort has been made to address the child's needs in general education.

The What Works Clearinghouse (WWC) provides educators, policymakers, and the public with a central, independent, and trusted source of scientific evidence about what works in education. The WWC is administered by the U.S. Department of Education's Institute of Education Sciences.

The Helping Your Child publication series focuses on providing parents with the tools and information necessary to help their children succeed in school and life.

The Individuals with Disabilities Education Act of 1997 (IDEA) and its associated regulations are available in several different formats, including enhanced versions that take full advantage of the linking capabilities of the web.

The National Institute for Literacy offers a series of publications with research-based resources and tips designed for parents, caregivers and teachers of children from birth to preschool, and from kindergarten to third grade.

A New Era: Revitalizing Special Education for Children and Their Families is the President's Commission on Excellence in Special Education report on ways to strengthen America's four decades of commitment to educating children with disabilities.

Another booklet provides suggestions for how parents can help their children be ready to read and ready to learn, and offers advice for selecting a good early reading program. First Lady Laura Bush highlights an initiative that encourages reading in classrooms and at home.
The accompanying publication outlines various teacher recruitment and reading programs.

The No Child Left Behind official website provides resources and information to help answer questions about the new education law signed by President Bush on January 8, 2002. On this site the U.S. Department of Education offers information about school choice, supplemental services, testing and accountability, as well as parenting tips on reading and homework.

The Achiever is a biweekly electronic newsletter published by the U.S. Department of Education that contains news and articles on education reform, tips for parents and teachers, and resources related to No Child Left Behind.

Guest Panelists

Robert Pasternack is the Assistant Secretary for Special Education and Rehabilitative Services at the U.S. Department of Education. In this position, Dr. Pasternack serves as principal adviser to the U.S. Secretary of Education on all matters related to special education and rehabilitative services. He previously served as state director of special education for the New Mexico State Department of Education, where he led a number of initiatives designed to improve results for students with disabilities. Dr. Pasternack has worked with students with disabilities and their families for more than 25 years, both as an educator and as a clinical director.

Amanda VanDerHeyden is a researcher with the Vail School District in Arizona. She is affiliated with the Early Intervention Institute at Louisiana State University Health Sciences Center. Her research interests include applied behavior analysis in educational settings and, more generally, finding ways to help children learn. Dr. VanDerHeyden has helped Vail schools implement the Screening to Enhance Equitable Education Placement (STEEP) model of identifying and addressing students with learning problems. As co-creator of the STEEP model with Dr. Joseph Witt, she has published many articles on the model and other school psychology issues. She serves on the editorial boards of the Journal of Early Intervention and the Journal of Behavioral Education.

Laurie Emery is the principal of Acacia Elementary School in Vail, Arizona, near Tucson. Acacia serves a primarily rural population and is a Title I school. Acacia uses the Screening to Enhance Equitable Education Placement (STEEP) model of identifying and addressing students with learning problems. Before becoming a principal, Laurie was a 5th grade teacher in Vail. She has a Master of Education in Educational Leadership from Northern Arizona University. Mrs. Emery's particular areas of interest are assessment and teacher training.

Reid Lyon is a research psychologist and the Chief of the Child Development and Behavior Branch within the National Institute of Child Health and Human Development (NICHD) at the National Institutes of Health (NIH). A former third grade teacher and school psychologist, Dr. Lyon has taught children with learning disabilities and is experienced in detecting learning disabilities at an early age. Dr. Lyon has authored and edited over 100 publications on learning differences and disabilities in children. At NICHD, Dr. Lyon is currently responsible for special education-related research and for reporting findings to Congress and other government agencies.

Isabel Garcia is the Executive Director of Parent to Parent of Miami. Her journey in the disability movement began shortly after the birth of her daughter, Daniela, who is now 20 years old and has cerebral palsy.
In 1988, Mrs. Garcia became a volunteer support parent with the newly founded Parent to Parent of Miami and discovered that a key to healing is closely related to helping others. Mrs. Garcia's personal experiences and networking with parents of disabled children inspired her to help others in similar situations. She also serves as the director of Community Parent Resource Centers, a program funded by the U.S. Department of Education. Her 17-year-old daughter, Maritza, has been accepted to attend the University of Florida in the fall.

L.C. Thomas is an associate minister, an ink technician and a parent of seven children in Opelika, Alabama. Mr. Thomas has two school-age sons with learning disabilities presently enrolled in the Opelika City School System's special education program. With the help of the transition program's challenging academics and its vocational and community experiences, Mr. Thomas's eldest son is about to graduate from high school, attend community college and pursue a vocational career.
In the past decade, a variety of methodologies have evolved for assessing rural community needs quickly and with the participation of the communities themselves. These include Rapid Rural Appraisal (RRA), Participatory Rural Appraisal (PRA), and Diagnosis and Design (D&D). A number of excellent references explain how to use these information-gathering tools (see "For further reading"). These usually start with a background search of available literature from local government offices and other sources, such as research institutes. Studies that appear unrelated to non-wood products may still describe important local values concerning the forest or people's access to it. For example, a study on the impact of a pulp and fuelwood plantation in West Africa contained local people's complaints of reduced availability of forest produce and listed the most important types: bushmeat, chewsticks, canes, poles and other housing materials (Falconer, 1990). Studies of nutrition, land tenure and agriculture can provide valuable indicators of local forest use.

This background search is usually followed by a combination of household and/or group interviews or surveys and mapping exercises. A survey in six Asian countries asked households to list the forest species they used, rank the species by preference based on their use-value, and list the different plant parts used (Mehl, 1991). Households can further help by keeping weekly estimates, in quantitative terms, of the products they consume and sell. A sketch of how such household rankings might be tallied into a community-level priority list follows.
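The manual does not prescribe a tallying method; one illustrative possibility (not from the source) is a simple Borda-style count that combines each household's preference ranking into a community-wide ordering. The species names and survey responses below are hypothetical placeholders.

```python
from collections import defaultdict

def tally_rankings(household_rankings):
    """Combine per-household preference rankings into a community list.

    Each ranking is an ordered list, most-preferred species first.
    A Borda-style score gives n-1 points to a household's top choice,
    n-2 to the next, and so on, then sums the scores across households.
    """
    scores = defaultdict(int)
    for ranking in household_rankings:
        n = len(ranking)
        for position, species in enumerate(ranking):
            scores[species] += n - 1 - position
    # Highest total score = highest community-wide priority.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical survey responses (species names are placeholders).
responses = [
    ["rattan", "bamboo shoots", "wild honey"],
    ["wild honey", "rattan"],
    ["bamboo shoots", "rattan", "medicinal bark"],
]

for species, score in tally_rankings(responses):
    print(f"{species}: {score}")
```

A weighted variant could multiply each household's scores by the quantities it reports consuming or selling, tying the ranking back to the weekly estimates mentioned above.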
Text box 3.2: Women's involvement in processing in Brazil

In Acre, women have responsibility for processing all plants intended for human and animal consumption: foods, beverages, spices, medicines and animal feed. Women in the area have refined skills in managing and exploiting some 150 species. Plants for food include wild and domesticated fruits and nuts, and field and garden crops. Processed products range from jams, chocolate and cooking oil to coffees and herbal teas. The women use over 50 plants for medicines. Pest repellents also come from the forest. Both men and women make baskets, brooms, hats and other craft products. More than half of a group of women interviewed replied enthusiastically that if a market existed, they would make time to regularly prepare items for sale (Kainer and Duryea, 1992).

Local markets and prices can help to indicate which non-wood resources are important. Where local market information systems exist (see Chapter 7), they may, with some adjustments, help to gauge local harvest rates for key products. Market figures alone, however, do not supply the full picture. In Zaire, studies found that most small game was traded or exchanged locally or consumed within the household and never recorded (Redford et al., 1995).

The importance of women's concerns

For an accurate picture of local resource use, forest managers should identify the groups that depend most on the resource and monitor their use as a sensitive gauge. To optimize equity and stability, managers should also consider how proposed activities would affect these groups. Although women tend to depend more on non-wood resources for household use and income than men, they frequently have less voice in resource management decisions, and their priorities are often overlooked. In Latin America women have large roles in hunting using certain technologies (nets, basket traps and poison fishing) but not others; in some societies, women are the ones who identify and track animals (Redford, op. cit.). Assessments of local NWFP use should recognize these variations and make a special effort to include women and address their needs.

Other indicator groups

Other groups that tend to rely heavily on forest products for food and other subsistence needs include (FAO, 1989): the landless poor, who often depend on common property resources for fodder, fuel, handicraft materials and other needs; forest dwellers and shifting cultivators, who frequently lack secure land tenure and are squeezed out when pressures increase on forest resources; small-farm families, who may lack resources for subsistence production and who experience declining fertility and shrinking farm size through inheritance; pastoralists and herders, who are vulnerable to droughts and encroachment by cultivators and government programmes; and young children, who depend on forest snacks for certain vitamins. By identifying these vulnerable groups and the non-wood resources on which they rely, forest managers can anticipate and prevent (or reduce) conflicts and shortages caused by changes in forest management.

For major marketed products, subsector analysis helps in understanding the commercial processes at work. A full subsector study can take a month or more to complete, but parts of it can be done in several days and provide useful information on local market flows. A full subsector analysis uncovers a range of information, including (ATI, 1995): the local market's main functions, technologies, participants and product flows; a summary of participants and alternative channels for product flows, and trends among channels; regulations and policies that influence local product flows; and the number of enterprises that market a product, sales value amounts, employment levels and the increase in product value at each stage.

Text box 3.3: Adapting assessment methods: the example of mangroves

Assessing resources and how nearby communities use them is a site-specific task. Resources range from desert oases and semi-arid savannas to montane forests, from herbs and vines to wildlife. Resource managers must adapt the assessment methods to suit the local species, ecosystem and human environment. Mangroves and other wetlands, for example, present a unique set of conditions for management and are subject to different pressures than land forests. In mangroves, non-wood activities such as fisheries often generate much more income than timber harvests. Mangroves can also create income through algae cultivation (for example, for export) and producing salt by evaporating seawater (FAO, 1989). Mangrove products include tannin for leather curing, medicines, honey, vinegar, cooking oil, wildlife and fermented drinks. Mangroves contribute to local food security particularly through their support of coastal habitats for fish, shrimp, oysters, crabs, cockles and molluscs. They also provide plant-borne food, such as nipa palm fruits, and high-protein fodder from Rhizophora leaves. Pressures unique to mangrove ecosystems include land reclamation efforts, destructive and unmanaged construction of fish and shrimp ponds, and harvesting for fuelwood and poles. Mangroves are also very sensitive to pollution from urban wastes, food-processing industries, power stations and dam construction. To manage mangroves effectively, a manager needs to know the dynamics of water bodies and forest cover. Assessment of mangrove ecosystems and their products requires more interdisciplinary collaboration than for dryland forests.
This makes it especially important to clearly define data needs before starting to collect them. The general types of data needed are still the same as described in Chapter 2 (resource biology, socio-economic information, existing and future demand, and operational and institutional information), but mangroves involve a variety of particular trade-offs. For example, there are socioeconomic trade-offs between fisheries and timber harvests. Additional logistical considerations include river transportation and pond or canal construction.

Subsector analysis starts by defining the product's end market. In the case of rattan, for example, end market products could be furniture and handicrafts for both local and export sale. After identifying the main end markets, the analysis should describe each step from growth to harvest to final consumer; this sequence is known as the product's value chain (ATI, op. cit.). The analysis identifies the participants at each stage (collectors, processors, government agencies, NGOs, traders, market agents, etc.). For each stage, it lists all steps involved: What is required to complete each stage? What set of skills, equipment and capital? Which participant performs which step? When the information from this part of a subsector market analysis is combined with the results of the rapid assessment of subsistence use, a picture emerges of (1) who collects and uses NWFPs locally, (2) who gains by them and (3) a rough estimate of what quantities are involved. The sketch below illustrates one way such value-chain data might be organized.
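Neither ATI (1995) nor the text specifies a data format; as a hypothetical illustration, each value-chain stage could be recorded with its participants, required inputs, and the product's value on leaving that stage, so the value added at each step falls out by subtraction. All names and figures below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str            # e.g. harvest, processing, wholesale
    participants: list   # who performs this stage
    requirements: str    # skills, equipment and capital needed
    exit_value: float    # product value when it leaves this stage

def value_added(chain):
    """Report the increase in product value at each stage of the chain."""
    previous = 0.0
    for stage in chain:
        gain = stage.exit_value - previous
        print(f"{stage.name}: +{gain:.2f} ({', '.join(stage.participants)})")
        previous = stage.exit_value

# Hypothetical rattan furniture value chain (figures are illustrative).
rattan_chain = [
    Stage("harvest", ["village collectors"], "machetes, forest access", 10.0),
    Stage("processing", ["local workshops"], "drying and bending equipment", 35.0),
    Stage("wholesale", ["traders"], "transport and storage capital", 60.0),
    Stage("retail", ["furniture shops"], "market stall or shopfront", 100.0),
]

value_added(rattan_chain)
```

Laying the data out this way makes it easy to see which participants capture the largest share of the final value, one of the questions a subsector study is meant to answer.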
Few, if any, forest resources are entirely unmanaged. Even where the forest appears undisturbed, some form of management is probably taking place. For example, the Kayapo Indians of the Amazon basin plant species along forest paths and in natural forest openings for food, medicine, building materials, dyes and insect repellent. Damar forests in Sumatra, Indonesia, appear quite natural but have been managed for generations to obtain damar resin and other products. In many cases, local forest management has increased the diversity of forest species yielding non-wood products. In West Kalimantan, Indonesia, Dayak communities have broadened the distribution and increased the abundance of products, including illipe nuts (Shorea spp.) and fruits of durian (Durio zibethinus), rambutan (Nephelium spp.) and mangosteen (Garcinia mangostana), as well as a timber species, Eusideroxylon zwageri. In Brazil's eastern Amazon, on the Ilha das Onças, people have maintained a variety of fruit and latex species as well as wood-producing species (Reis, 1995). In many cases, local management strategies build on basic practices such as selective weeding around valued plants, enrichment planting, and occasional selective harvesting of timber species to open the canopy and stimulate seedling growth. These elements form a sound basis for sustainable forest management (see Chapter 4). To learn what management practices exist in an area, prospective forest managers should interview older local people and forest dwellers (both men and women), spending time with them in the forest.

In summary: assess how communities near the forest already use the forest resource for non-wood products, and the influence of local cultural, spiritual, social and economic values. This helps to fully account for existing demand and prevent over-harvesting. It also helps to identify the types of improvements most likely to succeed locally. For this assessment, examine household subsistence uses; review background materials and use rapid appraisal methodologies to gauge the priority household uses. Gauge the importance of NWFPs in local markets, for example using subsector analysis; this method helps identify who sells, who buys, and how the products flow through the market. Investigate local management systems for the resource, interviewing older villagers, forest dwellers and forest medicine providers to uncover information on how people use these products. These systems can include selective weeding around valued species, enrichment planting of these species in the forest and selective felling. Even where the forest appears unmanaged, local management systems can be important and offer keys to sustainable management. Look for indicator groups within the community - people who especially rely on NWFPs - in order to understand people-resource dynamics; this allows a manager to monitor the health of the forest and of nearby communities. Ensure that women's interests and preferences receive full weight in plans for forest management, in recognition of their role in product collection, processing and marketing.

With a sound understanding of the biological resource and its relationship to the human environment, the forest manager or community is ready for the next step - identifying opportunities.

References

Arnold, J.E.M. 1995. Socio-economic benefits and issues in non-wood forest products use. In Report of the expert consultation on non-wood forest products, Yogyakarta, Indonesia, 17-27 January 1995. Non-Wood Forest Products 3. FAO, Rome.

ATI. 1995. Non-timber forest products manual. Draft version. Appropriate Technology International, Washington, D.C.

Falconer, J. 1990. The major significance of "minor" forest products: the local use and value of forests in the West African humid forest zone. Community Forestry Note No. 6. FAO, Rome.

Falconer, J. 1992. Non-timber forest products in southern Ghana: a summary report. ODA Forestry Series No. 2. UK Overseas Development Administration, London.

FAO. 1989. Forestry and food security. Forestry Paper No. 90. FAO, Rome.

FAO/Food and Nutrition Division. 1995. Non-wood forest products in nutrition. In Non-wood forest products for sustainable forestry, Yogyakarta, Indonesia, 17-27 January 1995. Non-Wood Forest Products 3. FAO, Rome.

Fischer, F.U. 1993. Beekeeping in the subsistence economy of the Miombo savanna woodlands of south-central Africa. In NTFPs - three views from Africa. Rural Development Forestry Network Paper 15c. Overseas Development Institute, London.

Kainer, K.A., and Duryea, M.L. 1992. Tapping women's knowledge: plant resource use in extractive reserves, Acre, Brazil. Economic Botany 46(4):408-425.

Mehl, C.B. 1991. Trees and farms in Asia: an analysis of farm and village forest use practices in South and Southeast Asia. Report Number 16. Multipurpose Tree Species Research Network, Bangkok.

Poole, P.J. 1993. Indigenous peoples and biodiversity protection. In Davis, S., ed., The social challenge of biodiversity conservation. Working Paper No. 1. Global Environment Facility, Washington, D.C.

Redford, K.H., Godshalk, R., and Asher, K. 1995. What about the wild animals?: wild animal species in community forestry. Community Forestry Note 13. FAO, Rome.

Reis, M. 1995. Resource development for non-wood forest products. In Report of the expert consultation on non-wood forest products, Yogyakarta, Indonesia, 17-27 January 1995. Non-Wood Forest Products 3. FAO, Rome.

Uraivan Tan-Kim-Yong. 1993. The Karen culture of watershed forest management: a case study at Ban Om-Long. Resource Management and Development Center, Chiang Mai University, Chiang Mai, Thailand.

Vantomme, P. 1995. Information requirements and planning principles for managing non-wood forest resources in mangrove forests. In Report of the expert consultation on non-wood forest products, Yogyakarta, Indonesia, 17-27 January 1995. Non-Wood Forest Products 3. FAO, Rome.
Wickens, G.E. 1991. Management issues for development of non-timber forest products. Unasylva 42(165):3-8.

For further reading

FAO. 1990. The community toolbox: the idea, methods and tools for participatory assessment, monitoring and evaluation in community forestry. Community Forestry Field Manual 2. FAO, Rome.

FAO. 1990. Community forestry: rapid appraisal. Community Forestry Note 3. FAO, Rome.

FAO. 1991. Non-wood forest products: the way ahead. FAO Forestry Paper No. 97. FAO, Rome.

FAO. 1991. Guidelines for integrating nutrition concerns into forestry projects. Community Forestry Manual 3. FAO, Rome.

FAO. 1994. Tree and land tenure rapid appraisal tools. Forests, Trees and People Programme Manual 4. FAO, Rome.

FAO. 1995. The gender analysis and forestry training package. FAO, Rome.

Montagne, P. 1985. Contributions of indigenous silviculture to forestry development in rural areas: examples from Niger and Mali. Rural Africana 23-24 (Fall 1985-Winter 1986):61-65.

R.A. 1994. The present situation of non-timber forest products in Costa Rica. Working Document No. 7, Project for Conservation and Sustainable Development in Central America. CATIE, Turrialba, Costa Rica.
Ecoregions of Bhutan

Bhutan has six ecoregions, listed here from north to south:
- Eastern Himalayan alpine shrub and meadows
- Eastern Himalayan subalpine conifer forests
- Eastern Himalayan broadleaf forests
- Himalayan subtropical pine forests
- Himalayan subtropical broadleaf forests
- Terai-Duar savanna and grasslands

The transition from the Himalayan subtropical broadleaf forests ecoregion to the Brahmaputra Valley semi-evergreen forests ecoregion of northern India approximately follows the southern border of Bhutan.

The Eastern Himalayan alpine shrub and meadows ecoregion represents the alpine scrub and meadow habitat along the Inner Himalayas to the east of the Kali Gandaki River in central Nepal. Within it are some of the tallest mountains in the world, including Everest, Makalu, Dhaulagiri, and Jomalhari, which tower far above the Gangetic Plains. The alpine scrub and meadows in the eastern Himalayas are nested between the treeline at 4,000 meters (m) and the snowline at about 5,500 m, and extend from the deep Kali Gandaki gorge through Bhutan and India's northeastern state of Arunachal Pradesh to northern Myanmar.

The ecoregion supports one of the world's richest alpine floral displays, which becomes vividly apparent during the spring and summer when the meadows explode into a riot of color from the contrasting blue, purple, yellow, pink, and red flowers of alpine herbs. Rhododendrons characterize the alpine scrub habitat closer to treeline. The tall, bright-yellow flower stalk of the noble rhubarb, Rheum nobile (Polygonaceae), stands above all the low herbs and shrubs like a beacon, visible from across the valleys of the high Himalayan slopes. The plant richness in this ecoregion sitting at the top of the world is estimated at more than 7,000 species, three times what is estimated for the other alpine meadows in the Himalayas. In fact, among the Indo-Pacific ecoregions, only the famous rain forests of Borneo are estimated to have a richer flora. Within the species-rich landscape are hotspots of endemism, created by the varied topography, which results in very localized climatic variations and high rainfall, enhancing the ability of specialized plant communities to evolve. Fittingly, the ecoregion also holds the record for the plant growing at the highest elevation in the world: Arenaria bryophylla, a small, dense, tufted cushion-forming plant with small, stalkless flowers, was recorded at an astonishing 6,180 m by A. F. R. Wollaston.

The ecoregion has fourteen protected areas that cover more than 11,680 km2, including several (Annapurna, Makalu Barun, Sagarmatha, Jigme Dorji, and Sakteng) that exceed 1,000 km2; Annapurna and Jigme Dorji exceed 2,500 km2. Although the total area protected represents about 30 percent of the ecoregion's area, the reserves are inequitably distributed. Most of the protected areas are in Nepal and Bhutan, whereas the eastern section of the ecoregion, especially in Myanmar, receives little or no formal protection. Because of the high species turnover along the east-west axis, more equitable protection is necessary for better representation of the ecoregion's biodiversity. Moreover, about half of the land within the existing protected areas is bare rock or permanent ice, not very important habitat for biodiversity conservation.
The Eastern Himalayan subalpine conifer forests ecoregion represents the belt of conifer forest between 3,000 and 4,000 meters, from east of the Kali Gandaki River in Nepal through Bhutan and into the state of Arunachal Pradesh in India. These forests usually are confined to the steeper, rocky, north-facing slopes and are therefore inaccessible to human habitation and cultivation.

The subalpine conifer forests represent the transition from the forested ecoregions of the Himalayas to treeless alpine meadows and boulder-strewn alpine screes. Their ecological role within the interconnected Himalayan ecosystem, which extends from the alluvial grasslands along the foothills to the high alpine meadows, makes the forests of this ecoregion a conservation priority. Conservation of Himalayan biodiversity is contingent on protecting the interconnected processes among the Himalayan ecosystems. For instance, several Himalayan birds and mammals make seasonal altitudinal migrations and depend on contiguous habitats that permit these movements. The integrity of the watersheds of the rivers that originate in the high mountains of this majestic range depends on the intactness of habitat from the high elevations to the lowlands. If any of the habitat layers are lost or degraded, these processes will be disrupted.

The ecoregion straddles the transition from the southern Indo-Malayan fauna to the northern Palearctic fauna. Here tigers yield to snow leopards, and sambar are replaced by blue sheep. But the ecoregion also has its own specialized flora and fauna, such as the musk deer and red panda, which are limited to these mature temperate conifer forests.

The Eastern Himalayan broadleaf forests ecoregion represents the band of temperate broadleaf forest between 2,000 and 3,000 meters, stretching from the deep Kali Gandaki River gorge in central Nepal eastward through Bhutan and into India's eastern states of Arunachal Pradesh and Nagaland. It is one of the few Indo-Pacific ecoregions that is globally outstanding for both species richness and levels of endemism. The eastern Himalayas are a crossroads of the Indo-Malayan, Indo-Chinese, Sino-Himalayan, and East Asiatic floras, as well as several ancient Gondwana relicts that have taken refuge here. Overall, this ecoregion is a biodiversity hotspot for rhododendrons and oaks; for instance, Sikkim has more than fifty rhododendron species, and there are more than sixty species in Bhutan.

In addition to its outstanding levels of species diversity and endemism, the ecoregion plays an important role in maintaining altitudinal connectivity between the habitat types that make up the larger Himalayan ecosystem. Several birds and mammals exhibit seasonal altitudinal migrations and depend on contiguous habitat up and down the steep Himalayan slopes for unhindered movements. Habitat continuity and intactness are also essential to maintain the integrity of watersheds along these steep slopes. If any of the habitat layers, from the Terai and Duar grasslands along the foothills through the broadleaf forests and conifers to the alpine meadows in the high mountains, are lost or degraded, these processes will be disrupted. For instance, several bird species are found in the temperate broadleaf forests of Bhutan, where the habitat is more intact and continuous with the subtropical broadleaf forests lower down; in Nepal, where the habitat continuity has been disrupted, these same birds have limited ranges.
The fifteen protected areas that extend into the broadleaf forests ecoregion cover about 5,800 km2 (7 percent) of its area. With the exception of Namdapha, none exceeds 1,000 km2. However, several very large reserves overlap multiple ecoregions, with only parts of each reserve represented in this one. Examples include Thrumsing La, Jigme Dorji, and Black Mountains national parks in Bhutan. Jigme Dorji National Park exceeds 4,000 km2 and sprawls across three ecoregions, including the alpine meadows, the subalpine conifer forests, and the temperate broadleaf forests represented by this ecoregion. Two others, Kulong Chu and Black Mountains, exceed 1,000 km2, and Cha Yu, Makalu-Barun, Mehao, and Thrumsing La are each more than 500 km2. Bhutan recently revised its protected area system to link the existing reserves, and plans to develop similar linkages through conservation landscapes have been proposed for other areas in the eastern Himalayas. These plans use ecoregions as basic conservation units for the representation of biodiversity.

The Himalayan subtropical pine forests ecoregion extends as a long, disjunct strip from Pakistan in the west, through the states of Jammu and Kashmir, Himachal Pradesh, and Uttar Pradesh in northern India, into Nepal and Bhutan. Although Champion and Seth indicate the presence of large areas of Chir pine in Arunachal Pradesh, the easternmost extent of large areas of Chir pine is in Bhutan. The Himalayan subtropical pine forests are the largest in the Indo-Pacific region, stretching through most of the 3,000-kilometer length of this, the world's youngest and highest mountain range. Some scientists believe that climate change and human disturbance are causing the lower-elevation oak forests to be gradually degraded and invaded by the drought-resistant Chir pine (Pinus roxburghii), the dominant species in these subtropical pine forests. Biologically, the ecoregion does not harbor exceptionally high levels of species richness or endemism, but it is a distinct facet of the region's biodiversity that should be represented in a comprehensive conservation portfolio.

More than half of this ecoregion's natural habitat has been cleared or degraded. In central and eastern Nepal, terraced agricultural plots, especially between 1,000 and 2,000 m, have replaced nearly all the natural forest. Other than in the less populated western regions, little natural forest remains in Nepal. Similarly, habitat loss is widespread in Pakistan and in the Indian states of Jammu and Kashmir, Himachal Pradesh, and Uttar Pradesh. The few larger blocks of remaining habitat are now found in Bhutan.

The Himalayan subtropical broadleaf forests ecoregion represents the east-west band of forest along the Siwaliks, or Outer Himalayan Range, lying between 500 and 1,000 meters (m). The ecoregion achieves its greatest coverage in the middle hills of central Nepal, but the long, narrow ecoregion extends through Darjeeling into Bhutan and also into the Indian state of Uttar Pradesh. The Kali Gandaki River, which has gouged the world's deepest river valley through the Himalayan Range, bisects the ecoregion. The ecoregion includes several forest types along its length as it traverses an east-to-west moisture gradient.
The forest types include Dodonea scrub, subtropical dry evergreen forests of Olea cuspidata, northern dry mixed deciduous forests, dry Siwalik sal (Shorea robusta) forests, moist mixed deciduous forests, subtropical broadleaf wet hill forests, northern tropical semi-evergreen forests, and northern tropical wet evergreen forests. The ecoregion also forms a critical link in the chain of interconnected Himalayan ecosystems that extend from the Terai and Duar grasslands along the foothills to the high alpine meadows at the top of the world's highest mountain range. For instance, several Himalayan birds and mammals exhibit seasonal altitudinal migrations and depend on contiguous habitat to permit these movements. Conservation actions in the Himalayas must therefore pay due attention to habitat connectivity, because degradation or loss of a habitat type along this chain will disrupt these important ecological processes.

The Terai-Duar savanna and grasslands ecoregion sits at the base of the Himalayas, the world's youngest and tallest mountain range. About 25 kilometers wide, this narrow lowland ecoregion is a continuation of the Gangetic Plain. It stretches from southern Nepal's Terai, Bhabar, and Dun Valleys eastward to Banke and covers the Dang and Deokhuri Valleys along the Rapti River. A small portion reaches into Bhutan, and each end crosses the border into the Indian states of Uttar Pradesh and Bihar. This ecoregion contains the highest densities of tigers, rhinos, and ungulates in Asia. Among the features that elevate it to Global 200 status are the diversity of ungulate species and the extremely high levels of ungulate biomass recorded in riverine grasslands and grassland-forest mosaics. The world's tallest grasslands, found in this ecoregion, are the analogue of the world's tallest forests and a phenomenon unto themselves. Very tall grasslands are rare worldwide in comparison with short grasslands and are the most threatened; tall grasslands are indicators of mesic or wet conditions and nutrient-rich soils, and most have been converted to agricultural use.

Ecoregions are areas that share a large majority of their species and ecological dynamics, share similar environmental conditions, and interact ecologically in ways that are critical for their long-term persistence. Scientists at the World Wildlife Fund (WWF) have established a classification system that divides the world into 867 terrestrial ecoregions, 426 freshwater ecoregions and 229 marine ecoregions, reflecting the distribution of a broad range of fauna and flora across the entire planet.
Bras d'Or Lake

Location: Cape Breton Island, Nova Scotia
Primary outflows: Gulf of St. Lawrence
Max. length: 100 kilometres (62 mi)
Max. width: 50 kilometres (31 mi)
Surface area: 1,099 km2 (424 sq mi)
Max. depth: 287 m (942 ft)
Water volume: 32,000,000,000 m3 (4.2×10^10 cu yd)
Shore length: 1,000 kilometres (621 mi), excluding islands (shore length is not a well-defined measure)
Surface elevation: 0 m (sea level)

Bras d'Or Lake is an inland sea, or large body of partially fresh/salt water, in the centre of Cape Breton Island in the province of Nova Scotia, Canada. It is sometimes referred to as the Bras d'Or Lakes or the Bras d'Or Lakes system; however, its official geographic name is Bras d'Or Lake, as it is a singular entity. Canadian author and yachtsman Silver Donald Cameron describes Bras d'Or Lake as "A basin ringed by indigo hills laced with marble. Islands within a sea inside an island."

The lake is connected to the North Atlantic by natural channels: the Great Bras d'Or Channel, north of Boularderie Island, and the Little Bras d'Or Channel, to the south of Boularderie Island, connect the northeastern arm of the lake to the Cabot Strait. The Bras d'Or is also connected to the Atlantic Ocean via the Strait of Canso by means of a lock canal completed in 1869, the St. Peters Canal, at the southern tip of the lake.

There are several competing explanations of the origin of the name "Bras d'Or". The most popular is that the first Europeans to discover and subsequently settle the area were French, who named the lake Bras d'Or, meaning "arm of gold", likely referring to the sun's rays reflected upon its waters. However, on maps of 1872 and earlier, the lake is named "Le Lac de Labrador" (or more simply "Labrador"), and this is more likely the true derivation of the present name. The literal meaning of Labrador is "laborer". In a paper prepared for the Nova Scotia Historical Society, the late Dr. Patterson wrote that he believed the name Bras d'Or came from the Breton form of Bras d'eau, "arm of water" or "arm of the sea". The Mi'kmaq Nation named the lake Pitu'pok, roughly translated as "long salt water".

With an area of approximately 1,099 square kilometres, Bras d'Or Lake measures roughly 100 km in length and 50 km in width. Surrounded almost entirely by high hills and low mountains, the shape of the lake is dominated by the Washabuck Peninsula in the centre-west, Boularderie Island in the northeast, and a large peninsula extending from the centre-east dominated by the Boisdale Hills. The Washabuck Peninsula and Boisdale Hills divide the lake into northern and southern basins, linked by the 1 km wide Barra Strait. The maximum depth of Bras d'Or Lake, 287 metres, is found in the St. Andrews Channel.

The effect of local topography has resulted in the following major components of Bras d'Or Lake:
- Great Bras d'Or
- Little Bras d'Or
- St. Andrews Channel
- St. Patricks Channel
- Baddeck Bay
- Nyanza Bay
- Whycocomagh Bay
- Denys Basin
- St. Peters Inlet
- East Bay
- West Bay

The largest part of the lake measures approximately 25 kilometres across in the southern basin, framed by East Bay and West Bay, with Denys Basin to the north and St. Peters Inlet to the south.
The Barra Strait is crossed by highway and railway bridges running between the Washabuck Peninsula and the Boisdale Hills.

The following major rivers empty into the lake (which can also be described as an inland sea or a gulf):
- River Denys
- Middle River
- Baddeck River
- Skye River
- Georges River
- Washabuck River

The restricted tidal exchange at its three points of contact with the Atlantic Ocean, coupled with significant freshwater drainage from the many rivers and streams in the lake's watershed, causes the lake water to have less salinity than the surrounding ocean. Although salinity varies throughout the lake system, the water approximates a mixture of one-third fresh water and two-thirds sea water, and hence may be termed brackish. Without much exchange with the ocean outside the island, the lake's water quality is threatened by runoff from human activities in the surrounding watershed, especially from sewage treatment plants and septic tanks, and by ocean-going ships and small craft plying the lake system. Partly for this reason, the lake was designated a UNESCO Biosphere Reserve. The sketch below works out what the stated mixing ratio implies for salinity.
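The article gives the mixture but not the resulting salinity. As a rough illustration (assuming a typical open-ocean salinity of about 35 parts per thousand, a figure not stated in the source), a two-thirds seawater fraction implies brackish water in the low twenties of parts per thousand.

```python
def mixture_salinity(seawater_fraction, seawater_ppt=35.0, fresh_ppt=0.0):
    """Salinity of a simple two-component mixture, in parts per thousand."""
    return seawater_fraction * seawater_ppt + (1 - seawater_fraction) * fresh_ppt

# Two-thirds sea water, one-third fresh water, as described in the article.
salinity = mixture_salinity(2 / 3)
print(f"approximate salinity: {salinity:.1f} ppt")  # ~23.3 ppt

# Brackish water is conventionally about 0.5-30 ppt, so the lake qualifies.
assert 0.5 < salinity < 30
```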
Bras d'Or Lake is home to an array of wildlife, with successful lobster and oyster fisheries as well as the pursuit of other marine species. The lake's largely undeveloped shorelines support significant concentrations of bald eagles.

A renowned summer vacation destination, the area around Bras d'Or Lake has become popular with recreational boaters. As the majority of cruising vessels enter the lake from the south via Lennox Passage and the St. Peters Canal, the St. Peter's Lions Club Marina, located in Strachan's Cove (about 900 metres west of the canal) in St. Peter's, Nova Scotia, is the largest marina and has the most boating services. Baddeck is home to two marinas, two boatyards and the Bras d'Or Yacht Club; Ben Eoin has an 83-slip marina which opened in May 2013; and the Barra Strait Marina in Grand Narrows reopened in 2012. Launch ramps for trailered boats are scarce outside the lake villages. The heavily indented shoreline of the lake system, with bold shores and numerous protected coves and harbours offering snug anchorages, provides shelter and keeps wave action small, and the lake is known as one of the best sailing waters in North America. The lake is the venue for the annual Race the Cape sailboat regatta, which began in 2013.

Before construction of the Trans-Canada Highway and other roadways, ships and boats plied Bras d'Or Lake carrying coal, gypsum, marble, and agricultural and forestry products from Cape Breton to the outside world, via barge through the St. Peters Canal to destinations along the Atlantic coast of North America. In addition, some products, like the marble quarried into the early 20th century at Marble Mountain, were shipped by lake barges through both lakes to Sydney, NS, for transshipment. Mail boats also ran on scheduled service across the lakes until the 1960s, providing connecting passenger service to the train at Iona, Nova Scotia.

The largest communities located on Bras d'Or Lake are the villages of Baddeck, Eskasoni, Little Bras d'Or, St. Peter's, and Whycocomagh. The remaining lake shorelines are largely rural with some farming, although encroaching urban sprawl from Sydney in the Cape Breton Regional Municipality (CBRM) is approaching East Bay. Many cottage and recreational properties are located in rural areas, largely owned by people from Sydney, Halifax, or out of province.

There is little significant protection of shorelines from development in the form of designated parks or conservation areas. In addition, Nova Scotia does not provide much protection of rural areas from subdivision of property. Despite this, most of the shoreline is undeveloped. Until modern roads were built in the 20th century, coastal freighters and steamships made the rounds of the various lakeside communities, frequently connecting with passenger trains at Iona/Grand Narrows, where the railway crossed the Barra Strait.

Unlike the industrial part of CBRM, where coal, steel, and manufacturing industries flourished in the early 20th century, and the Strait of Canso region, where petroleum, manufacturing, and pulp and paper industries have located since the construction of the Canso Causeway in 1955, Bras d'Or Lake has no major industries within its watershed aside from logging and gypsum extraction.

The Mi'kmaq called the Bras d'Or "Pitupaq" (pronounced roughly "ba-doo-buck"), meaning "long salt water". The lake connected ancient lakeside and even ocean-coastal communities elsewhere on Cape Breton Island by canoe, such as Chapel Island, Eskasoni, Wagmatcook, Waycobah, and Galtoneg, among many others. The lake was also a major source of food for these historic communities, with abundant populations of mussels, crabs, clams, trout, salmon, herring, cod and mackerel.

Historic role as a research center and first Bell Labs

From the summer of 1886, famed inventor and scientist Alexander Graham Bell made his estate and future retirement home at Red Head, a peninsula opposite Baddeck. He named his 640-acre estate Beinn Bhreagh (pronounced "ben vreeah", meaning "Beautiful Mountain" in Scottish Gaelic) and lived there for much of the second half of his life, until his death in 1922. It is because of Bell's connection to this area that Beinn Bhreagh and Baddeck are routinely featured on National Geographic maps showing eastern North America. Bell established a research laboratory, the first Bell Labs, on Beinn Bhreagh, and used Bras d'Or Lake to test man-carrying kites, airplanes and hydrofoil boats as part of his many and varied research activities.

Flight of the Silver Dart

Baddeck Bay, between Baddeck and Beinn Bhreagh, was the site of the first officially recognized heavier-than-air powered flight in the British Empire, which then included Canada. The flight was performed by an airplane designed by Dr. Alexander Graham Bell, F.W. Baldwin, Glenn Curtiss and others in the original Bell Labs on Beinn Bhreagh. The Silver Dart flew off the frozen ice of Baddeck Bay in February 1909. Commemorating this event 100 years later, in February 2009, a replica of the original airplane was flown off the ice in the same location, Baddeck Bay.

After demonstrating their airplanes, which used tricycle landing wheels and other innovations, Bell's laboratory on Beinn Bhreagh designed and built a hydrofoil boat, the HD-4, which set a water speed record of 71 mph (about 62 knots) in 1919. HMCS Bras d'Or, an experimental 1960s-era Canadian Forces hydrofoil, reportedly the fastest warship ever built, was named in honor of the hydrofoils tested long before on Baddeck Bay by Bell.

In 2003, National Geographic Traveler rated Cape Breton Island its second-ranked worldwide destination for sustainable tourism, citing Bras d'Or Lake as having a major influence on this designation. Cape Breton Island tied for second place with New Zealand's South Island and Chile's Torres del Paine, behind the Norwegian fjords. "The Bras d'Or Lakes are my favorite landscape on planet Earth.
Nestled into the rolling hills of Cape Breton, Nova Scotia, their pristine tidal waters reflect centuries of Scottish culture, music, and friendly people." - Gilbert M. Grosvenor, Chairman of the Board, National Geographic Society