Research has shown that consuming whole grains can help prevent type 2 diabetes, but what if you already have diabetes, pre-diabetes or other blood sugar issues? Should you consume grains, and if so, how much?

People with type 1 and type 2 diabetes or pre-diabetes should eat grains

People with diabetes have high blood glucose (blood sugar), either because insulin production is inadequate, because the body's cells don't respond properly to insulin, or both. People with pre-diabetes have elevated blood sugar levels, but not as high as those with diabetes.
- Type 1. The body does not produce insulin.
- Type 2. The body does not produce enough insulin for proper function, or the cells in the body do not react to insulin (insulin resistance).
- Pre-diabetes. The body is becoming resistant to insulin. The fasting glucose level is 100 to 125 mg/dL. Most people with type 2 diabetes initially had pre-diabetes.

Carbohydrates, including the starch in grain, have the biggest impact on blood glucose, because all carbs are eventually broken down into simple sugars by the body. Complex carbs take longer for the body to turn into simple sugars, so they are the better choice for all people, including diabetics. People with blood sugar issues should limit their carb intake to maintain healthy blood sugar levels, but they probably should not eliminate grains from their diet. Almost all research shows the benefit of consuming carbs in a balanced way. For example, here are some studies:
- Two servings of whole grains were associated with a 21-percent decrease in risk of type 2 diabetes.
- When eaten as part of a breakfast with a low glycemic index, whole grains can help control blood sugar all day long, according to a study conducted at Lund University in Sweden.
- A study published in the American Journal of Clinical Nutrition shows that eating whole grains may be associated with a decreased risk of pre-diabetes.
Some people believe that going without grains can be helpful for people with diabetes and blood sugar issues. For example, one study showed that going on a gluten-free Paleo diet brought people back to normal glucose levels. But if you want to try going gluten free to see if it brings your blood sugar levels back to normal, you can still eat gluten-free grains. So, yes, most people with blood sugar issues should eat 100-percent whole grains because they are nutrient rich and full of fiber, which helps control blood sugar. Even better, sprouted grains are the healthiest choice for people with blood sugar issues.

Glycemic index and diet for people with blood sugar issues

Knowing a food has a high or low glycemic index is a good start, but several factors can change a particular food's glycemic index, so it can be hard to measure:
- Other foods eaten at the same time.
- Other components of the food, such as fat or protein.
- How the food is prepared.
- Your body's reaction to the food.

It's best to choose foods closer to nature, or less processed, and to eat balanced meals, rather than relying only on the glycemic index of foods.

Healthy grains to eat

If you have blood sugar issues, you want to choose low-glycemic, complex, whole grains. Some good options:

Amaranth. This gluten-free grain is high in protein and contains more calcium than milk.
Barley. This grain is low glycemic, so it doesn't cause a blood sugar spike. In fact, it has the lowest glycemic index of all grains.
Brown rice. In its natural form, brown rice is very nutritious, with 88 percent of your daily value of magnesium, a cofactor involved in insulin secretion and glucose levels.
Buckwheat. This is actually a seed but is often considered a grain. Research shows that buckwheat can lower blood sugar levels.
Freekeh. This grain is low on the glycemic index and has about four times as much protein as brown rice.
Kamut. This grain is similar to wheat but is higher in vitamins and minerals.
Quinoa.
This gluten-free grain has the highest protein content of any grain. It also contains more calcium than milk.
Millet. This grain provides 26 percent of the daily value of magnesium.
Rye. A study published in the American Journal of Clinical Nutrition found that bread made from wheat triggers a greater insulin response than rye bread does.

Considerations for people with diabetes and other blood sugar issues

Don't eat whole grains alone; treat them as side dishes and pair them with protein and unsaturated fats to help your body deal with the sugar more gradually. Some good proteins include beans, nuts and seeds. If you eat whole grain bread, pair it with nut butter or vegan cheese.

Be careful not to over-consume grains. Two-thirds of a cup of cooked 100-percent whole grains or two slices of 100-percent whole grain bread is generally a safe amount at any one meal or snack.

Eat grains in their least-processed state. Choose whole-kernel bread, brown rice, and whole barley, millet and wheat berries. Traditionally processed grains, such as stone-ground bread or steel-cut oats, are good, too.

The content provided above is for informational purposes only and is not a substitute for medical advice, diagnosis or treatment.
Example sentences
- A plurality of sparger jets protrudes from the wall surface for delivering an oxygen gas flow directly into the interior of the sludge.
- Air is introduced into the slurry through spargers, creating a countercurrent flow of air bubbles.
- Both spargers gave similar ink recovery and fiber loss as a function of bubble surface area flux.

Late 16th century (as a verb in the sense 'sprinkle (water) around'): apparently from Latin spargere 'to sprinkle'. The current senses date from the early 19th century.

Words that rhyme with sparge: barge, charge, enlarge, large, marge, raj, reportage, sarge, Swaraj, taj, undercharge.
It's a tale of bright lights, big colonies: Rural ants go wild in the city. The first systematic lifestyle survey of odorous house ants confirms how much a modest country dweller can change habits in the big city, according to urban entomologist Grzegorz Buczkowski of Purdue University in West Lafayette, Ind. In the forests of Tippecanoe County, Ind., he found odorous house ants, Tapinoma sessile, in colonies with just one queen each. With no more than a hundred ants, each colony could live in a single acorn. Ants from city parks and other seminatural areas formed somewhat bigger colonies, he says. But in West Lafayette and other urban zones nearby, Buczkowski found that nests of odorous house ants connect via bustling ant trails to form supercolonies. Each of the 15 colonies he sampled typically held some 58,000 ants and 238 queens, he reports online February 26 in Biological Invasions. One supercolony across the street from Buczkowski’s office covered more than a city block and held 6 million workers and thousands of queens. “In the forest, these odorous house ants have a pretty tough life,” Buczkowski says. Plenty of other species compete for food and shelter, and ants living in unheated acorns go dormant during the winter. But Buczkowski has documented that urban colonies stay active all year by retreating to warm refuges. “Even when it’s snowing outside, they can be happy inside reproducing,” he says. The ant’s name comes from the odor they release when vexed, a smell that Buczkowski describes as somewhat like a piña colada’s. The ant doesn’t sting or bite or chew up houses. Yet Buczkowski says that pest control workers tell him they’re getting increasing numbers of calls from human city dwellers dismayed by heavy ant traffic in their houses. “Here we have this native ant species that’s becoming a pest,” Buczkowski says. Unlike odorous house ants, most invasive ants live far from their native ranges. 
The infamous Argentine ant, for example, is no big deal in its South American home but forms supercolonies that are disrupting native ecosystems in the United States, Europe and elsewhere. What horrifies home owners offers a great opportunity for biologists, according to ecologist Sean Menke of North Carolina State University in Raleigh. Because the whole business of going from country cousin to world-beater takes place in the same geographical region, researchers can narrow down the urban effect on lifestyle changes. Even though odorous house ants are one of the most widespread native ants in North America, ranging from coast to coast, and have been annoying home owners to some extent for decades, Menke says entomologists have started studying their basic biology and urbanization only in the last few years. “Most scientists became scientists to get out of the urban environment,” he says. Urbanization of odorous house ants occurred independently multiple times in different locations, Menke and his colleagues reported February 12 in PLoS ONE. In an analysis of 49 samples of odorous house ants from around the country, the researchers found large genetic variation. And rural ants proved the closest relatives to the urban dwellers in their own general region. Menke points out that even in natural settings, odorous house ants have proved highly adaptable, surviving in palm oases in Baja as well as high in Colorado mountains. He says he has both seen and heard of odorous house ants in rural areas that have multiple queens, which could hint that the species has a smoldering invasive capacity. “Many species are beginning to succeed and spread in urban environments,” Menke says. “We don’t know if it is because they are being forced there due to encroachment by people into their native habitats or because the species have altered their lifestyles.”
By Kirsty McHugh, OUP UK It’s Easter this weekend, so I thought today I would look at how the Oxford Dictionary of the Bible, by W.R.F. Browning, defines some of the key concepts of this important time in the Christian calendar. Read on to find out what the Dictionary says about Easter, Good Friday, and resurrection, and I hope those of you who, like me, have a nice long weekend off work have a fun and relaxing time.

Easter

The word is used by [the Authorized Version of the Bible] for the annual commemoration of Jesus’ resurrection at Acts 12:4; modern versions prefer ‘Passover’. Easter soon became the chief festival of the early Church, and by the 3rd century it was preceded by a night vigil. At dawn those who had been prepared were baptized; all received communion. There was no observance of Good Friday as a separate memorial day until the 4th century. For some time Easter, as the Christian Passover, was observed on 14 Nisan, the date of the Jewish festival, whether it was a weekday or a Sunday. This continued for some time in Asia Minor, where the group known as Quartodecimans observed Easter when Jews celebrated Passover on 14 Nisan whatever the day of the week; but in Rome and elsewhere the feast was kept on the following Sunday, and this was the date settled by the Council of Nicaea (325 CE).

Good Friday

The day when Christians each year commemorate the crucifixion of Jesus. There was for a long time no special annual observance of the crucifixion of Jesus except that from the 2nd century every Friday was a day of fasting. The death and resurrection were commemorated in a single Paschal festival over Saturday and Sunday. But in the 4th century there was a development of Holy Week at Jerusalem in which the historical events of the passion were rehearsed and then Good Friday became a distinct occasion for recalling the Crucifixion, and Easter Sunday for celebrating the Resurrection. Only the latter allowed the adjective ‘good’ to be applied to the Friday.
Resurrection

There was a variety of beliefs about life after death held both by people generally and by intellectuals in the biblical eras. The ancient Hebrews rejected both the Canaanite Baal worship, which included in the cult the annual dying and rising again of the god, and also the Greek notion of the inherent immortality of the ‘soul’. But the New Testament concept of resurrection has only the barest hints in the Old Testament. The idea of a hopeless shadowy existence in sheol (e.g. Ps. 88: 3–5), similar to the Greek conception of Hades in Homer, is of a state of misery where the dead survived as feeble shades. In later literature there is a richer conception of life after death. Job 19: 25–7 is searching for a more satisfactory view which would conform to the Hebrew sense that the human body, part of God’s creation, was ‘very good’ (Gen. 1: 31), and therefore life without ‘body’ was incomplete and unsatisfying. Moreover, while existence in sheol might be a fair reward for the wicked (Ps. 49: 14), surely the faithful deserved something better? So there is a promise of resurrection for Israel as a nation (Isa. 26: 19); Yahweh’s loyalists who have suffered will rise to an appropriate reward (Dan. 12: 2) and apostates will endure shame and everlasting contempt. In 2 Maccabees there is hope for the resurrection of those who suffer (7: 9) and the righteous will be vindicated (1 Enoch 104: 2–6), and in the time of Jesus this was also the view of the Pharisees (but not the Sadducees) and of Jesus himself (Mark 12: 18–27). The resurrection of believers is part of Paul’s hope for all believers at the end of history. He anticipates a complete transformation of the whole human person (1 Cor. 15: 53–5). The resurrection of the body is part of Christian belief about life after death. This has been superbly, but literally, depicted in great paintings, as by Stanley Spencer (1925) where the dead are emerging out of the churchyard at Cookham.
The essence of the belief is that what has been of value on earth in the bodily, historical life is not wholly left behind but is transfigured.
The following are some common types of RAM:
- SRAM: Static random access memory uses multiple transistors, typically four to six, for each memory cell but doesn't have a capacitor in each cell. It is used primarily for cache.
- DRAM: Dynamic random access memory has memory cells with a paired transistor and capacitor requiring constant refreshing.
- FPM DRAM: Fast page mode dynamic random access memory was the original form of DRAM. It waits through the entire process of locating a bit of data by column and row and then reading the bit before it starts on the next bit. Maximum transfer rate to L2 cache is approximately 176 MBps.
- EDO DRAM: Extended data-out dynamic random access memory does not wait for all of the processing of the first bit before continuing to the next one. As soon as the address of the first bit is located, EDO DRAM begins looking for the next bit. It is about five percent faster than FPM. Maximum transfer rate to L2 cache is approximately 264 MBps.
- SDRAM: Synchronous dynamic random access memory takes advantage of the burst mode concept to greatly improve performance. It does this by staying on the row containing the requested bit and moving rapidly through the columns, reading each bit as it goes. The idea is that most of the time the data needed by the CPU will be in sequence. SDRAM is about five percent faster than EDO RAM and is the most common form in desktops today. Maximum transfer rate to L2 cache is approximately 528 MBps.
- DDR SDRAM: Double data rate synchronous dynamic RAM is just like SDRAM except that it has higher bandwidth, meaning greater speed. Maximum transfer rate to L2 cache is approximately 1,064 MBps (for DDR SDRAM 133 MHz).
- RDRAM: Rambus dynamic random access memory is a radical departure from the previous DRAM architecture. Designed by Rambus, RDRAM uses a Rambus in-line memory module (RIMM), which is similar in size and pin configuration to a standard DIMM.
What makes RDRAM so different is its use of a special high-speed data bus called the Rambus channel. RDRAM memory chips work in parallel to achieve a data rate of 800 MHz, or 1,600 MBps. Since they operate at such high speeds, they generate much more heat than other types of chips. To help dissipate the excess heat, Rambus chips are fitted with a heat spreader, which looks like a long thin wafer. Just like there are smaller versions of DIMMs, there are also SO-RIMMs, designed for notebook computers.
- Credit Card Memory: Credit card memory is a proprietary self-contained DRAM memory module that plugs into a special slot for use in notebook computers.
- PCMCIA Memory Card: Another self-contained DRAM module for notebooks, cards of this type are not proprietary and should work with any notebook computer whose system bus matches the memory card's configuration.
- CMOS RAM: CMOS RAM is a term for the small amount of memory used by your computer and some other devices to remember things like hard disk settings -- see Why does my computer need a battery? for details. This memory uses a small battery to provide it with the power it needs to maintain the memory contents.
- VRAM: Video RAM, also known as multiport dynamic random access memory (MPDRAM), is a type of RAM used specifically for video adapters or 3-D accelerators. The "multiport" part comes from the fact that VRAM normally has two independent access ports instead of one, allowing the CPU and graphics processor to access the RAM simultaneously. VRAM is located on the graphics card and comes in a variety of formats, many of which are proprietary. The amount of VRAM is a determining factor in the resolution and color depth of the display. VRAM is also used to hold graphics-specific information such as 3-D geometry data and texture maps. True multiport VRAM tends to be expensive, so today, many graphics cards use SGRAM (synchronous graphics RAM) instead. Performance is nearly the same, but SGRAM is cheaper.
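The transfer-rate figures above follow from a simple relation: peak bandwidth equals the number of data transfers per second times the width of the data bus. A minimal sketch of that arithmetic, where the 64-bit (8-byte) bus assumed for SDRAM/DDR and the 16-bit (2-byte) Rambus channel are illustrative assumptions, not figures taken from this article:

```python
# Peak bandwidth (MBps) = transfers per second (in millions) x bus width (bytes).
def peak_mbps(mega_transfers_per_sec, bus_width_bytes):
    """Theoretical peak transfer rate in megabytes per second."""
    return mega_transfers_per_sec * bus_width_bytes

# Reproducing the figures quoted in the text:
print(peak_mbps(66, 8))   # SDRAM, 66 MT/s on an assumed 64-bit bus  -> 528 MBps
print(peak_mbps(133, 8))  # DDR SDRAM figure, 133 MT/s               -> 1064 MBps
print(peak_mbps(800, 2))  # RDRAM, 800 MHz on a 2-byte Rambus channel -> 1600 MBps
```

Real-world throughput is lower, since refresh cycles, row activation and bus contention all eat into the theoretical peak.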
For a comprehensive examination of RAM types, check out the Kingston Technology Ultimate Memory Guide.
If you have bought a laptop in the last few months, it probably has a hard drive that can hold about a terabyte of information. The cost and capacity of hard drives have improved along a curve that follows Moore's Law for the last several decades: hard drives are holding more and costing less. But hard drive storage needs a constant trickle of electricity to work, and storing data on them for years can get expensive. Hard drives are in computers, and computers can break down. Our station looked at the limitations of hard drives and installed a tape system called LTO; it archives the hundreds of hours of video we produce every week onto a thin magnetic tape. One cartridge can hold about 70 hours of high-definition video. There is a drawback to tape storage, however: it has a shelf life. Gary Baker is our engineer who works on the LTO system. "It can last up to 30 years in the right environmental conditions if it's maintained correctly." Our station only plans to store each tape for about five years before either copying all the information onto a new tape (the next-generation LTO tape coming out will hold ten times more information) or moving it to a new technology. Scientists in England are working on an even better long-term storage solution using the code of biology: DNA. Strands of DNA are Mother Nature's information storage device. A human DNA strand is about 2.5 nanometers wide (a human hair is about 20,000 nanometers wide) and around six feet long. It contains the entire blueprint to assemble the trillions of cells (parts) that make up a human. In a very simple explanation, the scientists are taking binary code (1s and 0s) and translating it into the four letters used in DNA (A, C, G, T). The sequence is then used to make a synthetic DNA strand. In theory, this strand could last for millions of years in proper storage. The greater advantage would be its size.
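The translation step described above can be sketched in a few lines. This is a toy example assuming the simplest possible scheme, two bits per base; the specific pairing (00→A, 01→C, 10→G, 11→T) is an illustrative choice, not the encoding the researchers actually use, which is more elaborate to avoid error-prone repeated bases:

```python
# Hypothetical 2-bits-per-base mapping (illustration only).
BASE_FOR = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR = {base: bits for bits, base in BASE_FOR.items()}

def encode(bits: str) -> str:
    """Translate a binary string (even length) into a DNA base sequence."""
    return "".join(BASE_FOR[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Translate a DNA base sequence back into the original binary string."""
    return "".join(BITS_FOR[base] for base in strand)

print(encode("0110110000"))  # -> CGTAA
print(decode("CGTAA"))       # -> 0110110000
```

At two bits per base, the density claim in the article follows directly: every byte of data needs only four bases, each a fraction of a nanometer long.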
DNA is very small; your own strand fits inside a small part of almost every cell, all curled up in a little ball. A single strand of DNA holds about 750 megabytes of information. A rope that is six feet long and one inch thick would equal the DNA strands of about 2.5 million people. Stored as DNA, the same six-foot rope could hold about five billion books. To put that in perspective, if you could store binary code in DNA strands, you could store every book ever published (Google estimates that to be around 26 million books) in only about two inches of the same rope. Right now the cost would make this idea unfeasible, but the cost of replicating and sequencing DNA continues to drop. This would also be only a long-term storage solution: creating and storing the information in a DNA strand would require lots of time, and so would translating it back. Don't expect to ever see this idea used in cell phones or your home computer. But consider the possibilities. We could store all of the books in the Library of Congress in a strand of DNA that would fit in a coffee cup. Properly stored, that strand could last a few million years. Everything mankind has ever written in books, written in a code that wrote man. Jeff Ray is the Environmental/Science Reporter at CBS 11 News. He can be reached at email@example.com
The revival of religiosity all over Europe played an important role in bringing people to populism and nationalism. In France, Chateaubriand provided the opening shots of Catholic revivalism as he opposed the Enlightenment's materialism with the "mystery of life," the human need for redemption. In Germany, Schleiermacher promoted pietism by claiming that religion was not the institution, but a mystical piety and sentiment with Christ as the mediating figure raising the human consciousness above the mundane to God's level. In England, John Wesley's Methodism split with the Anglican church because of its emphasis on the salvation of the masses as a key to moral reform, which Wesley saw as the answer to the social problems of the day. All of these were united by a search for something to believe in because of the anxiety of the time.

Chateaubriand's beginning brought about two Catholic revivals in France: first, a conservative revival led by Joseph de Maistre, which defended ultramontanism, the doctrine of the supremacy of the Pope in the church, and second, at the same time, a populist revival led by Félicité de Lamennais, an excommunicated priest. This religious populism opposed ultramontanism and emphasized a church community dependent upon all of the people, not just the elite. Furthermore, it stressed that church authority should come from the bottom up and that the church should alleviate suffering, not merely accept it, both principles that gave the masses strength.

Nationalism became the secular religion of the masses: that something bigger than themselves that gave their lives meaning. It was a religion spawned of a fear of losing this meaning. Fichte began the development of nationalism by stating that people have the ethical duty to further their nation. Herder proposed an organic nationalism, a romantic vision of individual communities rejecting the Industrial Revolution's model communities, in which people acquired their meaning from the community or nation. The brothers Grimm collected German folklore to "gather the Teutonic spirit" and show that these tales provide the common values necessary for the historical survival of a nation. Friedrich Jahn, a Lutheran minister, a professor at the University of Berlin and the "father of gymnastics," introduced the Volkstum, a racial nation that draws on the essence of a people that was lost in the Industrial Revolution. Adam Mueller went a step further by positing the state as a bigger totality than the government institution. This paternalistic vision of an aristocracy concerned with social orders had a dark side in that the opposite force of modernity was represented by the Jews, who were said to be eating away at the state. In German nationalism, anti-Semitism began to raise its ugly head.

In France the populist and nationalist picture was not so grim. The historian Jules Michelet fused nationalism and populism by positing the people as a mystical unity who are the driving force of history in which the divinity finds its purpose. For Michelet, in history, that representation of the struggle between spirit and matter, France has a special place because the French became a people through equality, liberty, and fraternity. Because of this, the French people can never be wrong. It is important to remember that Michelet's ideas are not socialism or rational politics; his populism always minimizes, or even masks, social class differences.

Nationalism turned in the second half of the nineteenth century, and the nationalist sentiment was altered into an elitist and conservative doctrine. The power-state theorist and multi-volume historian Heinrich von Treitschke, in his Politics, talked about top-down nationalism in which the state is the creator of the nation, not a result thereof. His state's power fashions political unity because, as he asserts, the national unity was always in place. For von Treitschke, the state is artificially constructed by the elite, who know that power counts but who also form myths such as racism for the comfort of the nationalistic masses. Von Treitschke's nationalism had a dark side in his eternal struggle of nations, the weakness of confederated states and war as social hygiene, culminating in the thought that all nations are egoistic, but their struggles embody morality and embrace progress. Such notions would later be propagated by rather ugly methods by the likes of Hitler, Stalin and, more recently, Slobodan Milosevic.
Healthy Weight: Information for Municipalities

Municipalities can help residents eat smart and move more. Municipal governments can set policies that improve the health of residents. Decisions about zoning, transportation, land use and community design influence access to healthy foods and opportunities to engage in physical activity, and should be made with the health of residents in mind.

What you should do

Build community capacity to make policy and environmental changes. Assess your community's food and activity environment and develop an action plan that makes it easier for residents to eat smart and move more. Implement an action plan using:
- Joint Use Agreements to encourage schools and communities to share physical activity resources, such as gymnasiums available to the community after hours and on weekends.
- Safe Routes to School to encourage children to safely walk and bike to school.
- Plans for safe and welcoming parks and green spaces to improve physical and psychological health, strengthen the community, and make neighborhoods healthier places to live, work, and play.
- Crime Prevention Through Environmental Design principles to help improve neighborhood safety.
- Urban Agriculture to encourage urban dwellers to develop gardens that grow fresh fruits and vegetables.
- A Healthy Corner Store Initiative to help increase access to healthy foods in the neighborhood.
- A strategy to encourage full-service supermarkets to locate in underserved neighborhoods.

Ensure your community's Comprehensive Plan considers healthy eating and active living. Consider health needs in the comprehensive planning process. Use these tools to help you:
- Health Impact Assessments evaluate the health impact of projects before they are implemented.
- Design for Health's website includes tools to integrate health into planning and environmental design, including checklists, presentations, example comprehensive plans, and case studies.
- Complete Parks Playbook to increase community outdoor activities and engagement.

Implement a Complete Streets policy to ensure that the design and operation of roadways keep all users in mind: bicyclists, public transportation vehicles, wheelchair users, riders and pedestrians of all ages and abilities.

Implement policies to limit unhealthy foods in communities through zoning.
Bolivia has suffered chronic political instability The people of South America's poorest nation go to the polls on Sunday 18 December to elect a new president, parliament and departmental prefects. Eight candidates are standing for president, but only Evo Morales, of the Movement Toward Socialism (Mas), and former president Jorge "Tuto" Quiroga are thought to have a real chance of winning. Mr Morales, the coca-growers' leader and an Aymara Indian, has a slight lead over his conservative rival. If elected he would become Bolivia's first indigenous president. Q: What is at stake? This is a key election for Bolivia. Despite having the second largest natural gas reserves in the region, the country is wracked by poverty. Nearly 70% of the population live below the poverty line and 14.4% live on less than $1 a day. Bolivia has had five presidents in four years and faces deep economic, ethnic and regional divisions. The political system is still dominated by a small, wealthy elite, with the majority indigenous population largely excluded from power. Since the last presidential election, disputes over the conditions under which foreign oil companies operate, the export of natural gas and the cultivation of coca have led to outbreaks of violence that have paralysed the country and polarised its people. Q: Who will be voting? About 3.7 million people are registered to vote, out of a total population of about 9.1 million, according to current United Nations population figures. About 30% of the electorate are Quechua-speaking and 25% are Aymara. Voters will elect the president, 27 senators, 130 deputies and nine departmental prefects. Voting is compulsory for all Bolivians over the age of 18. Bolivians abroad will not be able to take part. The polls will open at 0800 local time and will close at 1600. Q: Who are the main candidates? 
Evo Morales, 46, has been criticised by the USA for his closeness to the left-wing governments of Fidel Castro in Cuba and President Hugo Chavez in Venezuela. He has also come under fire for his plans to legalise the production and consumption of coca leaves, a traditional part of Indian life. He has said he will replace the "zero coca" policy with a "zero drug-trafficking" policy. An avowed "anti-neoliberal", he has pledged to increase state control over the country's oil and gas reserves by buying back foreign-owned refineries. He has also pledged to change the constitution through a Constituent Assembly to improve the rights of Bolivia's indigenous majority. In a nod to them he has said his government will be guided by the Aymara principles of "ama sua" (do not steal), "ama llulla" (do not lie), "ama kella" (do not be lazy) and "ama llunku" (do not be servile). The other main candidate, Jorge Quiroga, 45, is a US-educated engineer who served as president from July 2001 to August 2002 after President Hugo Banzer resigned due to ill health. A former consultant for the World Bank, he advocates raising the export price for Bolivia's gas and channelling the funds into social programmes aimed at the poor. His political campaign has been fought on a programme of creating more jobs by building up the productive sector and broadening the scope of free trade agreements, making health care more accessible, and implementing a "zero coca" policy in the coca-growing regions. Q: Who are their supporters? Some observers have described the presidential race between Mr Morales and Mr Quiroga as a wider face-off between the "cambas", as people from Santa Cruz are known, and the "collas", the Indians of La Paz and the highlands. Mr Morales's support comes largely from the western Cochabamba and Oruro departments and the township of El Alto above La Paz, where many rural Aymara come looking for work. 
Mr Quiroga's support is strongest in the wealthy eastern department of Santa Cruz, where most of the gas reserves and large-scale modern agribusinesses are found. He also has a strong following in the provinces of Beni, Chuquisaca, Pando, Potosi and Tarija. A large portion of the electorate is split between these two regions, with about 22% of registered voters living in Santa Cruz and about the same number living in the city of La Paz and the nearby town of El Alto. Q: Who is likely to win? According to the latest opinion polls, Mr Morales has a five point lead over Mr Quiroga but neither candidate is likely to obtain the 51% absolute majority needed to win outright. Q: What happens if there is no clear winner? Under Bolivia's constitution, if no candidate wins an absolute majority, the Congress elected on Sunday, which will be sworn in on 16 January, will choose between the two leading contenders. Interim President Eduardo Rodriguez's 180-day term ends on 23 January. In the last presidential elections in June 2002, Mr Morales came a close second to Gonzalo Sanchez de Lozada in the election but was roundly defeated in the Congressional vote by 84 to 43. Some observers predict that Mr Morales will win the election but then fail to secure enough votes in Congress to become president. If that happens there would be little chance of an end to Bolivia's political instability.
Time visuals in history textbooks: some pedagogic issues This chapter explores in detail the pedagogic role of time visuals in history textbooks (see also Coffin and Derewianka, 2008). It first offers a set of categories to capture how time is construed in different ways in school-based historical discourse, including through visual resources in contemporary textbooks. It then discusses how current use of visual resources may (or may not) facilitate students' understanding of time and, more generally, their historical knowledge and understanding.
Spider Mite Problems In Roses Rose leaves stippled and dirty? Could be spider mites. Learn what they are, how they manifest, and what you can do about damage and infestation. What is it? These mites, relatives of spiders, are a major pest of many rose gardens and greenhouse plants. They cause damage through their sucking action, which removes sap from the undersides of the leaves, causing the green leaf pigment to disappear gradually. What does it look like? Roses will develop bronze-colored leaves that look stippled and dirty. The undersides of the leaves or new growth may have a silken webbing adhered to them. As the mites feed, pigment is drained from the leaves, and they gradually lighten as chlorophyll disappears. Often the leaves will drop off. How does it manifest? Mites are active throughout the growing season, and the more mites you have, the more damage your roses may endure. Severely infested plants produce few flowers and become increasingly lighter and more stippled-looking as the leaves brown and are sucked dry of sap. Most mites favor weather above 70°F with a dry climate. By midsummer the mites multiply to tremendous numbers if not treated. What can you do about it? When you notice the first signs of spider mite damage, cover the undersides of the leaves thoroughly with a miticide that contains hexakis. This treatment should be repeated twice more at intervals of 7 to 10 days. Also repeat the procedure if a new mite infestation occurs.
When Did Modern Humans Arrive in South Asia? Tuesday, June 11, 2013 HUDDERSFIELD, ENGLAND—Martin Richards of the University of Huddersfield argues that modern humans did not reach southern Asia before the super-eruption of the Mount Toba volcano in Sumatra, some 74,000 years ago. Stone tools, found in India below a layer of ash from the Toba volcano, had suggested that people had possibly left Africa and arrived in India as early as 120,000 years ago. Based upon evidence from modern mitochondrial DNA collected in India and other research, Richards says that modern humans arrived in India no earlier than 60,000 years ago. "There were people in India before the Toba eruption, because there are stone tools there, but they could have been Neanderthals—or some other pre-modern population," Richards explains.
U.S. DEPARTMENT OF THE INTERIOR, BUREAU OF LAND MANAGEMENT National Historic Trails Interpretive Center With the completion of the Union Pacific Railroad across southern Wyoming in 1869, a series of stage and freight wagon roads were developed to serve fledgling communities to the north. Two of these roads, the Bryan-South Pass Road and the Point of Rocks-South Pass City Road, were both established to serve the boom town of South Pass City, which sprang to life following gold discoveries along the upper Sweetwater region in 1867. Soon a new stage road, the Rawlins-Fort Washakie Stage Road, was developed to serve the headquarters of the Wind River Indian Reservation. The stage roads are located mostly on BLM-managed public lands but are not marked or well mapped. Portions of Highway 287 from Rawlins to Lander parallel the Fort Washakie Road. Last updated: 12-08-2010
Icelandic product design graduate Búi Bjarmar Aðalsteinsson has created a Fly Factory that breeds insect larvae for human consumption (+ slideshow). Aðalsteinsson has produced pate and dessert using larvae bred in the factory, to explore how to make insects palatable to western consumers and alleviate potential food shortages in future. "They taste like chicken," he says. "There is no distinct taste. It depends on how you spice them and how you prepare them." His favourite insect recipe is "the coconut-chocolate larvae dessert I just tried out," he adds. "The kids love it." Aðalsteinsson was inspired by Edible Insects, a 2013 report by the Food and Agriculture Organisation of the United Nations that investigates how insects could help alleviate shortages of food in future. "We need to find new ways of growing food," says the report. "Insects offer a significant opportunity to merge traditional knowledge and modern science in both developed and developing countries." The conceptual micro-factory feeds insects on food waste and recycles the nutrients they excrete as fertiliser. "The larvae are given organic waste and become rich in fat and protein, which then can be harvested for human consumption," says Aðalsteinsson, who designed the factory as his graduation project at the Icelandic Academy of the Arts. "The factory was designed so that it produces no waste and to make use of materials that would otherwise be disregarded and thrown away." The steel factory is intended to be used by restaurants and the food processing industry rather than for home farming, since Aðalsteinsson does not believe people will want to breed insects at home. Aðalsteinsson is not the only designer exploring how to encourage humans to eat insects, which could replace meat as a more sustainable source of protein.
Last year, Vienna-based designer Katharina Unger proposed a domestic gadget to breed insect larvae for cooking, while earlier this year Irish graphic designer Lara Hanlon developed a digital resource encouraging people to eat insects. The Fly Factory features a breeding tank where insect larvae are fattened while the nutrients they excrete are harvested. "The larvae also produce a clean and nutrient-rich soil, which is subsequently drained into compost canisters and then used for spice and herb production," says Aðalsteinsson. "The heat generated by the refrigerator that is used to store larvae and other ingredients will be employed as a thermal source to maintain the humidity and temperature of the flies' environment." The factory breeds larvae of the black soldier fly, an insect that is regarded as more sanitary than other, disease-carrying flies since the adult fly has no mouth parts and does not seek food during its short life. Instead, it only seeks a mate. Their clean reputation means the black solider fly's larvae are used to sanitise compost and as animal feed, but Aðalsteinsson claims they could form the basis of a human diet. "Insects can be an important resource in the search for eco-friendly methods of farming and producing protein-rich foods," says Aðalsteinsson, 25. "Larvae are similar to meat when it comes to protein, fat and nutrients," he adds. "But larvae need 5 to 10 times less feed to produce the same amount of growth. Larvae, and insects in general, are also very resourceful when it comes to feeding, as they are able to digest almost any biomass available in the natural environment." To test his design, Aðalsteinsson used larvae to produce a pâté and a pudding (see slideshow above). The project was developed under the supervision of head of product design Garðar Eyjólfsson and adjunct professor Thomas Pausz at the Icelandic Academy of the Arts. See our special feature about weird food. 
Here's an edited transcript of the interview with Aðalsteinsson: Marcus Fairs: Why did you decide to do this project? Búi Bjarmar Aðalsteinsson: The inspiration came from an article published in a local newspaper. It said that if more people ate insects it would reduce hunger and pollution as well as provide better nutrition. I found out there was a proposal from the Food and Agriculture Organization of the United Nations. After reading its content I got excited about the necessity of finding more sustainable food sources. The biggest factor that makes insects super interesting is their ability to transform almost any feed source into a very nutritious flesh. Marcus Fairs: What has been the reaction to it so far? Búi Bjarmar Aðalsteinsson: Some are very excited to taste it but others are not. Marcus Fairs: What do the larvae taste like to eat? Búi Bjarmar Aðalsteinsson: They taste like chicken. There is no distinct taste. It depends on how you spice them and how you prepare them. Marcus Fairs: What is your favourite fly larvae recipe? Búi Bjarmar Aðalsteinsson: It's the coconut-chocolate larvae dessert I just tried out. The kids love it. Marcus Fairs: How important could insects be for feeding humanity in future? Búi Bjarmar Aðalsteinsson: Insects are not only important but rather a necessity if we are to keep on eating and demanding protein-rich foods. As it turns out insects have very special capabilities that make them suitable for farming purposes. First of all they eat almost anything organic and have some natural instincts that make it easy to harvest without much work. Furthermore, they need 5 to 10 times less feed than other meat production. Marcus Fairs: People in the west don't like the thought of eating insects. How can this prejudice be overcome? Búi Bjarmar Aðalsteinsson: The best way to introduce new kinds of food is to investigate local food culture. I started to come up with viable solutions to the integration of larvae production and Icelandic food culture.
First I went to a local food specialist. We discussed how to influence food culture and how to change Icelanders' views on insect eating. To make a long story short, we decided upon two possible methods: one being some sort of fancy restaurant event and the other to produce some sort of processed foods in line with the western food industry. For a long time I could not choose between the two but finally I made some samples of processed larvae products. Two of those products made it to the final show, one being a larvae pâté and the other a larvae pudding. Marcus Fairs: Is your project a serious proposal, or is it intended to trigger a discussion about this issue? Búi Bjarmar Aðalsteinsson: My proposal is serious. We have to drastically change the way we eat and produce food so that we can live harmoniously with the planet. I think insects are one step in the right direction and that step needs to be taken immediately. Marcus Fairs: Why does your fly factory have an industrial aesthetic, rather than a domestic aesthetic? Búi Bjarmar Aðalsteinsson: I think food is highly dependent on trust and when you make new types of food you have to make people believe in it. My main inspiration is the industrial kitchen. They just look so robust and you really believe they will keep on working for a long time. Another aspect of the aesthetics and the size is that I don't think that a majority of people are willing to grow their own food. Western society is dependent upon processed food and I am not about to change that with one project. So rather than fighting an existing culture I chose to embrace it and produce processed insect food. I envision my fly factory to be used in industrial settings inside restaurants or within the food industry. I think that insects do not need to be friendly; rather they need to taste good and be affordable.
How is tonsillitis diagnosed? Your health-care professional will conduct a physical examination, with special attention to the throat and neck area. Tonsillitis caused by viruses may look very similar to bacterial tonsillitis; therefore, diagnostic testing (for example, a throat culture or rapid strep test) may be required to differentiate between the two potential causes. Your health-care professional may order a rapid strep test, which requires a swab from the back of the throat area. Results of the rapid strep test are generally available within 30 minutes. Sometimes a strep culture is sent to the lab for confirmation of strep infection, though this result may require 24-48 hours. In rare instances of severe or complicated tonsillitis, blood tests and/or imaging tests may be ordered. Furthermore, if other conditions that cause a sore throat are suspected, additional testing may be necessary. Medically Reviewed by a Doctor on 12/30/2015
I am working on a paper with the following directions: As opposed to persuasion, the objective for this essay is to research some element of a literary work or body of work, or to analyze some pattern of development within an author's work or between two authors or a group of authors. Patterns of development might consider the regional influence of a series of writers, such as the American Romantics in New England, the Harlem Renaissance, Modernism or Postmodernism. If I understand correctly, analyzing two poets who wrote during the same movement would work, such as Emily Dickinson and Walt Whitman during the modern American movement. Do I need to pick a poem from each poet? If so, what are two good ones? The paper has to be 7 pages. 1 Answer If you choose Emily Dickinson, she has many poems on the theme of death; it is a clear pattern in her writing. Some of her poems that express the theme of death in a unique way are as follows: "Because I Could Not Stop For Death" "I Heard a Fly Buzz--When I Died" "I Felt a Funeral in my Brain" In the first poem above, Emily Dickinson personifies Death as a friend. She takes a terrifying process and makes it seem less fearful. It is as if she is on a date with Death. She personifies Death as a gentleman who is taking her to her final destination. She is not afraid of Death. She treats him as a "courtly beau." Although Whitman writes about Death as a theme in his poetry, he celebrates life and death in Leaves of Grass. Truly, Whitman celebrates life and the human body and sexuality: Its celebration of the human body and sexuality in frank and explicit language, particularly in the original long poem "Song of Myself." No doubt, Whitman and Dickinson are pioneers of their own unique styles of poetry writing. Both poets used unconventional ways of writing and often did not follow grammatical standards. No doubt, both Whitman and Dickinson focused on their individual lives.
Both focus on death as well. In their own individual and different ways of writing about death, both Whitman and Dickinson seem to feel an urgency to write about death repeatedly. Truly, neither Whitman nor Dickinson was intimidated by the theme of death. In the last stanza of Whitman's "Song of Myself," he writes about his impending death. On her deathbed, Emily writes to her cousins: "Little Cousins, / Called back - / Emily." Truly, Whitman and Dickinson are alike in the subjects of their poetry. While addressing the larger issues in life, both poets write about common, ordinary events, people, and objects in life.
Prior to the fall of the Deutsche Demokratische Republik or DDR in 1989, Berlin was a divided city, separated by a wall. The East Germans, as they were once called, said the wall was to keep the Westerners from flocking into the Eastern Utopia, while the West Germans said it was to keep the East Berliners from flocking to the West (The FREE World). Who was right? It was all a matter of perspective! Today, Israel and the West Bank are divided by an "Israeli Wall" that is almost 500 miles long - comparable to the distance between San Francisco and Los Angeles. But by far the longest wall was the Great Wall of China, which spanned over 13,000 miles. Whether the intent was to keep people in or keep people out, gated communities and gated prisons are no different from one another, except in perspective! The only difference is that those in gated communities are prisoners by choice, while those incarcerated in prison compounds are... involuntary prisoners! Are walls gates of security or gates of entrapment? In the State of Israel, the circuitous wall separating Palestine and Israel makes life difficult for Palestinians who need to get from their agricultural fields to their homes. But why is a circuitous walk through the streets of Bern, Switzerland or Prague, Czech Republic rather relaxing, whereas a trek from a Palestinian farm to one's home is so stressful? Perhaps it has to do with one's surroundings or lack of surroundings. If we could focus on the journey, perhaps life would be rewarding. But if we focused only on the destination, the journey becomes just a passing of time. Life itself is a journey! Humans who demoralize others call each other dogs, but dogs are social creatures. We both require physical as well as mental stimulation and exercise. A wall is a separation barrier, whereas a gate is a funnel point of ingress/egress. When that point is regulated, it gives the regulator a feeling of power, and the regulated a feeling of helplessness.
Grocery stores and airline counters are funnel points of ingress/egress - they are checkpoints. How did the term "gated community" become popularized? "Neighborhoods" only imply proximity, whereas "communities" are social relationships, and "gated communities" are exclusionary relationships. Senior citizens often purchased a home within a gated compound for security, and because they were no longer mobile, they did not leave the compound. Hence, their only communication was with other senior citizens living in that same compound. But what does the term "bedroom community" mean? If we work eight hours in a day, plus an hour for lunch and another two hours for commute, and we spend eight hours sleeping, that means we have five hours left to communicate with our neighbors every day. But is this realistic? Those who have families must shop for groceries, cook meals, wash laundry, help their children with homework, attend after-school activities, etc... The chances that neighbors have time to communicate with one another are very slim in today's society. So a bedroom community is an oxymoron. How is it that people who live next door to one another do not know each other, yet the same people can use their mobile phones to chat with someone halfway around the world? It's all a matter of perspective!
For the first few generations, the Romanovs were happy to maintain the status quo in Russia. They continued to centralize power, but they did very little to bring Russia up to speed with the rapid changes in economic and political life that were taking place elsewhere in Europe. Peter the Great decided to change all of that. Peter the Great With Sophia in control, Peter was sent back to Kolomenskoe. It was soon noticed that he possessed a penchant for war games, including especially military drill and siegecraft. He became acquainted with a small community of European soldiers, from whom he learned Western European tactics and strategy. Remarkably, neither Sophia nor the Kremlin Guard found this suggestive. In 1689, just as Peter was to come of age, Sophia attempted another coup--this time, however, she was defeated and confined to Novodevichiy Convent. Six years later Ivan died, leaving Peter in sole possession of the throne. Rather than taking up residence and rule in Moscow, his response was to embark on a Grand Tour of Europe. He spent about two years there, not only meeting monarchs and conducting diplomacy but also travelling incognito and even working as a ship's carpenter in Holland. He amassed a considerable body of knowledge on western European industrial techniques and state administration, and became determined to modernize the Russian state and to westernize its society. In 1698, still on tour, Peter received news of yet another rebellion by the Kremlin Guard, instigated by Sophia despite her confinement to Novodevichiy. He returned without any sense of humor, decisively defeating the guard with his own European-drilled units, ordering a mass execution of the surviving rebels, and then hanging the bodies outside Sophia's convent window. She apparently went mad. The following day Peter began his program to recreate Russia in the image of Western Europe by personally clipping off the beards of his nobles.
Peter's return to Russia and assumption of personal rule hit the country like a hurricane. He banned traditional Muscovite dress for all men, introduced military conscription, established technical schools, replaced the church patriarchy with a holy synod answerable to himself, simplified the alphabet, tried to improve the manners of the court, changed the calendar, changed his title from Tsar to Emperor, and introduced a hundred other reforms, restrictions, and novelties (all of which convinced the conservative clergy that he was the antichrist). In 1703 he embarked on the most dramatic of his reforms--the decision to transfer the capital from Moscow to a new city to be built from scratch on the Gulf of Finland. Over the next nine years, at tremendous human and material cost, St. Petersburg was created. Peter generated considerable opposition during his reign, not only from the conservative clergy but also from the nobility, who were understandably rather attached to the status quo. One of the most notable critics of his policies was his own son Alexis, who naturally enough became the focus of oppositional intrigue. In fact, Alexis seemed to desire no such position, and in 1716 he fled to Vienna after renouncing his right to the succession. Having never had much occasion to trust in others, Peter suspected that Alexis had in fact fled in order to rally foreign backing. After persuading him to return, Peter had his son arrested and tried for treason. In 1718 he was sentenced to death, but died before the execution from wounds sustained during torture. Peter himself died in 1725, and he remains one of the most controversial figures in Russian history. Although he was deeply committed to making Russia a powerful new member of modern Europe, it is questionable whether his reforms resulted in significant improvements to the lives of his subjects. 
Certainly he modernized Russia's military and its administrative structure, but both of these reforms were financed at the expense of the peasantry, who were increasingly forced into serfdom. After Peter's death Russia went through a great number of rulers in a distressingly short time, none of whom had much of an opportunity to leave a lasting impression. Many of Peter's reforms failed to take root in Russia, and it was not until the reign of Catherine the Great that his desire to make Russia into a great European power was in fact achieved. By the following summer the conflict between Peter and Catherine had become quite serious. In only six months of rule, he had managed to offend and outrage virtually the entire court by diplomatic bumblings and large segments of the population through his hostility to the church and his evident disdain for Russia. Support for Catherine was widespread, and Peter was suspicious. Early on the morning of June 28, Catherine left her estate at Peterhof, outside of St. Petersburg, and departed for the city. Everything had been prepared in advance, and when she arrived she was greeted with cheers by both the troops of her factional supporters and the populace. By the next morning, Peter was confronted with a fait accompli--and a prepared declaration of his abdication. A week later, he was dead. Catherine went on to become the most powerful sovereign in Europe. She continued Peter the Great's reforms of the Russian state, further increasing central control over the provinces. Her skill as a diplomat, in an era that produced many extraordinary diplomats, was remarkable. Russia's influence in European affairs, as well as its territory in Eastern and Central Europe, were increased and expanded. Catherine was also an enthusiastic patron of the arts. 
She built and founded the Hermitage Museum, commissioned buildings all over Russia, founded academies, journals, and libraries, and corresponded with the French Encyclopedists, including Voltaire, Diderot, and d'Alembert. Although Catherine did in fact have many lovers, some of them trusted advisors and confidants, stories alleging her to have had an excessive sexual appetite are unfounded. With the onset of the French Revolution, Catherine became strikingly conservative and increasingly hostile to criticism of her policies. From 1789 until her death, she reversed many of the liberal reforms of her early reign. One notable effect of this reversal was that, like Peter the Great, Catherine ultimately contributed to the increasingly distressing state of the peasantry in Russia. When Catherine the Great died in 1796, she was succeeded by her son Paul I. Catherine never really liked Paul, and her feelings were reciprocated by her son. Paul's reign lasted only five years and was by all accounts a complete disaster. His most notable legacy is the remarkable and tragic Engineer's Castle in St. Petersburg. Paul was succeeded by his son Alexander I, who is remembered mostly for having been the ruler of Russia during Napoleon Bonaparte's epic Russian Campaign. Copyright (c) 1996-2005 interKnowledge Corp. All rights reserved.
Source: http://www.geographia.com/russia/rushis04.htm
November 10, 2009

1. 70% of people requiring a stem cell transplant need an unrelated donor. The first choice is a family member, but most of the rest must rely on a stranger. On any given day, 16,000 people around the world are on waiting lists for an anonymous bone marrow donor.

2. Register by providing a blood sample in Quebec or the UK, or a cheek swab in the rest of Canada or the US. In the US, UK, and most of Canada, you can even fill in your registration online and have a kit sent to your home. Registration is free in Canada, the UK, and many other countries. In the US, lab typing often carries a fee, but you can have it waived by registering online for free (a development new as of summer 2009) via Be The Match, and there are additional ways to register at no cost.

3. Donation is safe and fast. You never put your own life at risk by donating stem cells or bone marrow, and whatever is donated replenishes itself naturally in the body.

4. There are two ways to donate. About 70% of donors give through a process that takes a few hours and resembles donating blood; for a few days beforehand, the donor receives injections that stimulate the body to produce extra stem cells. The other 30% donate by having liquid marrow extracted from the back of the pelvic bone.

5. Many people cannot find matches. There are 8 blood types, but for a stem cell match there are several million possible combinations of human leukocyte antigen (HLA) profiles (150 billion different possibilities in theory). Even a person with a rare blood type can probably find a compatible blood donor in a room of 100 people of mixed ethnicities (and the odds improve in a room where everyone shares an ethnicity).
For someone looking for a stem cell match, the search may take a stadium of 20,000 people, or 50 stadiums, or more. The most likely match in the general population is someone of the same or a similar ethnicity: if you are of African descent, your match most likely will be too. The patient then has to hope that person is on the registry. This is where we can help. By sharing what we know, we can combat misinformation about the process, so that people understand how great the need for donors is and how things really work. Use LiveJournal, Facebook, Twitter, MSN or your blog. Learn more at: Be The Match (US) www.bethematch.org; OneMatch (Canada) www.onematch.ca; Anthony Nolan Trust (UK) www.anthonynolan.org.uk. Feel free to use the Comments section to ask questions. I will answer them or find someone who can.
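The stadium arithmetic above is easy to make concrete. Assuming, purely for illustration, that a given patient matches one randomly chosen person in 20,000 (the article's "one stadium"), the chance of at least one match in a registry of n independent donors is 1 - (1 - p)^n:

```python
import math

def p_at_least_one_match(p_single: float, n_donors: int) -> float:
    """Probability that at least one of n_donors matches, assuming an
    independent per-donor match probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_donors

def donors_needed(p_single: float, target: float) -> int:
    """Smallest registry size giving at least `target` probability of a match."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))

# Illustrative figure only: a patient who matches 1 person in 20,000.
p = 1 / 20_000
print(round(p_at_least_one_match(p, 20_000), 3))  # 0.632
print(donors_needed(p, 0.90))  # roughly 46,000 donors for a 90% chance
```

This also shows why recruiting donors of every ethnicity matters: anything that raises the per-donor match probability for a patient group shrinks the registry size that group needs.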
Source: http://www.healemru.com/2009/11/5-fast-facts-on-bone-marrow.php
A few weeks ago, I came across a really interesting serious game called Foldit. The object of the game is to fold three-dimensional models of proteins into the shapes that take the least energy to sustain. The purpose of the game is to find the best solutions for using proteins in real-world applications, such as understanding, treating, and curing diseases. Researchers post challenges on the Foldit site, and they then analyze the highest-ranking solutions to see whether those could be used to solve the real-world challenge. In one particularly successful outcome last year, researchers used results from the game to discover the structure of an important enzyme in the Mason-Pfizer monkey virus, which causes an AIDS-like disease in monkeys.

Simple interface, many levels

After hearing about the application, I downloaded and played with it for a few days. I haven't solved any medical mysteries yet, but I did get a chance to look at how the game works. The application is interesting on many levels:
- It's an example of a well-executed task analysis. The game's developers identified that spatial abilities, rather than an advanced understanding of amino acids, are the crucial skills needed to fold a protein. This allowed them to enlist the aid of non-scientists.
- It includes a well-constructed series of example puzzles with simple instructions to introduce users to the tools and rules they need to understand in order to fold a protein. None of the instructions is longer than a tweet (140 characters), but each one provides enough information to add one more skill to the player's repertoire.
- It offers a social learning component. Users can add and track buddies, chat about strategies with one another on the board, and see the scores of other players.
- Because users might need to solve a puzzle over several sittings, the interface offers tools to make it easy for them to remember where they were in their process.
They can make notes, revert to the configuration that gave them the highest score for that puzzle, or revert to a recent high score to help them build from a promising start.
- From a usability perspective, the interface gives players a way to automate repetitive steps, so they can spend their time working on strategies rather than manually carrying out the individual steps. Players can also use the View menu to change the appearance of the model without changing the underlying structure. (Think of it as changing the model's font.) Figure 1 will give you an idea of what the interface looks like.
- It's an example of a serious game with a number of loyal players who choose to work towards solutions in their leisure time.
- It's a working example of how complex problems can be crowdsourced to create successful outcomes.

Figure 1. The Foldit interface

Strategic partnerships in game design

Any one of these points could support a column by itself. Ultimately, though, the most interesting part to me about the whole project is the strategic partnership it must have taken to get the game off the ground. Foldit is the product of collaboration between a biologist at the Howard Hughes Medical Institute and programmers at the University of Washington. Because the majority of the reporting on this project has come from journals or science reporters, there's not a lot of published information on how that partnership came into being. The participants may simply have known one another socially. The biologist, David Baker, thought of the concept. He wanted it to be a game, and he had the subject matter expertise to explain the rules behind proteins. The programmers, it's pretty reasonable to assume, brought project management, interface design, and game strategies to the mix in addition to their programming skills.
Working backwards from the final result, it's also possible to make some guesses about what kinds of roles didn't have a seat at the table on this project. Given that the solutions to these puzzles could potentially help treat diseases, a fundraiser might have been able to offer some strategies to bring revenue to the project, and maybe even increase players' engagement. For instance, sponsors might offer donations for every set number of points players earn on the site.

Conclusion (and more to come)

Unrecognized opportunities for strategic partnerships often exist within our own organizations. In next month's column, I'll look at some ways to spot the opportunities, and methods to build strategic partnerships.

Reference: "Protein-folding game taps power of worldwide audience to solve difficult puzzles." EurekAlert! Science News, August 4, 2010. Web. Accessed 10 Jan. 2012. http://www.eurekalert.org/pub_releases/2010-08/hhmi-pgt080310.php
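The game's core loop, finding a lower-scoring configuration by nudging the structure, can be sketched with a toy score function. The energy model below is invented for illustration (real folding scores, Foldit's included, account for atomic clashes, hydrogen bonds, hydrophobic burial, and much more); only the minimize-by-wiggling idea carries over.

```python
import math
import random

def energy(coords):
    """Toy score: penalize consecutive 'residues' whose spacing deviates
    from an ideal bond length of 1.0. Purely illustrative."""
    return sum((math.dist(a, b) - 1.0) ** 2 for a, b in zip(coords, coords[1:]))

def wiggle(coords, steps=20000, scale=0.05, seed=1):
    """Greedy stochastic minimization: nudge a random point and keep the
    move only if the score improves -- roughly what a player does by hand."""
    rng = random.Random(seed)
    pts = [list(p) for p in coords]
    best = energy(pts)
    for _ in range(steps):
        i = rng.randrange(len(pts))
        old = pts[i][:]
        pts[i][0] += rng.uniform(-scale, scale)
        pts[i][1] += rng.uniform(-scale, scale)
        e = energy(pts)
        if e < best:
            best = e
        else:
            pts[i] = old  # reject the move, restore the point
    return best

start = [(i * 1.7, 0.0) for i in range(6)]  # an over-stretched chain
print(round(energy(start), 2))  # 2.45
print(wiggle(start) < 0.5)      # True -- wiggling finds a much better "fold"
```

What makes Foldit interesting is that humans often beat this kind of blind search: a player can see that a whole section should rotate at once, a move random wiggling almost never proposes.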
Source: http://www.learningsolutionsmag.com/articles/823/the-human-factor-serious-game-strategic-partnership
Biobatteries recharge more quickly than conventional batteries, they use renewable energy, and they do not explode or leak toxic chemicals. The innovation is the work of a team led by Joseph Wang, Distinguished Professor and Chair of Nanoengineering at the University of California, San Diego. They presented their novel biobattery approach at a recent meeting of the American Chemical Society in San Francisco.

Compared with conventional batteries, biobatteries have several advantages: they recharge more quickly, they use renewable energy (in this case body sweat), and they do not explode or leak toxic chemicals. Prof. Wang says their work shows "the first examples of epidermal electrochemical biosensing and biofuel cells that could potentially be used for a wide range of future applications."

As we sweat, we produce lactate, "a very important indicator of how you are doing during exercise," says Dr. Wenzhao Jia, a postdoctoral researcher in Prof. Wang's lab. Generally, the more intensely we exercise, the more lactate we produce: aerobic respiration alone cannot supply the energy we need, so anaerobic respiration kicks in. Anaerobic respiration converts glucose or glycogen to lactic acid, generating energy in the process. Professional athletes monitor their lactate levels to evaluate their fitness and training performance. Doctors also assess lactate levels during exercise to test patients for heart or lung disease and other conditions marked by unusually high lactate.

Non-invasive, real-time measurement of lactate during exercise

Dr. Jia and her colleagues have developed a faster, easier, non-invasive way to measure lactate during exercise in real time. Before their innovation, the only way to do this was to take blood samples at regular intervals during exercise and send them away for analysis. The new sensor, which can be imprinted onto a temporary tattoo, contains an enzyme that produces a weak electrical current by stripping electrons from lactate molecules.
The scientists tested the new device on 10 healthy volunteers. They applied the temporary tattoos to the volunteers' upper arms and measured how much electrical current was produced as they exercised. The volunteers exercised on stationary bikes for 30 minutes, with resistance gradually increasing over the period. The sensors allowed the scientists to monitor sweat lactate levels as they changed with exercise intensity.

Biobattery uses lactate from sweat to generate power

The team then developed the technology a stage further and made a sweat-powered biobattery. They used the enzyme that strips the lactate of electrons as the anode, and a chemical that accepts the electrons as the cathode. Electrons moving from an anode to a cathode is the basic principle on which a battery works. To see how the device works, play the video below.

The team tested the biobattery on 15 volunteers exercising on stationary bikes. As before, the device was incorporated into a temporary tattoo applied to their upper arms. The volunteers produced varying amounts of power in their tattoo biobatteries. Curiously, the less fit volunteers appeared to produce the most power: those who exercised only once a week produced more power than those who exercised at least three times a week. One possible explanation is that less fit people become fatigued more quickly, causing lactate-producing anaerobic respiration to kick in earlier.

The less fit volunteers produced around 70 μW per square centimeter (cm²) of skin. Dr. Jia says this is not a large amount of power, but the team is working on how to enhance it so it could eventually run small electronic devices: "Right now, we can get a maximum of 70 μW per cm², but our electrodes are only 2 by 3 millimeters in size and generate about 4 μW - a bit small to generate enough power to run a watch, for example, which requires at least 10 μW." She says they also need to find a way to store the generated current.
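Dr. Jia's figures are easy to check: 70 μW/cm² over a 2 mm × 3 mm electrode works out to about 4 μW, just as she says. A quick sketch (the function name is mine):

```python
def power_from_density(density_uw_per_cm2: float, width_mm: float, height_mm: float) -> float:
    """Total power in microwatts from an areal power density (uW/cm^2)
    and an electrode's dimensions in millimetres."""
    area_cm2 = (width_mm / 10.0) * (height_mm / 10.0)  # 10 mm per cm
    return density_uw_per_cm2 * area_cm2

# The article's numbers: 70 uW/cm^2 over a 2 mm x 3 mm electrode.
print(round(power_from_density(70, 2, 3), 2))  # 4.2

# Area needed at that density to reach the ~10 uW a watch requires:
print(round(10 / 70, 3))  # 0.143 cm^2, i.e. roughly a 4 mm x 4 mm electrode
```

So the gap to powering a watch is modest: at the same power density, an electrode a few millimetres larger on each side would close it.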
The National Science Foundation and Office of Naval Research are funding the work. Medical News Today also recently reported how an engineer at Stanford University is working on wireless powering of medical implants deep inside the body.
Source: http://www.medicalnewstoday.com/articles/281172.php
Augmented reality (AR), the physical world we can touch and feel mixed with computer-generated imagery, has been called 'mildly terrifying'. Essentially an optical illusion, AR is a 'sixth-sense' hybrid of real and virtual worlds, in which the physical and the digital co-exist and interact in real time. With an AR headset, information about streets, buildings, areas and the objects they are home to can be displayed as a data layer superimposed on the wearer's real-world view.

Like so much technical wizardry, AR was foreshadowed by science fiction. It is featured in Terminator and in Star Trek, where the Emergency Medical Hologram programme generates an artificial doctor when a human medic is unavailable. It also surfaces in the William Gibson novel Spook Country: 'geohacking', as Gibson calls it, appears as GPS-governed 3D graphics that mesh with real-world landscapes.

Despite seeming futuristic, AR could have some fairly prosaic applications; it could be enlisted to virtually rebuild lost historic edifices and simulate planned construction projects, or render virtual objects for museums, exhibitions and theme parks. AR has military potential (fighter pilots and tank drivers are familiar with the concept) and emergency-services potential and, what's more, it is accessible. Augmented reality does not necessarily need clunky goggles to work; the iPhone, Nokia handsets and Android-based mobile handsets could all host AR applications.

In June, Dutch 'strategic creative consultancy' SPRXMobile introduced what it claims to be the world's first AR-based Web browser, Layar, for Android-compatible mobile devices. The browser, which was available in the Netherlands before being rolled out to the rest of the world in August, works on mobile phones with camera, GPS and compass functions, such as the Android-based HTC Magic. Layar displays real-time digital information on top of what is seen through the camera lens of a phone.
Partners, such as Wikipedia or local stores, provide location co-ordinates and relevant information. With content layers programmed, a user can look at the camera screen and see an augmented view of the scene ahead of him: blinking dots on flats that are for sale, and their price, for example; pull-down reviews of side-street bars; the position of ATMs; places with jobs going; and, with a scary stalker-type application, the location of individuals. Other frills include Gibson-style 'geotagged' photographs enhanced by geographical identification 'metadata'.

AR seems devilishly clever, but Layar may be a layer of information too far for many. We already have our heads in the digital clouds and may feel close to information overload. More data could lead to befuddlement or even disaster; imagine the dangers posed by drivers whose attention is constantly being diverted by Spook Country-style signs. Resistance towards AR will probably fade, though, because, unlike 3D, it is useful; a roaming technology 'fab app'.
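Under the hood, an AR browser like Layar must answer one question per point of interest: given the phone's GPS position and compass heading, where on screen does the marker go? The sketch below is a simplification of my own (a linear angle-to-pixel mapping; real browsers also use the accelerometer and a proper camera model), but it captures the core geometry: compute the bearing to the POI, compare it with the heading, and place the marker within the camera's field of view.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov=60, width_px=480):
    """Horizontal pixel position for a POI marker, or None if the POI lies
    outside the camera's horizontal field of view."""
    delta = (poi_bearing - heading + 180) % 360 - 180  # signed angle, -180..180
    if abs(delta) > fov / 2:
        return None
    return round(width_px / 2 + delta / (fov / 2) * (width_px / 2))

# A POI due east of the user, with the camera also pointing east:
b = bearing_deg(52.0, 4.0, 52.0, 4.01)
print(round(b))         # 90
print(screen_x(b, 90))  # 240 -- centred in a 480 px wide view
print(screen_x(b, 0))   # None -- camera pointing north, POI out of view
```

As the user pans the phone, `heading` changes and the marker slides across the screen, which is exactly the blinking-dots-on-buildings effect the article describes.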
Source: http://www.scmp.com/article/695728/hidden-extras
In three languages of Chad, representing 1 million speakers between them, readers (even those fluent in French) were having trouble understanding their language in written form. So the language committees sent some of their key people to attend an SIL workshop on phonology and writing systems. The approach was very practical and after 3 weeks, the participants had not only identified the main points of the phonology of their language, but they also had the results of their investigation into possible ways of changing their writing system. For all three languages, this meant a change in how tone was written. One language had not been writing tone, another had written tone marks everywhere which had been overwhelming and a third had been writing the letter h to represent tone changes, but in an inconsistent way. The participants found the main possibilities for a consistent writing system which was also in line with Government guidelines, and these were presented to their communities in the forms of posters which were displayed at the closing ceremony. After the ceremony, the guests were given the option of going to the refreshments table or examining the posters – and they all chose to look at the posters, such was their enthusiasm for developing their languages. These languages are now making the final decisions for a new writing system – one that can be read with ease.
Source: http://www.sil.org/linguistics/too-few-or-too-many-accents
Leggo My Gecko

Question – what do your toothbrush and a gecko have in common?
1. They both come in green.
2. They both fit in the palm of your hand.
3. They both get into every nook and cranny.
Correct answer – all of the above.

While you might choose to ignore choices 1 and 2 and buy an oversized toothbrush that comes in pink, you can understand why choice #3 is important for your toothbrush. Each individual bristle makes contact with your teeth so that it can clean all of the surfaces, including below the gum line and in between. That's one of the main reasons why we use a toothbrush and not a squeegee. (Other reasons are beyond the scope of this article.)

But what's so special about a gecko getting into those hard-to-reach places like your ceiling or under your table? Two answers. The first is that obviously it has to have good contact with the wall if it expects to be able to climb straight up. But rodents, insects and, yes, even Spiderman can do that. Why is a gecko different?

Rodents and insects are able to climb because they have sharp claws that dig into the crevices of a wall, letting them grab and hold on. Geckos, by contrast, hold on with millions of microscopic hairs on their feet that divide into billions of split ends. Each hair is so small that when a gecko makes contact, almost 100% of its foot surface actually "touches" the wall. These minute split ends practically merge with the wall through the molecular attraction called the van der Waals force. It doesn't stick and it doesn't dig in – it actually bonds. And then, almost magically, the gecko reverses the pull on its feet and just lets go.

Now I know what you're thinking. "Leapin' lizards! If I had a toothbrush with all those fine hairs, I'd never get cavities. It would clean every microscopic square inch of my teeth." Not quite. You see, once it touched your tooth, one false move and it's not just goodbye bacteria – it's goodbye tooth!

Which brings us to answer #2.
By replicating the unique structure of a gecko's foot pads, scientists are developing vice-like adhesives that take hold merely by touching a surface – and, when moved in the opposite direction, simply slip off with no trace of adhesion. Geckos make it look easy. And you thought they only sold car insurance!
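The arithmetic behind the "billions of split ends" is simple force accounting: many vanishingly small van der Waals contacts sum to a large total. The numbers below are illustrative round figures of my own, not measured values (published estimates vary widely by species and study):

```python
def total_adhesion_n(contacts: float, force_per_contact_nn: float) -> float:
    """Total adhesive force in newtons from many tiny contacts,
    each contributing a force given in nanonewtons."""
    return contacts * force_per_contact_nn / 1e9  # 1e9 nN per N

# Hypothetical round numbers: a billion split ends at ~10 nN apiece.
spatulae = 1e9
per_spatula_nn = 10.0
force = total_adhesion_n(spatulae, per_spatula_nn)
print(force)                   # 10.0 newtons in total
print(round(force / 9.81, 2))  # 1.02 -- enough to hold roughly a kilogram
```

Since a gecko weighs on the order of tens of grams, even these conservative made-up figures leave a large safety margin, which is why a gecko can hang from a ceiling by a single foot.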
Source: http://www.simpletoremember.com/jewish/blog/leggo-my-gecko/
Event Planning consists of coordinating every detail of meetings and conventions, from the speakers and meeting location to arranging for printed materials and audio-visual equipment.

Event planning begins with determining the objective that the sponsoring organization wants to achieve. Planners choose speakers, entertainment, and content, and arrange the program to present the organization's information in the most effective way.

Event management is the process by which an event is planned, prepared, and produced.

The Optimist's Point of View: the energizing art of choreographing people and activities in order to create a show that creates memories of a lifetime.

The Pessimist's Point of View: the stressful work of planning meetings or events; it can be a very demanding career choice.

Corporate Events: events that support business objectives, including management functions, corporate communications, training, marketing, incentives, employee relations, and customer relations, scheduled alone or in conjunction with other events.

Trade Shows and Exhibits: an event bringing buyers, sellers, and interested persons together to view and/or sell products, services, and other resources to a specific industry or the general public, scheduled alone or in conjunction with other events.

5 Questions to Ask When Creating an Event Concept
1. Why is the event being held?
2. Who are the event's stakeholders?
3. When is the event taking place?
4. Where will the event be staged?
5. What is the event content or product?

Step 1: Stick to your brand
Step 2: Brainstorm ideas
Step 3: Apply the theme
Source: http://www.slideshare.net/florlynmatildo/events-management-the-basics
Excel for Chemists: A Comprehensive Guide, with CD-ROM, 3rd Edition
September 2011, ©2011

"Excel for Chemists should be part of any academic library offering courses and programs in Chemistry."
"I highly recommend the book; treat yourself to it; assign it to a class; give it as a gift."

The newly revised step-by-step guide to using the scientific calculating power of Excel to perform a variety of chemical calculations.

Chemists across all subdisciplines use Excel to record data in tabular form, but few have learned to take full advantage of the program. Featuring clear step-by-step instructions, Excel for Chemists illustrates how to use the scientific calculating power of Excel to perform a variety of chemical calculations. Including a CD-ROM for Windows, this new edition provides chemists and students with a detailed guide to using the current versions of Excel (Excel 2007 and 2010) as well as Excel 2003.

Additional features in this third edition include:
- How to perform a variety of chemical calculations by creating advanced spreadsheet formulas or by using Excel's built-in tools
- How to automate repetitive tasks by programming Excel's Visual Basic for Applications
- New chapters show how to import data from other language versions of Excel, and how to create automatic procedures
- The accompanying CD contains a number of Excel macros to facilitate chemical calculations, including molecular weight, nonlinear regression statistics, and data interpolation
- Several appendices provide extensive lists of useful shortcut keys and function descriptions

Before You Begin xxvii

PART I THE BASICS
Chapter 1 Working with Excel 2007 or Excel 2010 3
Chapter 2 Working with Excel 2003 79
Chapter 3 Excel Formulas and Functions 137
Chapter 4 Excel 2007/2010 Charts 177
Chapter 5 Excel 2003 Charts 209

PART II ADVANCED SPREADSHEET TOPICS
Chapter 6 Advanced Worksheet Formulas 233
Chapter 7 Array Formulas 267
Chapter 8 Advanced Charting Techniques 289
Chapter 9 Using Excel's Database Features 327
Chapter 10 Importing Data into Excel 349
Chapter 11 Adding Controls to a Spreadsheet 365
Chapter 12 Other Language Versions of Excel 385

PART III SPREADSHEET MATHEMATICS
Chapter 13 Mathematical Methods for Spreadsheet Calculations 403
Chapter 14 Linear Regression and Curve Fitting 435
Chapter 15 Nonlinear Regression Using the Solver 463

PART IV EXCEL'S VISUAL BASIC FOR APPLICATIONS
Chapter 16 Visual Basic for Applications: An Introduction 491
Chapter 17 Programming with VBA 503
Chapter 18 Working with Arrays in VBA 543

PART V SOME APPLICATIONS OF VBA
Chapter 19 Command Macros 557
Chapter 20 Custom Functions 571
Chapter 21 Automatic Procedures 589
Chapter 22 Custom Menus 595
Chapter 23 Custom Toolbars and Toolbuttons 607

PART VI APPENDICES
Appendix A What's Where in Excel 2007/2010 629
Appendix B Selected Worksheet Functions by Category 633
Appendix C Alphabetical List of Selected Worksheet Functions 639
Appendix D Renamed Functions in Excel 2010 661
Appendix E Selected Visual Basic Keywords by Category 663
Appendix F Alphabetical List of Selected Visual Basic Keywords 667
Appendix G Selected Excel 4 Macro Functions 689
Appendix H Shortcut Keys by Keystroke 693
Appendix I Selected Shortcut Keys by Category 703
Appendix J ASCII Codes 707
Appendix K Contents of the CD-ROM 709

- Covers the current versions of Excel (Excel 2007 and 2010) as well as Excel 2003
- Illustrates how to perform a variety of chemical calculations, from creating advanced spreadsheet formulas to using Excel's built-in tools to creating advanced macros via Excel's Visual Basic
- Includes a CD-ROM for both Macintosh and Windows with many useful spreadsheet templates, macros, and other tools.
Demonstrates step-by-step how to program Excel to perform appropriate tasks, automate repetitive data processing tasks, and prepare integrated documents by transferring data and graphics. Provides many shortcuts and tips on speeding, simplifying, and improving the use of Excel. Contains many illustrations and examples of chemical applications, including "How-to" boxes outlining details for accomplishing complex tasks in Excel. Explanations are clear and easily accessible, requiring little or no background in computer science.
- Designed to help students in chemistry understand the full capacity of Excel
- Logically ordered from basic to advanced applications

"Finally this new edition provides chemists and students a detailed guide and examples of how to apply the current versions of Excel to their needs. It should be added to the shelves of those using this program within their scientific work." (Materials and Corrosion, 1 November 2012)

Buy Both and Save 25%!
Excel for Chemists: A Comprehensive Guide, with CD-ROM, 3rd Edition (US $68.95) -and- Excel for Scientists and Engineers: Numerical Methods (US $81.95)
Total List Price: US $150.90
Discounted Price: US $113.17 (Save: US $37.73)
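The CD-ROM's molecular-weight macro is a good example of the kind of custom function the book teaches readers to build. Sketched here in Python rather than VBA, and with a deliberately tiny atomic-mass table, the underlying calculation is just a small formula parser plus a lookup:

```python
import re

# A few atomic masses (g/mol) -- enough for the example, not a full table.
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999, "S": 32.06}

def molecular_weight(formula: str) -> float:
    """Molecular weight of a simple formula like 'C6H12O6'.
    Handles element symbols with optional counts; no parentheses."""
    total = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_MASS[symbol] * (int(count) if count else 1)
    return total

print(round(molecular_weight("H2O"), 3))      # 18.015
print(round(molecular_weight("C6H12O6"), 3))  # 180.156
```

In the book's VBA version the same logic becomes a worksheet function, so `=MolWt("C6H12O6")` can be typed directly into a cell.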
Source: http://www.wiley.com/WileyCDA/WileyTitle/productCd-047038123X.html
Web Design and Development Certificate

The goal of this Web Design course is to teach students the fundamentals of design and to have them apply those principles to a series of Web-based projects, which culminate for the successful student in an engaging and professional-looking website. The course will introduce students to state-of-the-art tools and expose them to current techniques, practices and trends. This Web Design course is intended both for those new to Web design and for those with some experience in it. Interested students must submit an online Skills Assessment that tests their Language, Computer, Reading Comprehension, and Math skills. For further information and instructions, click here.

How long will the course take to complete?
Approximately nine months. Successful students will progress through four modules of varying length, with approximately one week's break between modules. Modules must be taken in order and are not available as separate classes.

Is this one course, or a series of courses?
This is one course with one price, composed of four distinct but related modules.

What are the four course modules?
The four modules are:

Module 1 – Discovery
This module exposes students to design principles, usability, the Web development process and file basics. Learn how to scope a project, design applying usability-based ideas and create assets for your sites, including logos and other content. This module lays the foundation for the entire course.

Module 2 – Design
Begin implementing the design principles learned in Module 1 in Web page layout. This is accomplished through a demonstrated ability to code in HTML and CSS. Be exposed to HTML5 and CSS3 as standards on which to build responsive pages, using semantic markup and styling.

Module 3 – Development

Module 4 – Deployment
Design can capture visitors' attention, but the Web currently hosts millions of sites.
Therefore, successful Web designers must also be adept with industry tools and practices such as search engine optimization and Web analytics to drive traffic to their sites. Learn CMS basics, how to use version control for your code and gain confidence in committing changes to public repositories.

Is this a Web development class?

What software tools will be taught or used?
The Adobe Creative Suite will be the primary tool. Programs include Dreamweaver, Photoshop, Illustrator and Flash. The course will rely more heavily on some of these tools than others, and the prospective student should not expect to proficiently master these packages. Another tool used in this class is Firebug, which the course teaches as a debugging package.

What is the cost?
The cost of the program is $5,699.00. There are additional expenses for books and CS6 software that may be purchased at the KSU Center Bookstore.

Are there prerequisites?
Successful students must have Internet access and must be proficient in navigating on a PC. Students not comfortable in navigating the file system, screens, and keyboard shortcuts of common PC use may fall quickly behind. To give students, and us, an indication of their readiness, we require that those interested in enrolling in the course complete an online prerequisite assessment and short application. Further instructions can be found here.
Source: http://ccpe.kennesaw.edu/professional/technology/web-design/
New Mexico became the 47th state of the union on January 6, 1912. It was followed shortly after by Arizona in February. To discover more about New Mexico's march to statehood, and for places to start your genealogical research, try these two sites:

Rocky Mountain Archive: http://rmoa.unm.edu/index.php
New Mexico Tourism Dept: http://newmexico.org/
We love our fresh fruits, vegetables and nuts in California. They are healthy for us and for our economy; California leads the nation with agricultural revenues of over 44 billion dollars annually, and produces nearly half of the fruits, nuts and vegetables grown in the U.S. But modern agriculture relies heavily on fumigants to produce this bounty in California and elsewhere in the United States. Fumigants are a form of pesticide typically applied to the soil before crops are planted. They essentially sterilize the soil, killing the worms and pests so the strawberries, tomatoes and other high value crops can survive. Fumigants are by definition poisons, so California law requires regulatory approval of new fumigants in a process called registration. The problem is that it hasn’t worked so well of late in California. A new report by UCLA’s Sustainable Technology & Policy Program (STPP) identifies a variety of deficits in the registration process and makes recommendations to improve pesticide regulation in California. The report, “Risk and Decision: Evaluating Pesticide Approval in California,” uses one fumigant—methyl iodide or MeI—and the story of its registration for use on strawberries as a case study. MeI (used in combination with another fumigant chloropicrin) was introduced as a substitute for methyl bromide, a widely used fumigant slated for phase out by 2015 due to its ozone-depleting nature. While the methyl iodide/chloropicrin mixture was a promising alternative in terms of performance, it raised substantial human health issues, including neurotoxicity, carcinogenicity, and developmental toxicity. The high volatility and high application rates used for soil fumigation guarantee significant exposure for workers and those living and working near a fumigation site. Yet the California Department of Pesticide Regulation (DPR) approved its use in December 2010 despite opposition from a wide range of scientists, environmental, and farm worker groups. 
The report examined the risk governance approach used during methyl iodide's approval, comparing it to best practices in regulatory settings, including risk assessment practices as developed by the National Research Council. We drew upon letters, hearing transcripts, reports, internal DPR memos, and other documents, and analyzed the scientific, social, and legal dimensions of pesticide registration in California on the ground. Our evaluation identified a number of substantial deficits in the registration process, including:
- considering only the risks of methyl iodide, rather than focusing on cumulative exposure to the methyl iodide/chloropicrin mixture that would be used in practice;
- refusing to evaluate safer chemical and non-chemical alternatives to the fumigant, as required by law; and
- revising the scientific conclusions of the risk assessment regarding acceptable exposure levels under circumstances suggesting that the revised levels were selected to support economically acceptable mitigation measures.
The report concludes with a set of recommendations for reform focused upon four principles:
- Realistic Framing and Assessment of Risk
- Use of Best Available Science/Data and Exercise of Caution
- Embracing Prevention of Risk
- Engaging in Transparent, Interactive Decision-Making
The manufacturer voluntarily withdrew methyl iodide products from the U.S. market in March 2012, citing economic conditions. However, DPR never revised its conclusions, and the deficiencies in the approval process apparently remain. The report can be found here.
Cooper Union Address (February 27, 1860)
Abraham Lincoln

Transcript

MR. PRESIDENT AND FELLOW-CITIZENS OF NEW-YORK: The facts with which I shall deal this evening are mainly old and familiar; nor is there anything new in the general use I shall make of them. If there shall be any novelty, it will be in the mode of presenting the facts, and the inferences and observations following that presentation.

In his speech last autumn, at Columbus, Ohio, as reported in The New York Times, Senator Douglas said:

"Our fathers, when they framed the Government under which we live, understood this question just as well, and even better, than we do now."

I fully indorse this, and I adopt it as a text for this discourse. I so adopt it because it furnishes a precise and an agreed starting point for a discussion between Republicans and that wing of the Democracy headed by Senator Douglas. It simply leaves the inquiry: "What was the understanding those fathers had of the question mentioned?"

What is the frame of government under which we live? The answer must be: "The Constitution of the United States." That Constitution consists of the original, framed in 1787, (and under which the present government first went into operation,) and 12 subsequently framed amendments, the first 10 of which were framed in 1789.

Who were our fathers that framed the Constitution? I suppose the "39" who signed the original instrument may be fairly called our fathers who framed that part of the present government.
It is almost exactly true to say they framed it, and it is altogether true to say they fairly represented the opinion and sentiment of the whole nation at that time. Their names, being familiar to nearly all, and accessible to quite all, need not now be repeated.

I take these "39" for the present, as being "our fathers who framed the government under which we live."

What is the question which, according to the text, those fathers understood "just as well, and even better than we do now?" It is this: Does the proper division of local from federal authority, or anything in the Constitution, forbid our Federal Government to control as to slavery in our Federal Territories?

Upon this, Senator Douglas holds the affirmative, and Republicans the negative. This affirmation and denial form an issue; and this issue—this question—is precisely what the text declares our fathers understood "better than we."

Let us now inquire whether the "39," or any of them, ever acted upon this question; and if they did, how they acted upon it—how they expressed that better understanding?

In 1784, three years before the Constitution—the United States then owning the Northwestern Territory, and no other, the Congress of the Confederation had before them the question of prohibiting slavery in that Territory; and four of the "39," who afterward framed the Constitution, were in that Congress, and voted on that question. Of these, Roger Sherman, Thomas Mifflin, and Hugh Williamson voted for the prohibition, thus showing that, in their understanding, no line dividing local from federal authority, nor anything else, properly forbade the Federal Government to control as to slavery in federal territory. The other of the four—James McHenry—voted against the prohibition, showing that, for some cause, he thought it improper to vote for it.

In 1787, still before the Constitution, but while the Convention was in session framing it, and while the Northwestern Territory still was the only territory owned by the United States, the same question of prohibiting slavery in the territory again came before the Congress of the Confederation; and two more of the "39" who afterward signed the Constitution, were in that Congress, and voted on the question. They were William Blount and William Few; and they both voted for the prohibition—thus showing that, in their understanding, no line dividing local from federal authority, nor anything else, properly forbade the federal government to control as to slavery in federal territory. This time the prohibition became a law, being part of what is now well known as the Ordinance of '87.

The question of federal control of slavery in the territories, seems not to have been directly before the Convention which framed the original Constitution; and hence it is not recorded that the "39," or any of them, while engaged on that instrument, expressed any opinion of that precise question.

In 1789, by the first Congress which sat under the Constitution, an act was passed to enforce the Ordinance of '87, including the prohibition of slavery in the Northwestern Territory. It went through all its stages without a word of opposition, and finally passed both branches without yeas and nays, which is equivalent to a unanimous passage. In that Congress were sixteen of the "39" fathers who framed the original Constitution. This shows that, in their understanding, no line dividing local from federal authority, nor anything in the Constitution, properly forbade Congress to prohibit slavery in the federal territory; else both their fidelity to correct principle, and their oath to support the Constitution, would have constrained them to oppose the prohibition.

Again, George Washington, another of the "39," was then President of the United States, and, as such, approved and signed the bill; thus completing its validity as a law, and thus showing that, in his understanding, no line dividing local from federal authority, nor anything in the Constitution, forbade the federal government, to control as to slavery in federal territory.

No great while after the adoption of the original Constitution, North Carolina ceded to the
federal government the country now constituting the State of Tennessee; and a few years later Georgia ceded that which now constitutes the States of Mississippi and Alabama. In both deeds of cession it was made a condition by the ceding states that the federal government should not prohibit slavery in the ceded country. Besides this, slavery was then actually in the ceded country. Under these circumstances, Congress, on taking charge of these countries, did not absolutely prohibit slavery within them. But they did interfere with it—take control of it—even there, to a certain extent.

In 1798, Congress organized the Territory of Mississippi. In the act of organization, they prohibited the bringing of slaves into the Territory, from any place without the United States, by fine, and giving freedom to slaves so brought. This act passed both branches of Congress without yeas and nays. In that Congress were three of the "39" who framed the original Constitution. They were John Langdon, George Read and Abraham Baldwin. They all, probably, voted for it. Certainly they would have placed their opposition to it upon record, if, in their understanding, any line dividing local from federal authority, or anything in the Constitution, properly forbade the federal government to control as to slavery in federal territory.

In 1803, the federal government purchased the Louisiana country. Our former territorial acquisitions came from certain of our own states; but this Louisiana country was acquired from a foreign nation. In 1804, Congress gave a territorial organization to that part of it which now constitutes the State of Louisiana. New Orleans, lying within that part, was an old and comparatively large city. There were other considerable towns and settlements, and slavery was extensively and thoroughly intermingled with the people.
Congress did not, in the Territorial Act, prohibit slavery; but they did interfere with it—take control of it—in a more marked and extensive way than they did in the case of Mississippi. The substance of the provision therein made, in relation to slaves, was:

First. That no slave should be imported into the territory from foreign parts.

Second. That no slave should be carried into it who had been imported into the United States since the first day of May, 1798.

Third. That no slave should be carried into it, except by the owner, and for his own use as a settler; the penalty in all the cases being a fine upon the violator of the law, and freedom to the slave.

This act also was passed without yeas and nays. In the Congress which passed it, there were two of the "39." They were Abraham Baldwin and Jonathan Dayton. As stated in the case of Mississippi, it is probable they both voted for it. They would not have allowed it to pass without recording their opposition to it, if, in their understanding, it violated either the line properly dividing local from federal authority, or any provision of the Constitution.

In 1819–20, came and passed the Missouri question. Many votes were taken, by yeas and nays, in both branches of Congress, upon the various phases of the general question. Two of the "39"—Rufus King and Charles Pinckney—were members of that Congress. Mr. King steadily voted for slavery prohibition and against all compromises, while Mr. Pinckney as steadily voted against slavery prohibition and against all compromises. By this, Mr. King showed that, in his understanding, no line dividing local from federal authority, nor anything in the Constitution, was violated by Congress prohibiting slavery in federal territory; while Mr. Pinckney, by his votes, showed that, in his understanding, there was some sufficient reason for opposing such prohibition in that case.

The cases I have mentioned are the only acts of the "39," or of any of them, upon the direct issue, which I have been able to discover.

To enumerate the persons who thus acted, as being four in 1784, two in 1787, 17 in 1789, three in 1798, two in 1804, and two in 1819–20—there would be 30 of them. But this would be counting John Langdon, Roger Sherman, William Few, Rufus King, and George Read, each twice, and Abraham Baldwin, three times.
The true number of those of the "39" whom I have shown to have acted upon the question, which, by the text, they understood better than we, is 23, leaving 16 not shown to have acted upon it in any way.

Here, then, we have 23 out of our 39 fathers "who framed the government under which we live," who have, upon their official responsibility and their corporal oaths, acted upon the very question which the text affirms they "understood just as well, and even better than we do now;" and 21 of them—a clear majority of the whole "39"—so acting upon it as to make them guilty of gross political impropriety and wilful perjury, if, in their understanding, any proper division between local and federal authority, or anything in the Constitution they had made themselves, and sworn to support, forbade the federal government to control as to slavery in the federal territories. Thus the 21 acted; and, as actions speak louder than words, so actions, under such responsibility, speak still louder.

Two of the 23 voted against Congressional prohibition of slavery in the federal territories, in the instances in which they acted upon the question. But for what reasons they so voted is not known. They may have done so because they thought a proper division of local from federal authority, or some provision or principle of the Constitution, stood in the way; or they may, without any such question, have voted against the prohibition, on what appeared to them to be sufficient grounds of expediency. No one who has sworn to support the Constitution, can conscientiously vote for what he understands to be an unconstitutional measure, however expedient he may think it; but one may and ought to vote against a measure which he deems constitutional, if, at the same time, he deems it inexpedient.
It, therefore, would be unsafe to set down even the two who voted against the prohibition, as having done so because, in their understanding, any proper division of local from federal authority, or anything in the Constitution, forbade the federal government to control as to slavery in federal territory.

The remaining 16 of the "39," so far as I have discovered, have left no record of their understanding upon the direct question of federal control of slavery in the federal territories. But there is much reason to believe that their understanding upon that question would not have appeared different from that of their 23 compeers, had it been manifested at all.

For the purpose of adhering rigidly to the text, I have purposely omitted whatever understanding may have been manifested by any person, however distinguished, other than the 39 fathers who framed the original Constitution; and, for the same reason, I have also omitted whatever understanding may have been manifested by any of the "39" even, on any other phase of the general question of slavery. If we should look into their acts and declarations on those other phases, as the foreign slave trade, and the morality and policy of slavery generally, it would appear to us that on the direct question of federal control of slavery in federal territories, the 16, if they had acted at all, would probably have acted just as the 23 did. Among that 16 were several of the most noted anti-slavery men of those times—as Dr. Franklin, Alexander Hamilton and Gouverneur Morris—while there was not one now known to have been otherwise, unless it may be John Rutledge, of South Carolina.

The sum of the whole is, that of our 39 fathers who framed the original Constitution, 21—a clear majority of the whole—certainly understood that no proper division of local from federal authority, nor any part of the Constitution, forbade the federal government to control slavery in the federal territories; while all the rest probably had the same understanding. Such, unquestionably, was the understanding of our fathers who framed the original Constitution; and the text affirms that they understood the question "better than we."

But, so far, I have been considering the understanding of the question manifested by the framers of the original Constitution. In and by the original instrument, a mode was provided for amending it; and, as I have already stated, the present frame of "the government under which we live" consists of that original, and 12 amendatory articles framed and adopted since. Those who now insist that federal control of slavery in federal territories violates the Constitution, point us to the provisions which they suppose it thus violates; and, as I understand, they all fix upon provisions in these amendatory articles, and not in the original instrument.
The Supreme Court, in the Dred Scott case, plant themselves upon the fifth amendment, which provides that no person shall be deprived of "life, liberty or property without due process of law;" while Senator Douglas and his peculiar adherents plant themselves upon the tenth amendment, providing that "the powers not delegated to the United States by the Constitution," "are reserved to the States respectively, or to the people."

Now, it so happens that these amendments were framed by the first Congress which sat under the Constitution—the identical Congress which passed the act already mentioned, enforcing the prohibition of slavery in the Northwestern Territory. Not only was it the same Congress, but they were the identical, same individual men who, at the same session, and at the same time within the session, had under consideration, and in progress toward maturity, these Constitutional amendments, and this act prohibiting slavery in all the territory the nation then owned. The Constitutional amendments were introduced before, and passed after the act enforcing the Ordinance of '87; so that, during the whole pendency of the act to enforce the Ordinance, the Constitutional amendments were also pending.

The 76 members of that Congress, including 16 of the framers of the original Constitution, as before stated, were preeminently our fathers who framed that part of "the government under which we live," which is now claimed as forbidding the federal government to control slavery in the federal territories.

Is it not a little presumptuous in any one at this day to affirm that the two things which that Congress deliberately framed, and carried to maturity at the same time, are absolutely inconsistent with each other?
And does not such affirmation become impudently absurd when coupled with the other affirmation from the same mouth, that those who did the two things, alleged to be inconsistent, understood whether they really were inconsistent better than we—better than he who affirms that they are inconsistent?

It is surely safe to assume that the 39 framers of the original Constitution, and the 76 members of the Congress which framed the amendments thereto, taken together, do certainly include those who may be fairly called "our fathers who framed the government under which we live." And so assuming, I defy any man to show that any one of them ever, in his whole life, declared that, in his understanding, any proper division of local from federal authority, or any part of the Constitution, forbade the federal government to control as to slavery in the federal territories. I go a step further. I defy any one to show that any living man in the whole world ever did, prior to the beginning of the present century, (and I might almost say prior to the beginning of the last half of the present century,) declare that, in his understanding, any proper division of local from federal authority, or any part of the Constitution, forbade the federal government to control as to slavery in the federal territories. To those who now so declare, I give, not only "our fathers who framed the government under which we live," but with them all other living men within the century in which it was framed, among whom to search, and they shall not be able to find the evidence of a single man agreeing with them.

Now, and here, let me guard a little against being misunderstood. I do not mean to say we are bound to follow implicitly in whatever our fathers did. To do so, would be to discard all the lights of current experience—to reject all progress—all improvement.
What I do say is, that if we would supplant the opinions and policy of our fathers in any case, we should do so upon evidence so conclusive, and argument so clear, that even their great authority, fairly considered and weighed, cannot stand; and most surely not in a case whereof we ourselves declare they understood the question better than we.

If any man at this day sincerely believes that a proper division of local from federal authority, or any part of the Constitution, forbids the federal government to control as to slavery in the federal territories, he is right to say so, and to enforce his position by all truthful evidence and fair argument which he can. But he has no right to mislead others, who have less access to history, and less leisure to study it, into the false belief that "our fathers, who framed the government under which we live," were of the same opinion—thus substituting falsehood and deception for truthful evidence and fair argument. If any man at this day sincerely believes "our fathers who framed the government under which we live," used and applied principles, in other cases, which ought to have led them to understand that a proper division of local from federal authority or some part of the Constitution, forbids the federal government to control as to slavery in the federal territories, he is right to say so. But he should, at the same time, brave the responsibility of declaring that, in his opinion, he understands their principles better than they did themselves; and especially should he not shirk that responsibility by asserting that they "understood the question just as well, and even better, than we do now."

And now, if they would listen—as I suppose they will not—I would address a few words to the Southern people.

I would say to them: You consider yourselves a reasonable and a just people; and I consider that in the general qualities of reason and justice you are not inferior to any other people.
Still, when you speak of us Republicans, you do so only to denounce us as reptiles, or, at the best, as no better than outlaws. You will grant a hearing to pirates or murderers, but nothing like it to "Black Republicans." In all your contentions with one another, each of you deems an unconditional condemnation of "Black Republicanism" as the first thing to be attended to. Indeed, such condemnation of us seems to be an indispensable prerequisite—license, so to speak—among you to be admitted or permitted to speak at all. Now, can you, or not, be prevailed upon to pause and to consider whether this is quite just to us, or even to yourselves? Bring forward your charges and specifications, and then be patient long enough to hear us deny or justify.

You say we are sectional. We deny it. That makes an issue; and the burden of proof is upon you. You produce your proof; and what is it? Why, that our party has no existence in your section—gets no votes in your section. The fact is substantially true; but does it prove the issue? If it does, then in case we should, without change of principle, begin to get votes in your section, we should thereby cease to be sectional. You cannot escape this conclusion; and yet, are you willing to abide by it? If you are, you will probably soon find that we have ceased to be sectional, for we shall get votes in your section this very year. You will then begin to discover, as the truth plainly is, that your proof does not touch the issue. The fact that we get no votes in your section, is a fact of your making, and not of ours. And if there be fault in that fact, that fault is primarily yours, and remains so until you show that we repel you by some wrong principle or practice. If we do repel you by any wrong principle or practice, the fault is ours; but this brings you to where you ought to have started—to a discussion of the right or wrong of our principle.
If our principle, put in practice, would wrong your section for the benefit of ours, or for any other object, then our principle, and we with it, are sectional, and are justly opposed and denounced as such. Meet us, then, on the question of whether our principle, put in practice, would wrong your section; and so meet us as if it were possible that something may be said on our side. Do you accept the challenge? No! Then you really believe that the principle which "our fathers who framed the government under which we live" thought so clearly right as to adopt it, and indorse it again and again, upon their official oaths, is in fact so clearly wrong as to demand your condemnation without a moment's consideration.

Some of you delight to flaunt in our faces the warning against sectional parties given by Washington in his Farewell Address. Less than eight years before Washington gave that warning, he had, as President of the United States, approved and signed an act of Congress, enforcing the prohibition of slavery in the Northwestern Territory, which act embodied the policy of the government upon that subject up to and at the very moment he penned that warning; and about one year after he penned it, he wrote La Fayette that he considered that prohibition a wise measure, expressing in the same connection his hope that we should at some time have a confederacy of free states.

Bearing this in mind, and seeing that sectionalism has since arisen upon this same subject, is that warning a weapon in your hands against us, or in our hands against you? Could Washington himself speak, would he cast the blame of that sectionalism upon us, who sustain his policy, or upon you who repudiate it? We respect that warning of Washington, and we commend it to you, together with his example pointing to the right application of it.

But you say you are conservative—eminently conservative—while we are revolutionary, destructive, or something of the sort. What is conservatism?
Is it not adherence to the old and tried, against the new and untried? We stick to, contend for, the identical old policy on the point in controversy which was adopted by "our fathers who framed the government under which we live;" while you with one accord reject, and scout, and spit upon that old policy, and insist upon substituting something new. True, you disagree among yourselves as to what that substitute shall be. You are divided on new propositions and plans, but you are unanimous in rejecting and denouncing the old policy of the fathers. Some of you are for reviving the foreign slave trade; some for a Congressional Slave-Code for the Territories; some for Congress forbidding the Territories to prohibit Slavery within their limits; some for maintaining Slavery in the Territories through the judiciary; some for the "gur-reat pur-rinciple" that "if one man would enslave another, no third man should object," fantastically called "Popular Sovereignty;" but never a man among you in favor of federal prohibition of slavery in federal territories, according to the practice of "our fathers who framed the government under which we live." Not one of all your various plans can show a precedent or an advocate in the century within which our government originated. Consider, then, whether your claim of conservatism for yourselves, and your charge of destructiveness against us, are based on the most clear and stable foundations.

Again, you say we have made the slavery question more prominent than it formerly was. We deny it. We admit that it is more prominent, but we deny that we made it so. It was not we, but you, who discarded the old policy of the fathers. We resisted, and still resist, your innovation; and thence comes the greater prominence of the question. Would you have that question reduced to its former proportions? Go back to that old policy. What has been will be again, under the same conditions.
If you would have the peace of the old times, readopt the precepts and policy of the old times.

You charge that we stir up insurrections among your slaves. We deny it; and what is your proof? Harper's Ferry! John Brown!! John Brown was no Republican; and you have failed to implicate a single Republican in his Harper's Ferry enterprise. If any member of our party is guilty in that matter, you know it or you do not know it. If you do know it, you are inexcusable for not designating the man and proving the fact. If you do not know it, you are inexcusable for asserting it, and especially for persisting in the assertion after you have tried and failed to make the proof. You need not be told that persisting in a charge which one does not know to be true, is simply malicious slander.

Some of you admit that no Republican designedly aided or encouraged the Harper's Ferry affair; but still insist that our doctrines and declarations necessarily lead to such results. We do not believe it. We know we hold to no doctrine, and make no declaration, which were not held to and made by "our fathers who framed the government under which we live." You never dealt fairly by us in relation to this affair. When it occurred, some important state elections were near at hand, and you were in evident glee with the belief that, by charging the blame upon us, you could get an advantage of us in those elections. The elections came, and your expectations were not quite fulfilled. Every Republican man knew that, as to himself at least, your charge was a slander, and he was not much inclined by it to cast his vote in your favor. Republican doctrines and declarations are accompanied with a continual protest against any interference whatever with your slaves, or with you about your slaves. Surely, this does not encourage them to revolt.
True, we do, in common with "our fathers, who framed the government under which we live," declare our belief that slavery is wrong; but the slaves do not hear us declare even this. For anything we say or do, the slaves would scarcely know there is a Republican party. I believe they would not, in fact, generally know it but for your misrepresentations of us, in their hearing. In your political contests among yourselves, each faction charges the other with sympathy with Black Republicanism; and then, to give point to the charge, defines Black Republicanism to simply be insurrection, blood and thunder among the slaves.

Mr. Jefferson did not mean to say, nor do I, that the power of emancipation is in the federal government. He spoke of Virginia; and, as to the power of emancipation, I speak of the slaveholding states only. The federal government, however, as we insist, has the power of restraining the extension of the institution—the power to insure that a slave insurrection shall never occur on any American soil which is now free from slavery.

John Brown's effort was peculiar. It was not a slave insurrection. It was an attempt by white men to get up a revolt among slaves, in which the slaves refused to participate. In fact, it was so absurd that the slaves, with all their ignorance, saw plainly enough it could not succeed. That affair, in its philosophy, corresponds with the many attempts, related in history, at the assassination of kings and emperors. An enthusiast broods over the oppression of a people till he fancies himself commissioned by Heaven to liberate them. He ventures the attempt, which ends in little else than his own execution. Orsini's attempt on Louis Napoleon, and John Brown's attempt at Harper's Ferry were, in their philosophy, precisely the same.
The eagerness to cast blame on old England in the one case, and on New England in the other, does not disprove the sameness of the two things.

And how much would it avail you, if you could, by the use of John Brown, Helper's Book, and the like, break up the Republican organization? Human action can be modified to some extent, but human nature cannot be changed. There is a judgment and a feeling against slavery in this nation, which cast at least a million and a half of votes. You cannot destroy that judgment and feeling—that sentiment—by breaking up the political organization which rallies around it. You can scarcely scatter and disperse an army which has been formed into order in the face of your heaviest fire; but if you could, how much would you gain by forcing the sentiment which created it out of the peaceful channel of the ballot-box, into some other channel? What would that other channel probably be? Would the number of John Browns be lessened or enlarged by the operation?

But you will break up the Union rather than submit to a denial of your Constitutional rights.

That has a somewhat reckless sound; but it would be palliated, if not fully justified, were we proposing, by the mere force of numbers, to deprive you of some right, plainly written down in the Constitution. But we are proposing no such thing.

When you make these declarations, you have a specific and well-understood allusion to an assumed Constitutional right of yours, to take slaves into the federal territories, and to hold them there as property. But no such right is specifically written in the Constitution. That instrument is literally silent about any such right. We, on the contrary, deny that such a right has any existence in the Constitution, even by implication.

Your purpose, then, plainly stated, is, that you will destroy the government, unless you be allowed to construe and enforce the Constitution as you please, on all points in dispute between you and us.
You will rule or ruin in all events.

This, plainly stated, is your language. Perhaps you will say the Supreme Court has decided the disputed Constitutional question in your favor. Not quite so. But waiving the lawyer's distinction between dictum and decision, the Court have decided the question for you in a sort of way. The Court have substantially said, it is your Constitutional right to take slaves into the federal territories, and to hold them there as property. When I say the decision was made in a sort of way, I mean it was made in a divided Court, by a bare majority of the judges, and they not quite agreeing with one another in the reasons for making it; that it is so made as that its avowed supporters disagree with one another about its meaning, and that it was mainly based upon a mistaken statement of fact—the statement in the opinion that "the right of property in a slave is distinctly and expressly affirmed in the Constitution."

To show all this, is easy and certain. When this obvious mistake of the judges shall be brought to their notice, is it not reasonable to expect that they will withdraw the mistaken statement, and reconsider the conclusion based upon it?

And then it is to be remembered that "our fathers, who framed the government under which we live"—the men who made the Constitution—decided this same Constitutional question in our favor, long ago—decided it without division among themselves, when making the decision; without division among themselves about the meaning of it after it was made, and, so far as any evidence is left, without basing it upon any mistaken statement of facts.

Under all these circumstances, do you really feel yourselves justified to break up this government, unless such a court decision as yours is, shall be at once submitted to as a conclusive and final rule of political action? But you will not abide the election of a Republican President!
In that supposed event, you say, you will destroy the Union; and then, you say, the great crime of having destroyed it will be upon us! That is cool. A highwayman holds a pistol to my ear, and mutters through his teeth, "Stand and deliver, or I shall kill you, and then you will be a murderer!"

To be sure, what the robber demanded of me—my money—was my own; and I had a clear right to keep it; but it was no more my own than my vote is my own; and the threat of death to me, to extort my money, and the threat of destruction to the Union, to extort my vote, can scarcely be distinguished in principle.

Will they be satisfied if the Territories be unconditionally surrendered to them? We know they will not. In all their present complaints against us, the Territories are scarcely mentioned. Invasions and insurrections are the rage now. Will it satisfy them, if, in the future, we have nothing to do with invasions and insurrections? We know it will not. We so know, because we know we never had anything to do with invasions and insurrections; and yet this total abstaining does not exempt us from the charge and the denunciation.

The question recurs, what will satisfy them? Simply this: We must not only let them alone, but we must, somehow, convince them that we do let them alone. This, we know by experience, is no easy task. We have been so trying to convince them from the very beginning of our organization, but with no success. In all our platforms and speeches we have constantly protested our purpose to let them alone; but this has had no tendency to convince them.
Alike unavailing to convince them, is the fact that they have never detected a man of us in any attempt to disturb them.

Wrong as we think slavery is, we can yet afford to let it alone where it is, because that much is due to the necessity arising from its actual presence in the nation; but can we, while our votes will prevent it, allow it to spread into the National Territories, and to overrun us here in these Free States? If our sense of duty forbids this, then let us stand by our duty, fearlessly and effectively. Let us be diverted by none of those sophistical contrivances wherewith we are so industriously plied and belabored—contrivances such as groping for some middle ground between the right and the wrong, vain as the search for a man who should be neither a living man nor a dead man—such as a policy of "don't care" on a question about which all true men do care—such as Union appeals beseeching true Union men to yield to Disunionists, reversing the divine rule, and calling, not the sinners, but the righteous to repentance—such as invocations to Washington, imploring men to unsay what Washington said, and undo what Washington did.

Neither let us be slandered from our duty by false accusations against us, nor frightened from it by menaces of destruction to the government nor of dungeons to ourselves. LET US HAVE FAITH THAT RIGHT MAKES MIGHT, AND IN THAT FAITH, LET US, TO THE END, DARE TO DO OUR DUTY AS WE UNDERSTAND IT.
CITRA, Fla. — Sometimes, the old-fashioned ways are the best ways. Back before chemical pesticides and herbicides, farmers had to come up with ways to kill the weeds that took over their fields. One method used “back in the day” was letting pigs loose in fields that were not being used for crops for a season and allowing the pigs to do what they do naturally: dig up the roots of weeds and fertilize the land. In the last year, Greg MacDonald, a weed science researcher with the University of Florida’s Institute of Food and Agricultural Sciences, decided to give the method a try to combat nutsedge, a weed that looks like grass and is so resilient it can sprout up through plastic row-crop coverings and even the plastic lining of above-ground pools. “It forms huge numbers of tubers per plant and comes back year after year,” MacDonald said. After Dr. Daniel Colvin, the director of the Plant Science Research and Education Unit in Citra, suggested it, MacDonald built pens and brought in domesticated pigs. “Old-timers were practicing these methods, but nobody’s ever done any research on it,” Colvin said, recalling the farmers he knew as a boy using the pigs after the summer peanut crop had been picked. “You’d come in the next year and have almost no weeds at all.” In addition to feeding them regular swine feed, the pigs were allowed to root up the tubers in fields that had been heavily infested with this major weed. “In the last year, they reduced the nutsedge by 48 percent,” MacDonald said. He could calculate the reduction by pulling multiple soil samples throughout the field, counting the number of tubers in the sample before they moved in the pigs and then three months later. This method of weed control could be used in organic farms, he said. And while he did not test for fertilizer levels in the soil, MacDonald said it is certainly an added benefit.
By Kimberly Moore Wilmoth, 352-294-3302
Source: Greg MacDonald, 352-294-1594
Photo Caption: Professor of Agronomy and Weed Science Greg MacDonald with his pigs. UF/IFAS
There’s a fair bit in the physics magazines at the moment on superconductivity. http://Physicsworld.com has some interesting articles, for example this one by Ted Forgan and an interview with Frank Wilczek. Superconductivity has its hundredth birthday this year. In 1911, in Leiden, Netherlands, Heike Onnes and Gilles Holst discovered that mercury lost its electrical resistance at 4.2 K. This followed Onnes’ earlier development of a technique to liquify helium; he was honoured for this development with the 1913 Nobel Prize for Physics. Since then, researchers have been interested in both how to use superconductors (e.g. how they interact with magnetic fields, as exemplified by the classroom levitation experiment), just why superconductors superconduct (from which we have learnt a lot about electrons and quantum mechanics) and how to make superconductors that are superconductive at higher temperatures. Room temperature superconductors would be an astonishing breakthrough, opening up vast possibilities, but they still remain a dream at the moment. Experimentally, the stride forward that made superconductivity more than just a quirky bit of physics was the 1986-87 work in ceramics. A variety of compounds containing yttrium, barium, copper and oxygen (known as YBCO) are superconducting at temperatures above the boiling point of nitrogen (77 K). Liquid nitrogen is easy and cheap to get hold of, and so these superconductors are easily studied. Unfortunately, however, since then the record temperature has only inched upwards (standing at 138 K for atmospheric pressure, a record that has stood since 1993) and it looks as if another quantum leap is required to get to room temperature. But there are options other than YBCO that have been studied recently. There are plenty of researchers working on superconductors, for example in New Zealand we have a well-respected group at IRL, led by Buckley and Tallon.
People have looked at organic materials and very recently iron compounds. Just perhaps, someday soon, someone will hit on something that pushes the superconducting temperature up another 100 Kelvin or so. That would really be Nobel Prize stuff.
Researchers from Michigan Technological University have released a paper detailing their failure to find evidence of time travel on search engines and social media sites. Time travelers probably aren’t fans of social media, according to two researchers. In a recently released paper, aptly titled “Searching the Internet for evidence of time travelers”, Robert J Nemiroff and Teresa Wilson outline their methods for detecting online time-travelers, which include searching for the use of predetermined, previously-unused hashtags and phrases on Twitter, Facebook and Google+. Similar searches were also carried out on the search engines Google and Bing as part of the pair’s research. To successfully detect a time-traveler, the pair determined two phrases that could be reasonably expected to have never been mentioned or used anywhere on the Internet until a specific date, before which any usage would have to be either coincidental, accidental or knowledge of future events that could possibly be attributed to a time-traveler attempting to contact humans of the past. The two phrases chosen were ‘Pope Francis’ and ‘Comet ISON’; the first of which had only been mentioned once before March 2013 (but in an ‘overly speculative and not prescient’ manner), and the second hadn’t been mentioned at all before September 2012. As such, if either term were found to be mentioned before those respective times, each instance would be investigated as possible proof of time travel. In a similar way, the researchers asked people of the future to include one of two hashtags in a tweet before August 2013 (assuming Twitter was still existent in the future), #icanchangethepast2 and #icannotchangethepast2, which had both been deemed to have no usage until that point. Next, the researchers took to Google, Bing and Yahoo! to determine if any time-traveler from the future had returned to the past and searched for any of the terms before they were mentioned.
“For example, a time traveler might have been trying to collect historical information that did not survive into the future, or might have searched for a prescient term because they erroneously thought that a given event had already occurred, or searched to see whether a given event was yet to occur.” Of these efforts, the scientists noted that “although numerous searches were uncovered, none occurred sufficiently early to be considered prescient”. The paper went on to note that Google Trends (the publicly-accessible service offered by Google that allows searching through search terms by volume over time) yields results only where there was a ‘significant’ search volume. Therefore, it is possible that any such searches from humans of the future would not appear in this service. Other methods of detection outlined in the paper include searching for instances of predetermined phrases on websites and blogs before a specific time, and checking for e-mail sent to an address controlled by the team between November 2008 (a month after the address was created) and August 2013. So, does this mean no time travel? “Although the negative results reported here may indicate that time travelers from the future are not among us and cannot communicate with us over the modern day Internet, they are by no means proof. There are many reasons for this. First, it may be physically impossible for time travelers to leave any lasting remnants of their stay in the past, including even non-corporeal informational remnants on the Internet. Next, it may be physically impossible for us to find such information as that would violate some yet-unknown law of physics, possibly similar to the Chronology Protection Conjecture. Furthermore, time travelers may not want to be found, and may be good at covering their tracks.” Source: Searching the Internet for evidence of time travelers Image: Toni Verdú Carbó [Flickr]
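The core of the detection method described above, flagging any mention of a term that predates the point at which the term could plausibly have been known, can be sketched in a few lines of Python. The sample posts, dates and the helper name below are invented for illustration; the real study worked against search APIs and archived content rather than an in-memory list.

```python
from datetime import datetime, timezone

# Hypothetical sample data: (timestamp, text) pairs standing in for
# archived tweets or web pages.
posts = [
    (datetime(2012, 3, 14, tzinfo=timezone.utc), "Nice weather today"),
    (datetime(2013, 6, 1, tzinfo=timezone.utc), "Pope Francis greets crowds"),
    (datetime(2011, 7, 9, tzinfo=timezone.utc), "Comet ISON will be bright!"),
]

def prescient_mentions(posts, term, coinage_date):
    """Return posts that mention `term` before the term existed.

    Any hit would be investigated as coincidence, error, or (in the
    paper's framing) possible evidence of a time traveler.
    """
    return [
        (ts, text) for ts, text in posts
        if ts < coinage_date and term.lower() in text.lower()
    ]

# 'Comet ISON' was first mentioned (per the paper) around September 2012,
# so the 2011 post above gets flagged for investigation.
cutoff = datetime(2012, 9, 1, tzinfo=timezone.utc)
flagged = prescient_mentions(posts, "Comet ISON", cutoff)
```

In the actual study no sufficiently early, non-coincidental hits were found; the sketch only shows the shape of the test.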
HUNTSVILLE, Ala. (WHNT) - Flashcards can be a great study tool, especially when they're created digitally, which allows them to be easily shared. Teacher Wendy Ferraez and student Blake Williams both recommended Quizlet.com for WHNT News 19's "Click Picks" Back to School apps week. Quizlet is a free resource offering simple tools, including a flashcard creator, to aid in studying. Students can make, practice and share groups of flashcards on various subjects - everything from vocabulary to language to science terms. Teachers too, can craft a set of flashcards or study tools for a particular class. The flashcards tool allows users to study by term, definition or both. There are also audio speller tools and a test feature, offering a mini-quiz on the material in question in both written and multiple choice format. Watch WHNT News 19 This Morning from 4:30 a.m. - 7:00 a.m. to see all of the featured apps during the week of July 28th.
Who We Are Indian cuisine is distinguished by its sophisticated use of spices and herbs and the influence of the longstanding and widespread practice of vegetarianism in Indian society. Indian cuisine is the general term for the wide variety of cooking styles from India. In reality, India hosts an even greater number of distinct regional cuisines than the entire European continent. Indian food is almost always prepared with fresh ingredients along with delicate mixtures of many different fresh and dried spices, and the exact recipes often vary greatly from one household to the next. Food is an integral part of India’s culture, with cuisines differing according to community, region, and state. Indian cuisine is characterized by a great variety of foods, spices, and cooking techniques. Furthermore, each religion, region, and caste has left its own influence on Indian food. Many recipes first emerged when Vedic Hindus predominantly inhabited India. Later, Mughals, Christians, British, Buddhists, Portuguese, and others had their influence. Vegetarianism came to prominence during the rule of Ashoka, one of the greatest of Indian rulers, who was a promoter of Buddhism. In India, food, culture, religion, and regional festivals are all closely related. The staples of Indian cuisine are rice, atta (whole wheat flour), and at least five dozen varieties of pulses, the most important of which are chana (Bengal gram), toor (pigeon pea or red gram), urad (black gram) and mung (green gram). Chana is used in different forms: it may be whole or processed in a mill that removes the skin, e.g. dhuli moong or dhuli urad, and is sometimes mixed with rice and khichri (a food that is excellent for digestion and similar to the chickpea, but smaller and more flavorful). Pulses are used almost exclusively in the form of dal, except chana, which is often cooked whole for breakfast and is processed into flour (besan).
The most important spices in Indian cuisine are chili pepper, black mustard seed (rai), cumin (jeera), turmeric, fenugreek, ginger, coriander and asafoetida (hing). Another very important spice is garam masala, which is usually a powder of five or more dried spices, commonly comprising cardamom, cinnamon, and clove. Some leaves are commonly used, like bay leaf, coriander leaf and mint leaf. The common use of curry leaves is typical of South Indian cuisine. In sweet dishes, cardamom, cinnamon, nutmeg, saffron, and rose petal essence are used. In Indian cuisine, curry refers not to a spice, but to any dish eaten with rice, or more commonly, any dish with a gravy base. Indian spices are often heated in a pan with oil to intensify the flavor before adding other ingredients. Curry Phenomenon: Simple dry powders, such as red chili powder and curry powder, often replace, in Western countries such as Great Britain and the United States, the complex formulations of fresh and dried spices. The word Curry comes from the Tamil kari (type of thick sauce). North Indian cuisine is distinguished by the higher proportion-wise use of dairy products; milk, paneer (cottage cheese), ghee (clarified butter), and yoghurt are all common ingredients. North Indian gravies are typically dairy-based and employ thickening agents such as cashew or poppy seed paste. Other common ingredients include chilies, saffron, and nuts. South Indian cuisine is distinguished by a greater emphasis on rice as the staple grain, the liberal use of coconut and curry leaves, particularly coconut oil, and the ubiquity of sambar and rasam (also called saaru) at meals. South Indian cooking is even more vegetarian-friendly than North Indian cooking. North Indian cooking features the use of the tandoor, a large and cylindrical coal-fired oven, for baking breads such as naan and khakhra; main courses like tandoori chicken also cook in it. Another important feature of North Indian cuisine are flat breads.
These come in many different forms, such as naan, paratha, roti, puri, bhatoora, and kulcha. The samosa is a typical north Indian snack. These days it is common to get it in other parts of India as well. The most common (and authentic) samosa is filled with boiled, fried, and mashed potato, although it is possible to find other fillings. The dosa, idli, vada, bonda, and bajji are typical South Indian snacks. Hot/Spice Factor: Indian food is usually perceived as ‘spicy’ or ‘hot’. While some of that is true, most Indian foods are not necessarily ‘spicy’ or ‘hot’. Chili peppers similar to cayenne chili peppers, commonly referred to as “chili(es)” in Indian cookbooks and lexicon, are the common culprit for making the dishes fiery hot. If you do not prefer the heat, stay away from any dishes that make liberal use of chilies, or tone down the chili amount to suit your taste. The other ingredient to be careful with is peppercorn, most commonly referred to as “pepper” in Indian recipes and cookbooks. Cloves and cinnamon also increase the heat. Indian cuisine is a combination of wonderful and subtle flavors. They vary as much as the climates and languages of India, and are as exotic as the people of India. India is a country with about eighteen different languages and sixteen hundred plus dialects, so food varies vastly from region to region. Our humble attempt is to bring you the most popular dishes from the north western states of India. We have carefully selected dishes from other popular cooking styles and regions of India with a hope that you will enjoy them to the fullest. With our most recent menu we have also made an attempt to introduce some contemporary-style Indian dishes. On our menu each dish has its own distinctive flavor and aroma, and most of our dishes can be prepared to your desired spice level. The flavors we offer come from a masterful blend of fresh spices ground on our premises, and traditional sauce-making techniques.
The blending and preparation of spices is an ancient art and is indispensable to Indian cuisine. The result is delightful flavors, those which cannot be attained by the use of “Curry Powder”. We do hope you will have a great dining experience and should you need any further assistance or help with wine pairings for your food, please feel free to ask.
The Planck time is the unique combination of the gravitational constant G, the special-relativistic constant c, and the quantum constant ħ that produces a constant with units of time. Because the Planck time comes from dimensional analysis, which ignores constant factors, there is no reason to believe that exactly one unit of Planck time has any special physical significance. Rather, the Planck time represents a rough time scale at which quantum gravitational effects are likely to become important. The nature of those effects, and the exact time scale at which they would occur, would need to be derived from an actual theory of quantum gravity. All scientific experiments and human experiences occur over time scales that are dozens of orders of magnitude longer than the Planck time, making any events happening at the Planck scale hard to detect. As of May 2010, the smallest time interval uncertainty in direct measurements is on the order of 12 attoseconds (1.2 × 10⁻¹⁷ seconds), about 2.2 × 10²⁶ Planck times.
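The defining combination can be checked numerically. The sketch below computes t_P = sqrt(ħG/c⁵) and expresses the 12-attosecond measurement limit in Planck times; the constant values are standard figures supplied here for illustration, not taken from the page above.

```python
from math import sqrt

# Physical constants (standard values, assumed here)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# The unique combination of G, c and hbar with units of time:
t_planck = sqrt(hbar * G / c**5)   # ~5.39e-44 s

# The smallest directly measured interval cited above (12 attoseconds),
# expressed in Planck times (~2.2e26, matching the figure in the text).
ratio = 1.2e-17 / t_planck
```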
The study investigates how Geographic Information System (GIS) spatial analysis and modelling can improve demographic data analysis in the planning process, where demographic information and spatial analysis are strongly interrelated; that is, it seeks new ways in which the available 2D, 2.5D and 3D GIS spatial analysis and modelling methods can be manipulated to produce a set of techniques for deriving the demographic variables and quantities needed in planning analysis. A principal motivation for the GIS demographic research is that planning analysis uses a great deal of attribute data about humans: planning analyses require an indication of how the total population, and selected composition groups within it, are and will be spatially distributed in the study area, making demographic data the main source of data for the planning process. A further motivation is that most GI science research on planning analysis is about GIS development rather than GIS use, without a strong theoretical link between the two; to advance GI science and be useful in planning practice, research has to take the perspective of GI use, providing a direct link between the science and planning practice. This is the research theme of the systematic evaluation of GIS demographic spatial analysis and modelling. In planning any area the growth potentials must be expressed in terms of the population it is expected to sustain: the size of the population, its composition and characteristics, and its spatial distribution. Although population data are collected at the point level (individuals and households), they are almost always aggregated to existing spatial entities (i.e. administrative units) to allow tabulations according to various data attributes, with demographic analysis then carried out using statistical techniques.
The human geographical dimensions of the information in demographic analysis are mostly forgotten; geography is only used to collect data. As a result information is lost or hidden, details are difficult to extract, and the data cannot be viewed and analysed spatially in a way that produces demographic variables and quantities in line with planning analysis inputs. To understand and fully utilize all of this demographic information, spatial analyses need to be carried out at a disaggregate level and linked to locations to help achieve equitable development. These problems correspond directly to the two key strengths of GIS - manipulation and display of spatially referenced data (Tomlin, 1990; Langford & Unwin, 1994; Chou, 1997; Chrisman, 1997; and DeMers, 1997, 2000). This is further facilitated by GIS's capability to test and manipulate variables quickly: it is less expensive to test models than reality, and GIS can predict the consequences of proposed activities through simulation, which helps to pick the "best" alternative. The research therefore employs the techniques of 2D GIS, 2.5D GIS (DEM and DTM), and new techniques in the form of a three-dimensional demographic model (3D-DM) in demographic data analysis, modelling, visualisation and interpretation, just as other applications such as mining, hydrology, and environmental modelling have crossed over the 2D boundary into 3D modelling. This shift of application within the GIS environment is necessary and in line with other work: Yeh (1999) integrates GIS in planning; Lee (1995), in his PhD thesis at the University of Washington, develops a methodology for generating alternative land use plans using GIS modelling techniques; and the GIS field itself came from the fields of spatial statistics, database management, and cartography.
Geographical information systems (GIS) have been variously defined by many people, including Aronoff (1989), Huxhold (1991), ESRI (1992, 1994, 1998), Burrough (1986), Clarke (1986), Healey, et al. (1998), DeMers (1997, 2000), etc. But in Chrisman (1997) we find one of the most general definitions, developed by consensus among 30 specialists: Geographical information system - a system of hardware, software, data, people, organization and institutional arrangements for collecting, storing, analysing, and disseminating information about areas of the earth (Dueker and Kjerne, 1989). The term "spatial analysis" encompasses a wide range of techniques for analysing, computing, visualizing, simplifying, and theorizing about geographic data. Methods of spatial analysis can be as simple as taking measurements from a map or as sophisticated as complex geocomputational procedures based on numerical analysis. Spatial analysis is the statistical description or explanation of either locational or attribute information or both (Goodchild, 1987). Following Fischer, et al (1996) and Chou (1997), spatial analysis includes techniques such as spatial querying, point-in-polygon operations, buffering, overlaying, intersection, dissolving, proximity analysis, etc. In GIS, modelling is generally used to refer to any operation involving the representation and manipulation of spatial data, particularly the composition of new features and coverages through the process of overlay (Burrough, 1986 and Tomlin 1990). It also has another meaning in the mainstream of system sciences, where modelling involves simulation based on processes which give rise to system structures (Batty, et al. 1994). In this research both senses of modelling are employed. Demography is the study of human populations with an emphasis on statistical analysis, i.e. statistical characteristics (Plane, et al, 1994). Data describing a human population are referred to as demographic data.
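One of the spatial analysis techniques listed above, the point-in-polygon operation, can be illustrated in pure Python with the classic ray-casting algorithm. This is a minimal sketch rather than production GIS code: points lying exactly on a boundary and degenerate edges are not handled, and the square "census tract" is a made-up example.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: does point (x, y) fall inside `polygon`?

    `polygon` is a list of (x, y) vertices. A horizontal ray is cast
    from the point; an odd number of edge crossings means 'inside'.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the ray's height can cross it
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A hypothetical square census tract; a household at (2, 2) lies inside,
# one at (5, 2) lies outside.
tract = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

In practice a GIS performs this test for thousands of point records at once, which is exactly how point-level demographic data get assigned to administrative units.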
It involves primarily the measurement of the size, growth, density, distribution, and diminution of the numbers of people, the proportions living, being born or dying within some area or region, and the related functions of fertility, mortality and marriage (Cox, 1970). Demographics is often used in the singular, meaning the application of demographic information and methods in business, planning, and public administration; it is also seen in the plural, referring to the demographic information itself (Merrick and Tordella, 1988). The major difference between demographics and the field of demography generally is that the latter is concerned more with producing new knowledge and understanding of human behaviour, whereas the former is concerned more with the use of existing knowledge and techniques to identify and solve problems (Weeks, 1994, pp. 477). The main contention in this research is demographic spatial analysis and modelling in two-dimensional (2D), two-and-a-half-dimensional (2.5D), and 3D GIS: i.e. which aspects of demographic characterisation can be accomplished by 2D, 2.5D or 3D GIS, or a combination of them, to produce demographic variables and quantities in line with planning analysis inputs, using procedures that are common and easy for planners. Past research confirms that the GI-based tools developed by vendors and/or academics are for various reasons under-utilised (Harris 1989; Harris and Batty 1993; Holmberg 1994; Lee 1995; Klosterman 1997). Among the reasons for under-utilisation of GIS in planning is the incompatibility of the mostly generic GI products with the tasks and functions performed by urban and regional planners: it is one thing to have digital geographic information (Murray, 1999), but a far more challenging issue is how this information can be analysed, modelled and understood (what does the demographic data indicate or suggest, and what are the implications) in planning environments.
Demographic data for planning analysis has traditionally been analysed by statistical techniques, for which various models have been developed, such as the Population Analysis Spreadsheets (PAS) for Excel, population change models (Plane et al., 1994), and the spread model (Klosterman et al., 1994). Although most of these models can account for change in demography, they lack the spatial aspect; it is not possible to geographically view and analyse the patterns (Klosterman et al., 1993). They tend to ignore the demographic spatial dimension, and where it is covered, it is covered only at an aggregate level. There are, however, approaches from different fields that have taken advantage of GIS's spatial analysis capability in order to incorporate the spatial (geographical) aspect. This has proved useful for understanding physical and environmental processes, but socio-economic dynamics are still hard to model and/or simulate. In terms of population analysis, the use of GIS for demographic data is not fully utilised, although it is rapidly expanding. As David Martin reports, the 1991 census of population was the first in the United Kingdom (UK) to be conducted in what might be called the 'GIS era', and the 2001 census geography is designed by and for GIS (Martin, 1997); there are many other areas of integration, such as TIGER (US Census) and PopMap, an information and decision support system for population activities (UN web site).

In terms of 2D GIS, various approaches have been proposed, including the integration of spatial analysis methods in GIS, which has led to a new exploratory analysis (Goodchild, 1987; Haining, 1990; Fotheringham and Rogerson, 1993; Openshaw, 1994b, c; Openshaw et al., 1996). This follows the observation of Fotheringham et al. (1994), Fischer et al. (1996), Carver (1997), Chou (1997), and DeMers (1997, 2000) that GIS offers an incomplete set of spatial analytical tools; in many cases we are obliged to combine GIS tools with statistical analysis and other techniques in order to accomplish spatial analysis.
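The aspatial character of spreadsheet models such as PAS can be seen in miniature below. This is a deliberately simplified, hypothetical sketch (the real PAS models are richer cohort-component spreadsheets), showing a geometric growth projection:

```python
def project_population(base_pop, annual_rate, years):
    """Geometric projection P_t = P_0 * (1 + r)^t, the simplest
    spreadsheet-style population model; note it carries no spatial
    dimension at all, which is precisely the limitation discussed."""
    return base_pop * (1 + annual_rate) ** years

# A hypothetical town of 10,000 people growing at 2% per year for a decade
print(round(project_population(10_000, 0.02, 10)))  # 12190
```

The output is a single total with no geography attached: the patterns of where that growth occurs cannot be viewed or analysed, which is the gap GIS-based demographic analysis aims to fill.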
Much research is going on concerning the linkage between spatial statistical analysis and geographic information systems; in Fotheringham et al. (1994) the linkage has been suggested in three different ways. The first strategy is that the GIS and statistical packages such as SAS and SPSS can be maintained as two separate packages which simply exchange data: information is written from the GIS into a file, read into the statistical package to carry out the analysis, and then read back by the GIS. Carver (1997) argues that exporting spatial data from the GIS to standard statistical systems is not an adequate solution, because the nature of spatial data requires specific spatial analytical functions. Nevertheless, Anselin et al. (1993) have combined SpaceStat, a program for the analysis of spatial data, with Arc/Info using this approach. The second strategy is that GIS functions can be embedded within a spatial analysis or modelling package. Although Carver (1997) notes that embedding GIS functions into a spatial statistical package seems an overwhelming exercise and not really realistic, examples exist, including the XLisp-Stat package, which extends the geographical data handling and mapping facilities of a package designed for statistical programming (Tierney, 1991; Openshaw et al., 1996). The third strategy is that spatial analysis can be fully integrated within the GIS software. A full integration of spatial analysis tools into a GIS seems most promising (Hansen, 1996): using this strategy we can exploit the interactivity between maps, charts and spatial statistics to get a good feel for patterns and relationships within the data. Examples include Arc/S-Plus (Arc/Info linked to S-Plus), Anselin's integration of SpaceStat with ArcView GIS, and Openshaw's Geographical Analysis Machine (GAM) (Openshaw et al., 1987). Specialized GIS packages directed specifically at spatial analysis have also emerged (Bailey and Gatrell, 1995; Fisher et al.
(1996); Haining, 1990; Anselin and Getis, 1993; Anselin, 1996, 1999); a good example is IDRISI for Windows (IDRISI for Windows, 1998). But none of these is directed towards micro demographic analysis from the planner's point of view: the issue here is the inherent inability of the existing methods to provide useful results in planning analysis, and the difficulty the planner often experiences in understanding what the results mean in relation to planning analysis, not the integration itself. The principal need is to develop, from the existing techniques and documentation of spatial analysis, a style of GIS demographic analysis that the planner (as a user of GIS) can use, not to force the planner into methods that were created by experts for experts (Openshaw et al., 1996). And since GIS is not only 2D, this has to be extended to 2.5D and 3D GIS.

The idea that population can most appropriately be mapped and modelled as a surface (2.5D) is not new. Schmid and MacCannell (1955) discussed the construction of contour-based maps of population density, while Nordbeck and Rystedt (1970) demonstrated that population density can be viewed as a continuously varying surface. Tobler (1979) presented a method for pycnophylactic (volume-preserving) interpolation of values from irregular zones into surface form, and Goodchild et al. (1993) review a number of approaches to areal interpolation, noting that the process can be viewed as involving the estimation of an underlying population surface. Other developments are by Martin and Bracken (1991), with the latest being population geocoding, analysis, and modelling using grids by Martin (1999). All handle population surfaces by considering a value at a point as representative of the total for an area (or a ratio based on that total).
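Tobler's volume-preserving constraint, that interpolation between zonal systems should leave zone totals unchanged, can be illustrated with the simplest form of areal interpolation, areal weighting. The function below is a hypothetical sketch rather than code from any of the cited works; `weights[i][j]` is assumed to be the share of source zone i's area that overlaps target zone j, with each row summing to 1:

```python
def areal_interpolate(source_pops, weights):
    """Areal-weighting interpolation: redistribute each source zone's
    population to target zones in proportion to area overlap. Because
    each row of weights sums to 1, total population is preserved,
    which is the pycnophylactic property in its crudest form."""
    n_targets = len(weights[0])
    targets = [0.0] * n_targets
    for pop, row in zip(source_pops, weights):
        for j, w in enumerate(row):
            targets[j] += pop * w
    return targets

# Two source zones (1000 and 600 people) spread over two target zones
src = [1000, 600]
w = [[0.25, 0.75],   # 25% of zone 0's area falls in target 0, 75% in target 1
     [0.50, 0.50]]
print(areal_interpolate(src, w))  # [550.0, 1050.0]
```

Tobler's actual method is smoother, iteratively adjusting a gridded surface while re-imposing the zone totals, but the conservation constraint is the same.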
Another shortcoming is that many demographic variables are represented by two entities which differentiate them: gender, for example, is either male or female; marital status is either single or married; etc. When these entities are modelled in GIS with the latest developments in GIS surface analysis and modelling, we are able only to show their spatial locations and extents, not their spatial quantities. But a planner is always asking how much, and how demographic characteristics vary from one location to the next, so this does not provide a total solution for his or her needs. The problem is made more complex by the fact that demographic quantities are required at different levels of aggregation; thus it ranges from representing micro demographic data to aggregated data. A further factor which comes into play is the combination of surface and quantity (volumetric) analysis and modelling, which leads to the need for 3D GIS demographic analysis.

The central theme (thesis) is how demographic spatial analysis for planning analysis can be achieved in GIS, by looking at two-dimensional, two-and-a-half-dimensional, and three-dimensional spatial analysis and modelling to produce variables and quantities in line with planning analysis at both aggregated and disaggregated levels. The study investigates how GIS, in terms of 2D, 2.5D, and 3D spatial analysis and modelling, improves demographic data analysis in the planning process, the aim being to produce documentation of a set of techniques so that their inclusion in GIS may be facilitated. The following a priori (testable) objectives are formulated:

· To assess current demographic data analysis and GIS (2D, 2.5D or 3D GIS) and carry out demographic spatial analysis and modelling.

· To model demographic characteristics in a three-dimensional demographic model (3D-DM) to derive useful information for demographic characterisation and demographic quantities.
· To document how micro demographic data can be spatially analysed in GIS (2D, 2.5D and 3D) to produce variables and quantities in line with planning analysis.

· To outline GIS demographic spatial analysis and modelling procedures in planning, so as to arrive at appropriate terminologies and enable their use.

In meeting these objectives, a number of continuing themes become apparent; these are outlined in section 1.3 (Research approach, i.e. proposed methodology). The approach starts with a review of the literature on the problem at hand; the main body is divided into three components, a) overview, b) approach, and c) application; it ends with a conclusion giving the discussion and future work. The details are given under the scope of the research and have been divided into chapters (see thesis layout and figure 1.3). A review of population (demographic) data in planning is the starting point, followed by demographic statistical spatial analysis (DSSA), i.e. statistical spatial analysis methods for demographic analysis; methods of GIS data analysis and modelling; and then a look at GIS in planning analysis. This leads to the first task of this research, GIS demographic spatial analysis, which involves examining and comparing DSSA and GISSA (geographical information system spatial analysis, i.e. the GIS spatial analysis methods in use): what demographic analysis using GIS requires, and how GISSA can be manipulated with an eye on the results always expected from DSSA, to arrive at GIS demographic spatial analysis (GISDSA), which can help to simplify and enhance demographic data analysis. At this stage the concentration is on conventional 2D GIS, and everything follows the model of carrying out GIS demographic spatial analysis (figure 1.1).

Figure 1.1: Model of carrying out GIS Demographic Spatial Analysis

The thesis then proceeds to demographic analysis and modelling in 2.5D (surface analysis and characterisation).
It looks at the shortcomings of 2D GIS demographic analysis, then introduces the new demographic surface terms, their representation, and the derivation of parameters from the demographic surface and their interpretation. From 2.5D GIS demographic analysis and modelling, 3D GIS demographic spatial analysis and modelling is introduced. This is accomplished by employing techniques from terrain analysis and modelling (DEM and DTM) in the form of a three-dimensional demographic model (3D-DM) for demographic data representation, interpretation, visualisation and analysis. But before embarking on a detailed description of the nature of 3D GIS modelling, its scope is defined by addressing a number of underlying questions: What should a characterisation of demographics in terms of surfaces attempt to achieve? How should a demographic surface be modelled? Modelling the third dimension encompasses the following general tasks (see figure 1.2):

Figure 1.2: Three Dimensional Demographic Modelling Tasks

· 3D-DM generation: reading demographic data from the database, formation of relations among the diverse observations (model construction);

· 3D-DM manipulation: modification and refinement of 3D-DMs, derivation of intermediate models;

· 3D-DM interpretation: 3D-DM analysis, information extraction from 3D-DMs;

· 3D-DM visualisation: graphical rendering of 3D-DMs and derived information; and

· 3D-DM application: development of appropriate application models for planning purposes.

3D-DM application in planning forms the context for 3D demographic modelling, as each particular utilisation has its specific functional requirements relative to the other demographic modelling tasks.
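The parallel with terrain analysis is direct: the "slope" of a demographic surface is derived from finite differences on a gridded density, exactly as slope is derived from a DEM. A minimal sketch (the grid values and cell size below are hypothetical, for illustration only):

```python
def surface_gradient(grid, i, j, cell_size):
    """Central-difference partial derivatives of a gridded density surface
    at interior cell (i, j); their magnitude is the local 'slope', i.e.
    how fast density changes per unit distance."""
    dz_dx = (grid[i][j + 1] - grid[i][j - 1]) / (2 * cell_size)
    dz_dy = (grid[i + 1][j] - grid[i - 1][j]) / (2 * cell_size)
    slope = (dz_dx ** 2 + dz_dy ** 2) ** 0.5
    return dz_dx, dz_dy, slope

# A planar density surface z = x + 2y sampled on a unit grid:
# the recovered gradient should be (1, 2)
grid = [[x + 2 * y for x in range(3)] for y in range(3)]
dzdx, dzdy, slope = surface_gradient(grid, 1, 1, 1.0)
print(dzdx, dzdy)  # 1.0 2.0
```

On a population-density grid this kind of parameter tells a planner where density changes sharply (an urban edge) versus gradually, which is one of the surface interpretations the thesis develops.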
To accomplish the objectives using the outlined methodology, two types of data set are used to test the techniques. One is at a very micro level, collected from the Heritage area in Georgetown, Penang state, Malaysia. The main information requirements are 1) the cadastral GIS of the study area; 2) buildings: floor space, number of floors, ownership, building location, area, etc.; 3) people: employment, age, sex, ethnic grouping, number of children, family members, etc.; and 4) land use: shopping points, housing, recreation, etc. The demographic data were collected from the study area; the cadastral data for the area already exist in GIS format, obtained from Assoc. Prof. Dr. Lee Lik Meng, and the building and land use data in GIS format were obtained from the Penang state planning office (Jabatan Perancangan Bandar dan Desa), Penang, Malaysia. The other data set is the population census data collected by the Malaysia population and housing census office in the 1991 census, which is published at the aggregated level of Mukim (parish).

The following software will be used: ArcView, to provide a graphical user interface (GUI) for direct interaction to view and edit geo-feature objects; the ArcView Avenue programming environment (customisation and application development for ArcView); the ArcView Spatial Analyst extension; the ArcView 3D extension; Microsoft Access (a relational database management system, RDBMS); and SPSS for statistical analysis. The link between these software packages is made using Microsoft Open Database Connectivity (ODBC) and the other import and export functions within these packages: the SPSS Data Driver 32 (SPSS Data Source 32) and the ArcView SQL connection are used to read SPSS data files in ArcView; SPSS Data Source 32 and Microsoft Access to read SPSS files into Microsoft Access; the ArcView SQL connection to read from the Microsoft Access database; and, to analyse data in SPSS, the SPSS database capture facility with dBASE files to read ArcView database files and the Microsoft Access database.
The references used include books and periodicals, lectures, seminars and discussions, computer software packages, and Internet sites about GIS, planning, demography and population. All the work (thesis, references, links, and other research outcomes) is hosted on the School of Housing, Building, and Planning (HBP) web site under the thesis section: http://www.hbp.usm.my/thesis/heritageGIS

This thesis consists of six chapters, including the introduction, which highlights the research background, motivation, problem statement, objectives, methodology and scope of the research. Chapter 2 provides a global overview of demographic data in planning analysis; the concerns and methods of demographic statistical spatial analysis (DSSA); GIS demographic data analysis and modelling, including methods of geographical information system spatial analysis (GISSA); and GIS in planning. It highlights the relative suitability of certain methods to particular applications and contrasts their differences, strengths and weaknesses, and then turns to GIS demographic analysis. It ends by introducing multi-dimensional GIS for demographic modelling, outlining 2D, 2.5D, and 3D GIS to give an insight. Chapter 3 starts with 2D GIS demographic analysis, then moves on to examine the advances and work done in demographic surface analysis and modelling, followed by the derivation and interpretation of surface parameters; it then introduces aspects of 3D GIS. It discusses the need and criteria for demographic surface representation, analysis and modelling using 3D GIS, which leads to chapter 4, dealing with 3D demographic analysis and modelling. In Chapter 4, a complete modelling approach is described in detail.
It starts from demographic 3D modelling problems and the definition and documentation of the demographic terms to be used, analysed and modelled; it then details 3D GIS (data structures, georeferencing, etc.) and the modelling of the third dimension, taking field demographic data as the input and representing demographics as 3D spatial objects. It describes the use of spatial tessellations, the Voronoi diagram and triangular irregular networks (TIN), to construct and represent demographic characteristics, where the issues of interpolation and extrapolation come into play, together with the conversion of data points into triangular irregular networks. A new method to improve the modelling quality of these TINs is described: algorithms for generating triangular irregular networks in three dimensions (3D TIN) will be developed to produce the 3D TIN used in the 3D-DM and in demographic quantitative modelling.

Chapter 5 covers visualisation of demographic data; here the concentration is on using existing GIS, scientific, and geographical visualisation techniques. Then comes uncertainty in GIS analysis, where fuzziness is considered in the methods of demographic analysis and modelling, not in errors of data observation and storage. This is followed by evolving the GIS DM, where the concern is the integration of the developed models with other datasets. The chapter concludes by looking at a possible structure for utilising these techniques in planning analysis.

Finally, Chapter 6 discusses the merits and limitations of this approach and compares it with closely related current work, summarising the research, recapitulating the main issues (results) and the study contributions obtained throughout the research, and highlighting areas of future work. Figure 1.3 shows the main partitions of this thesis and a visual overview of its contents.
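Once a TIN is constructed, an attribute value at an arbitrary location is obtained by linear interpolation within the enclosing triangle. The sketch below (vertex coordinates and the query point are made up for illustration, not taken from the study data) shows the barycentric computation that underlies any TIN-based surface, demographic or terrain:

```python
def tin_interpolate(p, tri):
    """Linear (barycentric) interpolation of an attribute value inside one
    TIN triangle: tri holds three (x, y, z) vertices, p is an (x, y) query
    point assumed to lie inside the triangle."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    l2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    l3 = 1.0 - l1 - l2
    # The barycentric weights l1, l2, l3 sum to 1 and blend the vertex values
    return l1 * z1 + l2 * z2 + l3 * z3

# Triangle sampling the plane z = x + 2y; at (0.25, 0.25) the surface is 0.75
tri = ((0, 0, 0), (1, 0, 1), (0, 1, 2))
print(tin_interpolate((0.25, 0.25), tri))  # 0.75
```

Because the interpolant is exact for planar data, a TIN reproduces any locally linear trend in the attribute; the 3D TIN algorithms developed in Chapter 4 extend this construction to volumes.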
Figure 1.3: Visual overview of the main portions of the thesis

References quoted in the thesis without a year of publication are Internet sources only; some authors do not indicate years, and these pages are always changing. This does not mean that all Internet references lack years; some do have them.
Leopard Gecko Shedding

Normal Leopard gecko shedding (synonyms: Leopard gecko molting and Leopard gecko ecdysis) is a physiological process in which the old skin is removed to give way to newer, usually larger skin. Normal Leopard gecko ecdysis depends on a combination of environmental factors, including humidity and hydration. It is normal to see Leopard gecko eating problems during the shedding process. Some Leopard geckos will also show other changes, which are completely normal.

Leopard Gecko Shedding Problems

Leopard gecko shedding problems, or so-called dysecdysis, are defined by abnormal shedding. A healthy, well-looked-after Leopard gecko is supposed to shed in one large piece. Normal shedding is age dependent and should take place periodically. Abnormal shedding is seen where the skin comes off in pieces or not at all.

Leopard gecko dysecdysis can be due to various factors. Most shedding problems occur when there is a deviation in the main environmental factors. Other contributing factors include age, Leopard gecko parasites, stress, some Leopard gecko injuries, and excessive Leopard gecko handling during the shedding period.

Leopard Gecko Dysecdysis Contributing Factors

Incorrect Leopard gecko temperatures can lead to chronic stress, retarded growth and possible Leopard gecko disease. All of this prevents a Leopard gecko from getting bigger in size, which will cause shedding problems. Humidity is probably the single factor leading to most Leopard gecko shedding problems: a low humidity, or a too-dry environment, prevents the old skin from loosening properly. A Leopard gecko hide box with a suitable moist substrate should be part of the cage.

Leopard Gecko Shedding Problem Complications

The main abnormal shedding complication is certainly the formation of constricting bands around the toes, legs and tail.
These serve as tourniquets which prevent blood flow to the distal areas, leading to necrosis and possible permanent loss of the affected extremities. Other Leopard gecko shedding problem complications include secondary skin infections and mouth infections, where the retained skin serves as a growing site for various bacteria.

Leopard Gecko Shedding Problem Treatment

Dysecdysis is not a primary disease, but rather a symptom of an underlying cause. Leopard gecko shedding problems should preferably be treated under the instructions of an experienced veterinarian. It is important to keep thorough records of your Leopard gecko's sheddings to serve as a history for the vet. The treatment of Leopard gecko shedding problems is threefold, namely to treat the clinical signs of dysecdysis, to treat the underlying cause, and to treat the complications.

The physical nature of the condition, i.e. the pieces of skin that are not sloughed naturally, is treated with short, lukewarm water baths. A soapy disinfectant can also be used to aid the process. The pieces of constricted skin around the extremities must be GENTLY removed by hand. The most common underlying causes are incorrect Leopard gecko humidity and Leopard gecko temperatures; these two environmental factors must be corrected promptly. A less common problem is Leopard gecko mites. Leopard gecko shedding problem complications are treated with any or a combination of the following: parenteral fluids (drip), antibiotics, and antiparasitics. Sometimes amputation of toes, legs or the tail is advised where necrotic constriction has taken place.
Cardiac Output: Intrinsic, Neural and Endocrine Effects

Heart – a Double Pump

The cardiovascular system, the body’s pressurized blood re-circulation system, is powered by a double pump. It is composed of two filling and two pumping chambers, the atria and the ventricles. The right ventricle, a low-pressure output system, drives blood into the thin-walled arteries of the lung, the pulmonary arteries. In the lung, carbon dioxide, a waste product of metabolism, is removed from blood and oxygen is added. Oxygenated blood returns to the left chambers of the heart through the pulmonary veins and is pumped by the left ventricle into the aorta with sufficient force to carry it to distant parts of the body.

The model for building tension by cycling of actin and myosin cross bridges is the same for cardiac muscle and skeletal muscle. However, cardiac muscle is organized somewhat differently. Unlike straight-line skeletal muscle cells with their many nuclei per cell, cardiac muscle cells contain only one, centrally located nucleus, and the cells are branched. Myosin filaments are fewer and thicker in cardiac muscle than in skeletal muscle. Cardiac muscle has larger T-tubules that do not form triads with the sarcoplasmic reticulum. Protein structures called intercalated discs tie cardiac muscle cells to each other, creating a network. Ease of movement of ions and other material within the network is aided by the presence of gap junctions between cells.

There are two types of muscle cells in the heart, contracting muscle cells and conducting muscle cells. Contracting muscle cells make up most of the atria and ventricles and generate the force and pressure required to eject blood. The conducting muscle cells contribute little to the generation of force; their function is to spread the action potentials that trigger muscle contraction over the entire heart.

Cardiac Muscle Electrical Activity

Contraction of cardiac muscle requires action potentials, as in skeletal muscle.
But, in the heart action potentials originate in the conducting muscle cells rather than at a neuron synapse. Muscle cell action potentials use the same type of chemistry as neuron and skeletal muscle action potentials. A review of action potentials can be found at “Neurons: Where Does Their Electricity Come From?” There are some differences however between heart and skeletal muscle action potentials. Heart muscle action potentials last much longer, 150-300 milliseconds compared to 1-2 milliseconds for skeletal muscle. Heart muscle action potentials display variable shape because of the participation of voltage-gated chloride [Cl–] and calcium [Ca++] channels in addition to the sodium [Na+] and potassium [K+] channels used by skeletal muscle and neurons. The time course and shape of action potentials differs among ventricle, atrium, and pacemaker muscle. Yet, in all cases it is the amount of Ca++ entering during an action potential that governs the force generated and the pace of heart pumping. This is because entering Ca++ is the trigger in heart muscle that releases stored calcium from the sarcoplasmic reticulum, which in turn initiates cross bridge formation between actin and myosin filaments and muscle shortening. The primary pacemaker cells of the human heart are located at the sinoatrial [SA] node. Other conducting muscle cells with pacemaker capability exist along the tract of conducting muscle from the SA node to the tip of the ventricles. But it is the fastest pacemaker, the SA node that normally determines heart rate. Inflowing ions during a pacemaker potential spread through muscle gap junctions to activate the tract of conducting muscle cells. Likewise, contracting muscle cells along the conducting tract are brought to threshold triggering their action potentials. The timing of the spread of action potential ions through the conducting tract and into contracting muscle brings large areas of contracting muscle to threshold simultaneously. 
Frank-Starling Law of the Heart

Pressure in the arteries depends upon the force of ventricular contraction and the amount of blood ejected. The Frank-Starling law of the heart states that the volume of blood ejected by the ventricle depends upon the volume present at the end of the filling period. This relationship between stretch of heart muscle during filling and force of contraction ensures that the amount of blood ejected by the heart matches venous return to the heart. Unlike skeletal muscle, increasing cardiac muscle length increases the sensitivity of troponin-C to Ca++. This means that cross bridges between actin and myosin occur at lower concentrations of Ca++. Stretching of cardiac muscle also increases the amount of Ca++ released from sarcoplasmic reticulum stores when the next action potential arrives.

Autonomic Control of Blood Pressure

Blood is driven through the vascular system of arteries and veins by the difference in blood pressure between the arterial and venous sides of the circulation. Mean arterial pressure, the driving force behind blood flow, is maintained at a set point of about 100 mmHg [millimeters of mercury] by a continuously active neural feedback loop.

Baroreceptor Reflex Afferent to Brain Stem

Neural sensory receptors for blood pressure, baroreceptors, are located in the walls of the aortic arch and the carotid sinus. The carotid sinus is where the common carotid bifurcates into the internal carotid artery and the external carotid artery, to the brain and face respectively. Baroreceptors are neuron afferents that respond to pressure and mechanical stretch of the arteries. They fire constantly and are particularly sensitive to the rate of change of arterial pressure. Their firing rate increases with increased arterial stretch and decreases with decreased pressure or arterial stretch.
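The ~100 mmHg set point can be connected to ordinary cuff readings with a common clinical estimate (not stated in the text above, but standard): mean arterial pressure is roughly diastolic pressure plus one third of the pulse pressure, because the heart spends about twice as long in diastole as in systole.

```python
def mean_arterial_pressure(systolic, diastolic):
    """Common clinical estimate: MAP = DBP + (SBP - DBP) / 3.
    Diastole lasts roughly twice as long as systole, so diastolic
    pressure is weighted more heavily in the time average."""
    return diastolic + (systolic - diastolic) / 3

# A reading of 130/85 corresponds to a MAP right at the 100 mmHg set point
print(mean_arterial_pressure(130, 85))  # 100.0
```

This is only an approximation to the true time-averaged pressure over the cardiac cycle, but it is the estimate most often used at the bedside.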
Baroreceptor afferent signals travel to the brain stem in cranial nerve IX, the glossopharyngeal nerve [carotid sinus afferent], and cranial nerve X, the vagus nerve [aortic arch afferent]. Their destination is the nucleus tractus solitarius in the brain stem, which is also constantly active. This brain nucleus interprets changes in the firing rates of the baroreceptor afferents and changes its own firing rate accordingly.

Sympathetic and Parasympathetic Efferent Activity

The nucleus tractus solitarius signals to the cardiovascular centers in the brain stem that control sympathetic and parasympathetic activity. Efferent sympathetic neurons synapse first in the spinal cord, then in the spinal ganglion, and finally in the heart. Efferent parasympathetic neurons travel back to the heart in the vagus nerve. The sympathetic and parasympathetic brain centers work in a coordinated fashion to move blood pressure back to the mean arterial pressure set point of about 100 mmHg. Increased sympathetic activity, induced by low pressure in the large arteries, increases heart rate and contractility of cardiac muscle. It restricts blood flow to surface arterioles and mobilizes increased venous return. The net result is increased cardiac output and a rise in pressure in the large arteries. Parasympathetic firing of the vagus nerve, in response to high pressure in the large arteries, decreases the rate and contractility of the heart. A corresponding decrease in sympathetic activity opens blood flow at the peripheral arterioles. The net result is decreased cardiac output and a fall in pressure in the large arteries.

Preservation of Blood Volume

Another component of maintaining an acceptable mean arterial pressure is preserving sufficient blood volume. Preservation of blood volume requires a response of the endocrine system to supplement the reflex response of the neural baroreceptors.
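The coordinated sympathetic and parasympathetic correction described above is a negative-feedback loop. As a toy numerical sketch of that behavior (the set point is the article's 100 mmHg figure, but the gain and step structure are illustrative, not physiological measurements):

```python
def baroreflex_step(pressure, set_point=100.0, gain=0.2):
    """One iteration of a proportional negative-feedback loop: the
    autonomic output nudges arterial pressure toward the set point by a
    fraction (gain) of the current error, mimicking how sympathetic
    drive rises when pressure is low and falls when it is high."""
    return pressure + gain * (set_point - pressure)

# Start hypotensive at 80 mmHg; repeated reflex corrections approach 100
p = 80.0
for _ in range(20):
    p = baroreflex_step(p)
print(round(p, 2))  # very close to 100
```

The real reflex is far richer (rate sensitivity, separate sympathetic and vagal arms, hormonal backup), but the convergence toward a set point is the essential control-loop idea.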
Insufficient Blood Volume

When blood volume is low there is insufficient venous return and decreased arterial pressure. To regain volume and pressure the vasculature must increase its water and salt [Na+] content. Increasing blood water and Na+ is accomplished by the kidney nephrons. When the kidney senses that Na+ is too low in the filtrate flowing into the distal tubules of the nephrons, an indicator of low blood pressure due to low volume, it secretes a molecule named renin into the blood of the peritubular capillaries. Low filtrate Na+ indicates low blood volume because water follows Na+ due to the osmotic pressure gradients created by Na+ molecules.

Renin sets in motion the serial conversion of blood molecules to form a hormone named angiotensin II. Angiotensin II has several effects. At the glomerulus it boosts the filtration rate. It decreases the diameter of the efferent arteriole, hindering blood flow out of the glomerulus and thereby further increasing pressure in the capillaries. Angiotensin II also improves reabsorption of Na+ and water at the proximal convoluted tubule and stimulates release of another hormone named aldosterone from the adrenal gland, which sits on top of the kidney.

Aldosterone works to augment the action of angiotensin II. Aldosterone’s effect is at the distal tubule and collecting duct of the nephron. There it promotes re-absorption of Na+ into the surrounding blood capillaries. Water is drawn to the blood by the osmotic gradient created by the increase in blood Na+. Blood volume and blood pressure are returned to normal, augmenting the sympathetic response of the large artery baroreceptors.

High Blood Volume

When blood expands to a greater than normal volume, pressure in the large arteries and in the atria of the heart rises. Large artery baroreceptors change their activity and the cardiovascular control centers in the brain stem respond.
In addition, mechanical stretch of the wall of the atrium causes release of a hormone from cardiac muscle cells named atrial natriuretic peptide [ANP]. ANP travels to the kidney and increases the kidney’s blood filtration rate by altering the diameter of the glomerular arterioles. With higher filtration pressure, more water and Na+ move into the nephrons’ tubules. Large amounts of Na+ in the distal tubules decrease renin secretion by the kidney. In turn, angiotensin II and aldosterone secretion shuts down. In the absence of those hormones, ANP works to decrease Na+ and water re-absorption at the distal tubules and collecting ducts. Urinary Na+ and water increase and blood volume decreases.

[Figure: Juxtaglomerular Apparatus and Glomerulus, OpenStax College, via WikiMedia]

ANP activity augments the drop in sympathetic and the increase in parasympathetic activity initiated by the baroreceptor reflex of the large arteries. ANP causes relaxation of the smooth muscle of the arterioles and venules by inhibiting basal release of sympathetic norepinephrine, which opens blood flow to the surface.

A Common Theme in Physiology

Although the heart is autonomous with regard to regular pacing of its contractions and therefore its blood output, it still depends greatly on fine tuning of its operation by the nervous, renal and endocrine systems. The interdependence of the nervous, cardiovascular and endocrine systems is a common theme in physiology.
Online Course Coming Soon: 30-Day Challenge: Craft a Plan for Learning Physiology

Margaret Thompson Reece PhD, physiologist, former Senior Scientist and Laboratory Director at academic medical centers in California, New York and Massachusetts, and former CSO at Serometrix LLC, is now CEO at Reece Biomedical Consulting LLC. Dr. Reece is passionate about helping students, online and in person, pursue careers in life sciences. Her books "Physiology: Custom-Designed Chemistry" (2012), "Inside the Closed World of the Brain" (2015) and the upcoming "Step-by-Step Guide for Study of Physiology" (2016) are written for those new to life science. Dr. Reece offers a free 30-minute "how-to-get-started" phone conference to students struggling with human anatomy and physiology. Schedule an appointment by email at DrReece@MedicalScienceNavigator.com.
How did Charles Darwin's theory of evolution evolve? A look at the state of science leading up to Darwin's voyage on the Beagle. What scientists influenced his thinking, and what did he see that others before him had not? How has Darwin's theory of descent with modification itself been modified? We'll talk with historian and Pulitzer Prize winner Edward Larson, author of Evolution: the Remarkable History of a Scientific Theory.

Edward J. Larson
*Author of Evolution: the Remarkable History of a Scientific Theory
*Winner, 1998 Pulitzer Prize in history
*Richard B. Russell professor of history and Talmadge professor of law at the University of Georgia

*Biology teacher at Jersey Village High School in Houston
*Outstanding Biology Teacher, National Association of Biology Teachers, 1993
*Past president, Texas Association of Biology Teachers
Wanganui Intermediate School had an array of student science projects on display for its science fair last week.

Twelve-year-old Taylor Watson completed a project on how boats stay afloat. "I went on a cruise with my family last year and we went to Australia, New Caledonia and Vanuatu," he said. "I wondered about how the ship stayed afloat and I was going to do some research, but I was too busy having a good time." Taylor says he thought about it again when he needed to do the assignment and undertook some practical research as well as searching online. "I made some boats out of clay but they all sank, so I tried making them with cardboard and covered them with duct tape and added twenty-cent coins to see how many I could put in before they sank. Then I tried floating them in different water temperatures, and they sank faster when the water was warmer."

Teacher Colin Withers says the group of students was given an assignment to complete an investigation using a number of variables. "They were asked to do repeat trials and keep a log book of their observations, then write a conclusion analysis and use statistics," he said. There was a vast range of experiments, including one which tested gender preference for different coloured jelly beans. Mr Withers said the students not only increased their range of scientific knowledge, they also learned perseverance and self-motivation.

Some of the exhibits will be submitted for the regional science fair, which will be held at Rutherford Intermediate Hall on September 26.
January 14, 2009

Overweight Star Formation Through 'Stellar Cannibalism'

Researchers have discovered that the mysterious overweight stars known as blue stragglers are the result of 'stellar cannibalism', where plasma is gradually pulled from one star to another to form a massive, unusually hot star that appears younger than it is. The process takes place in binary stars - star systems consisting of two stars orbiting around their common centre of mass. This helps to resolve a long-standing mystery in stellar evolution.

The research, which is part-funded by the UK's Science and Technology Facilities Council (STFC) and carried out by scientists at Southampton University and McMaster University in Canada, is published in the journal Nature on Thursday 15 January.

Blue stragglers are found throughout the Universe in globular clusters - collections of about 100,000 stars, tightly bound by gravity. According to conventional theories, the massive blue stragglers found in these clusters should have died long ago because all stars in a cluster are born at the same time and should therefore be at a similar phase. These massive rogue stars, however, appear to be much younger than the other stars and are found in virtually every observed cluster.

Dr Christian Knigge from Southampton University, who led the study, comments: "The origin of blue stragglers has been a long-standing mystery. The only thing that was clear is that at least two stars must be involved in the creation of every single blue straggler, because isolated stars this massive simply should not exist in these clusters."

Professor Alison Sills from McMaster University explains further: "We've known of these stellar anomalies for 55 years now. Over time two main theories have emerged: that blue stragglers were created through collisions with other stars; or that one star in a binary system was 'reborn' by pulling matter off its companion."
The researchers looked at blue stragglers in 56 globular clusters. They found that the total number of blue stragglers in a given cluster did not correlate with the predicted collision rate - dispelling the theory that blue stragglers are created through collisions with other stars. They did, however, discover a connection between the total mass contained in the core of the globular cluster and the number of blue stragglers observed within it. Since more massive cores also contain more binary stars, they were able to infer a relationship between blue stragglers and binaries in globular clusters. They also showed that this conclusion is supported by preliminary observations that directly measured the abundance of binary stars in cluster cores. All of this points to "stellar cannibalism" as the primary mechanism for blue straggler formation.

Dr Knigge says: "This is the strongest and most direct evidence to date that most blue stragglers, even those found in the cluster cores, are the offspring of two binary stars. In our future work we will want to determine whether the binary parents of blue stragglers evolve mostly in isolation, or whether dynamical encounters with other stars in the clusters are required somewhere along the line in order to explain our results."

This discovery comes as the world celebrates the International Year of Astronomy in 2009.

Image Caption: This picture compares a ground view (left) of globular cluster 47 Tucana and a Hubble Space Telescope shot (right) of the same thing. Notice the blue stragglers, which are circled in (somewhat difficult to see) yellow. NASA

On the Net:
- Science and Technology Facilities Council
- Southampton University
- McMaster University
- International Year of Astronomy in 2009
03/01/2011 03:00:00 AM EST
Tuesday March 1, 2011

We spend a lot of time and money making our homes more energy efficient. Whether adding insulation, upgrading windows, replacing incandescent light bulbs, or replacing appliances, efforts we make to use less energy save us money and help the environment. But what about where we live?

The U.S. Environmental Protection Agency (EPA) has just released a report on "location efficiency" -- the idea that where we live has an impact on our energy consumption. The findings are clear and profound.

In conventional suburban development, an average American home uses 108 million BTUs (British Thermal Units -- a measure of energy consumption) per year for operation (heating, cooling, lighting, etc.). But that same house uses 132 million BTUs per year in transportation energy use -- for a total of 240 million BTU/year. In other words, for that average home, 55 percent of its total energy use is for transportation, and 45 percent is for operations.

Now, if the house is located in a "transit-oriented development" (a pedestrian-friendly place where residents can walk to restaurants, basic services, and public transit), the transportation energy use drops to 39 million BTUs per year -- just 26 percent of that home's total annual energy use of 147 million BTU/year.

The study was conducted by Jonathan Rose Companies, which has long championed "Smart Growth" and affordable housing. You can read about this new study on

The authors of the report examined the relative energy benefits of energy-efficient design and location for homes. They show that a family living in a conventional (non-energy-efficient) home in a transit-oriented neighborhood will spend a lot less on total energy than a family living in a 20 percent more energy-efficient home (built to Energy Star standards) in a conventional suburban development.
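The report's arithmetic is easy to verify. The sketch below (Python, using only the BTU figures quoted above) computes the transportation share of each home's total annual energy use:

```python
def transport_share(operations_btu, transport_btu):
    """Return (total, transportation share in percent).

    Inputs are annual energy use in millions of BTUs.
    """
    total = operations_btu + transport_btu
    return total, 100 * transport_btu / total

# Conventional suburban home: 108 MBTU operations, 132 MBTU transportation
conv_total, conv_share = transport_share(108, 132)

# Same home in a transit-oriented development: transportation drops to 39 MBTU
tod_total, tod_share = transport_share(108, 39)

print(f"Conventional: {conv_total} MBTU/yr, {conv_share:.1f}% transportation")
print(f"Transit-oriented: {tod_total} MBTU/yr, {tod_share:.1f}% transportation")
```

The shares come out to 55.0% and about 26.5%, matching the totals of 240 and 147 million BTU/year quoted in the report (which rounds the second figure down to 26 percent).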
Our own publication, Environmental Building News, drew similar conclusions for commercial office buildings in the article "Driving to Green Buildings: The Transportation Energy Intensity of Buildings," in September 2007. In that article, we reported that for a typical American office building, 30 percent more energy is consumed getting workers to and from the building than the building itself uses for operating -- and if the office building is built to modern energy codes (ASHRAE 90.1-2004), the transportation energy use is 2.3 times the operating energy use.

All this is significant, as pointed out in an EPA press release, because buildings and transportation together account for 70 percent of U.S. energy consumption and 62 percent of greenhouse gas emissions. Most statistics about U.S. energy use by buildings, including those quoted by Vice President Al Gore and Architecture 2030 founder Ed Mazria, FAIA, consider only building operations, not how we get to and from those buildings. As the Jonathan Rose Companies study for EPA and our own research in 2007 point out, where we build can be even more important than how we build.

It's little surprise, then -- especially with gasoline prices trending upward -- that real estate values in transit-oriented areas have been holding their own or rising, while real estate values in automobile-dependent suburbia have been falling. In a 2010 report, "Foreclosing the Dream: How America's Housing Crisis Is Changing Our Cities and Suburbs," William Lucy, Ph.D., a professor of urban and environmental planning at the University of Virginia School of Architecture, shows that foreclosures during 2008 and 2009 have been occurring more frequently in the car-dependent outer suburbs than in central cities and closer-in suburbs. "Location is more important than ever, and how location is interpreted has changed," argues Lucy.
He believes that there is a desire among homeowners for more convenient locations, smaller units, and less driving hassle. These factors are all affecting property values.

What's the old saying? "Fool me once, shame on you; fool me twice, shame on me." Americans, like automakers, are waking up to the reality of unstable and rising gasoline prices, and this is influencing their choices in home buying. That's a good thing.

Alex Wilson is the founder of BuildingGreen, Inc. in Brattleboro. Archives of this column can be found at www.BuildingGreen.com (click on "Blogs" then "Energy Solutions").
While many people in Superstorm Sandy's path readied themselves for the long haul indoors, avid birdwatchers braved blustering winds to catch a glimpse of some uncommon species.

Sandy's 40- to 50-mph gusts brought more than just heavy rain to Central New York. A species from a remote North Atlantic island and one from the high arctic were spotted Monday and Tuesday at Cayuga Lake.

While many migrating birds tend to stay put during rough weather, those normally found over deep-ocean waters are often displaced during a storm, said Brian Sullivan, eBird project leader at Cornell University's Lab of Ornithology.

"Think of the storm as a sponge moving across the south Atlantic, and toward landfall it picks up all the birds that can't get out of its way," Sullivan said. "It's a really intense circulation."

Strong winds scattered Leach's Storm-Petrels across Sandy's path in Eastern Pennsylvania, New Jersey and New York. The small seabirds breed on remote islands in the cold, northern areas of the Atlantic and Pacific oceans and are not usually found on land, Sullivan said. One was seen near Cayuga Lake.

A rarer sighting was also reported on Cayuga Lake: a Ross's Gull, a pinkish, dove-like bird rarely seen outside the arctic, was spotted in the area. "This one is an odd one. This bird doesn't make it to the lower 48 every year," Sullivan said, referring to its displacement to the Eastern seaboard. Ross's Gulls breed in the high arctic of North America and northeast Siberia.

More common sightings of uncommon species scattered across the storm's swath include red phalaropes and pomarine jaegers, also found in arctic regions.

"It's kind of an exciting time for birders, but also a dangerous one," Sullivan said. "But it's a little bit sad to see these birds getting transported far from their normal areas."

Sullivan and his team at Cornell study the relation of birds to strong weather events, particularly storms.
Reporting your finds

Any unusual bird sightings can be reported to an online directory at www.ebird.org, which gathers 3 million to 5 million bird observations each month.
Once thought to be eradicated, whooping cough is making a comeback, up as much as 400% in some states. New cases are being reported in Florida, with an estimated 274 cases this year, 52 in Central Florida.

Whooping cough, properly known as pertussis, was held in check by a widely given vaccine, but immunization rates began a sharp drop around 1980 as people forgot about the potentially fatal illness. States in the Northwest like Washington are seeing thousands of new cases, along with New York, Idaho and Wisconsin.

The illness is most common in infants and toddlers, but can also affect adults, as is now the case. Most people are no longer immunized against whooping cough, and even fewer are aware that a booster shot is needed once a person reaches 21 years of age.

Common symptoms include cold-like signs, persistent coughing and wheezing, a high temperature and occasionally lethargy. The illness can be fatal, especially in infants or those with weak immune systems.
Oct 9, 2012

The time has come to let the thermonuclear Sun theory go.

Everything has a natural explanation. The Moon is not a god, but a great rock, and the Sun a hot rock. — Anaxagoras, Greek philosopher circa 550 BCE

Hypothetically, how does the Sun produce heat and light enough to sustain life on our planet at a mean distance of 149,476,000 kilometers? It is apparently not a hot rock, so what is it?

According to spectrographic analysis, the Sun is composed primarily of hydrogen gas (71%), with 27% helium and the remainder thought to be minute percentages of oxygen, nitrogen, sulfur, carbon, and six other elements. Although every element on Earth can be seen in a spectrogram of the Sun, those 12 make up 99.9% of its mass. The Sun is 1,390,000 kilometers in diameter, with an approximate mass of 1.98 × 10^30 kilograms, although that figure is speculative. The temperature measured at its surface is 5575 Celsius and is estimated by conventional heliophysicists to be as high as 15,600,000 Celsius in its core.

As standard models suggest, the Sun must generate outward radiation pressure or gravity would compress it into a relatively tiny ball. The theory states that an energy source must exist inside the Sun, acting as a counterforce to gravitational contraction. The thermonuclear Sun came about because it seemed to Sir Arthur Eddington, in his classic work The Internal Constitution of the Stars, that only nuclear fusion could produce radiative energy sufficient to prevent the Sun from collapsing "under its own weight". Since the processes by which scientists describe those fusion reactions were not mathematically modeled until years after Eddington's theory, it was more a statement of faith at the time than a result of experimental research.
Supposedly, when the Sun condensed out of the nebular cloud that was its nursery, the gases were compressed by gravity without losing much heat to space, so that the core could reach a temperature greater than 10 million Celsius. At that temperature, hydrogen atoms are thought to be disrupted into individual protons and electrons, leaving the protons free to collide with one another. It is these initial proton collisions, it is said, that are the first step in a reaction called the proton-proton (p-p) chain.

According to theory, when protons collide at those high temperatures, they are moving fast enough to fuse into other particles: deuterium, a positron and a neutrino. Deuterium is a proton-neutron combination, while a positron is a positively charged electron. Neutrinos are similar to electrons, except they do not carry an electric charge and are almost massless. Being neutral, they are not affected by the electromagnetic forces that affect electrons.

The second stage in the p-p reaction is the formation of a helium-3 nucleus when the deuterium captures another proton, while at the same time emitting a gamma ray. A helium-4 nucleus and two neutrinos are the end results of the reaction, although it can follow one of many different reaction paths.

In reality, as Electric Universe theorist Wal Thornhill points out, stars reside within plasma sheaths perhaps as great as a light-day in extent. They are the borders between the electrical influence of the stars and the currents flowing through the galaxy. "The Sun's plasma sheath, or 'heliosphere' is about 100 times more distant than the Earth is from the Sun. To give an idea of the immensity of the heliosphere, all of the stars in the Milky Way could fit inside a sphere encompassed by the orbit of Pluto. The Sun's heliosphere could accommodate the stars from 8 Milky Ways!
It is clear from the behavior of its relatively cool photosphere that the Sun is an anode, or positively charged electrode, in a galactic discharge."

As Donald Scott describes, the Sun is controlled electronically via a transistor-like effect. This explains several phenomena not included in thermonuclear theory:

1. Why coronal hotspots appear in the lower corona above sunspots.
2. Why the corona changes shape from times of active to quiet Sun.
3. The solar wind's flow rate depends on the voltage (energy) rise from the Sun's interior up to the photospheric tufts.
4. The initial velocity (and temperature) of the solar wind ions depends on the voltage (energy) drop from the tufts down to the lower corona.
5. That transistor action can cut off the solar wind flow.

The stars receive their power from outside, not inside. Any nuclear reactions are taking place on the surface of the Sun and not in its core. The solar wind is an electric current connecting the Sun with its family of planets and with its galactic clan, so the 90-year-old theory of fusion firing the solar furnace needs to be reexamined.
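Whichever model one favors, the mass-energy bookkeeping behind the standard p-p chain described above is straightforward to check. A minimal sketch (Python; the atomic masses and the u-to-MeV conversion are standard reference values) computes the mass defect when four hydrogen atoms end up as one helium-4 atom:

```python
# Atomic masses in unified atomic mass units (u); using atomic rather than
# bare nuclear masses keeps the electron/positron bookkeeping consistent.
M_H1 = 1.007825    # hydrogen-1 atom
M_HE4 = 4.002602   # helium-4 atom
U_TO_MEV = 931.494 # energy equivalent of 1 u, in MeV

mass_defect = 4 * M_H1 - M_HE4       # mass lost converting 4 H into He-4
energy_mev = mass_defect * U_TO_MEV  # E = mc^2
fraction = mass_defect / (4 * M_H1)  # fraction of the input mass converted

print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy released per He-4: {energy_mev:.2f} MeV")
print(f"Fraction of mass converted: {fraction:.4%}")
```

The result is about 26.7 MeV per helium-4 atom produced, with roughly 0.7% of the original hydrogen mass converted to energy; some of that is carried away by the neutrinos rather than radiated as light.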
This interesting social interaction speech by Jan Chipchase characterizes the effects of technology on human behavior. It draws on time spent researching and conducting experiments on this phenomenon in everyday life. Probably one of the most visible occurrences is people keeping their heads buried in their phones, even while performing other tasks.

The keynote discusses nine trends in technology that will shape human interaction. As technological devices become smaller and less obtrusive, they will be more seamless for humans to use and will no longer interfere in the way they do now. For instance, if a person is wearing a large headset device, it may deter others from interacting with that person. This social interaction speech argues that future technologies will need to focus on limiting such negative social interference.
Specific, well-designed goals of an exercise program should drive the interventions, because specific interventions will likely produce specific outcomes. The most common types of exercise are aerobic, strength training, and flexibility regimens. However, given the heterogeneity of cancer types, a one-size-fits-all approach to exercise and cancer is unlikely to be effective. For example, for patients in whom cardiac output may have been compromised, a carefully planned exercise program with the goal of improving cardio-respiratory fitness may be in order. However, another group of patients may need strength training to combat the effects of muscle wasting and inactivity.

The healthcare provider and patient should determine whether the goal is to
- Alleviate symptoms.
- Improve functional capacity.
- Restore muscle function.

The goals for the target population will define the exercise prescription, which has five components (Whaley, Brubacker, & Otto, 2006).
- Mode: What type of exercise will be performed?
- Intensity: How strenuously will the patient be asked to perform the exercise?
- Duration: How long will the exercise sessions last, or how many repetitions of a certain exercise are required?
- Frequency: How many days per week will the patient exercise?
- Progression: What is the point at which optimal benefit is achieved at one level of exercise, requiring increased weights, repetitions, or intensity to achieve further results?

Another factor to consider in an exercise prescription is whether the patient will be supervised or unsupervised while exercising. Although many published guidelines, such as those from the Centers for Disease Control and Prevention, recommend exercise programs for healthy people, adapting guidelines for people with cancer may pose some challenges.
For example, patients who are at risk for infection, such as those with neutropenia, should not share equipment or may not be well enough to attend a public exercise class; patients with nontunneled vascular access devices may not be able to participate in water exercise activities.

Testing patients for their level of exercise tolerance prior to initiating an exercise program will increase the likelihood of success. Testing may consist of health questionnaires, lab tests, physical examinations, aerobic capacity, and muscular strength. The goal of the program will greatly determine the direction of the testing. For example, if the goal is to improve muscle strength and stability, then the cardio-respiratory testing may not be as vigorous as it might be for programs with alternate goals. Testing should be modified for patients for whom metastatic disease, symptomatology, or other cancer-related conditions may prevent an accurate result. The American College of Sports Medicine's Guidelines for Exercise Testing and Prescription is one resource that may be used to develop testing plans (Whaley et al., 2006).

A beautifully planned exercise program will not be effective if a patient is unable or unwilling to adhere to it. Even for healthy people, adhering to an exercise regimen can be a challenge; adding the challenges related to a multifaceted cancer diagnosis can make adherence all the more difficult. Adherence may be influenced by symptoms such as nausea or fatigue. Other factors that may influence adherence may not be related to cancer, such as time to perform the activities, transportation to and from a supervised program, or simply lack of interest.

Monitoring patient adherence is a particularly challenging task. Because a simple record of time spent in the exercise session does not necessarily represent adequate performance of the exercise prescription, not surprisingly, many studies do not report adherence rates, or the data are confounding.
Because so many factors can affect adherence, it is important to provide as many reinforcers as possible to encourage patient participation. Ideas include
- Encouraging partners to exercise together
- Ensuring that goals are realistic and achievable
- Varying the exercise routine to prevent boredom
- Identifying an exercise plan that is enjoyable to the patient
- Encouraging journal entries of exercise completed to be able to see efforts and improvements
- Maintaining weekly contact with the patient (if the program is home-based)
- Helping patients identify and overcome barriers to exercising
- Identifying opportunities for incorporating exercise into daily activities (Adkins, 2009; Hacker, 2009).

Evaluation of improvement in an exercise prescription is again closely related to the exercise goals. Some potential evaluation end points include

Biologic end points
- Body fat
- Heart rate
- Blood pressure
- Body composition
- Distance walked or steps climbed
- Strength in legs, arms, hands
- Aerobic capacity
- Time needed to perform activities of daily living

Symptoms: presence and/or severity of
- Sleep disturbances

Quality-of-life perceptions, including
- Life satisfaction
- Role performance
- Cognitive functioning
- Social functioning
- Satisfaction with overall health

For more information, read the Centers for Disease Control and Prevention Exercise Guidelines.

References

Adkins, B.W. (2009). Maximizing exercise in breast cancer survivors. Clinical Journal of Oncology Nursing, 13, 695–700. doi: 10.1188/09.CJON.695-700

Hacker, E. (2009). Exercise and quality of life: Strengthening the connections. Clinical Journal of Oncology Nursing, 13, 31–39. doi: 10.1188/09.CJON.31-39

Whaley, M., Brubacker, P., & Otto, R. (Eds). (2006). American College of Sports Medicine's guidelines for exercise testing and prescription (7th ed.). Philadelphia, PA: Lippincott, Williams and Wilkins.
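The five-component prescription described earlier maps naturally onto a simple record structure. The sketch below (Python; the field names follow the five components, but the example values are purely illustrative and not drawn from any cited guideline) shows one way a clinical program might encode a prescription for later review:

```python
from dataclasses import dataclass

@dataclass
class ExercisePrescription:
    """The five components of an exercise prescription (per Whaley et al., 2006)."""
    mode: str         # what type of exercise will be performed
    intensity: str    # how strenuously the patient will exercise
    duration: str     # session length or repetitions required
    frequency: int    # days per week
    progression: str  # criterion for advancing to the next level

# Hypothetical example for a strength-focused goal
rx = ExercisePrescription(
    mode="resistance bands, major muscle groups",
    intensity="moderate (patient can still speak in full sentences)",
    duration="2 sets of 10 repetitions",
    frequency=3,
    progression="add a third set once 2x10 is comfortable for two weeks",
)

# Print the prescription one component per line
for field, value in vars(rx).items():
    print(f"{field:>12}: {value}")
```

Keeping the five components as explicit named fields makes it easy to check that no component was left unspecified when the prescription is written.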
The spiral galaxy NGC 4402 is currently falling towards the centre of the Virgo cluster (downwards in this image). The bowed and truncated disk, and the concentration of dust and gas to one side of the galaxy, are all indicators that ram pressure stripping is forcing gas out of the galaxy. Credit: H. Crowl (Yale University) and WIYN/NOAO/AURA/NSF

Galaxy clusters are permeated by hot, X-ray emitting gas known as the intra-cluster medium. As individual galaxies move within such clusters, they experience this intra-cluster gas as a 'wind' – much like the wind experienced by a moving bicyclist, even on a still day. 'Ram pressure stripping' occurs if this wind is strong enough to overcome the gravitational potential of the galaxy and remove the gas contained within it.

Evidence for ram pressure stripping can be found in many galaxy clusters. For example, NGC 4402 (right), which is currently falling into the Virgo cluster, shows several clear indicators that ram pressure stripping is at work:

- The disk of dust and gas appears bowed. This indicates that the galaxy is having trouble holding onto the loosely bound dust and gas in the outer regions of the disk against the pressure of the 'wind'.
- The stellar disk (blue) appears to extend well beyond the star-forming disk of dust and gas. This observation suggests that the loosely bound dust and gas in the outer regions of the disk has been stripped from the galaxy after the formation of these stars.
- Streamers of dust and gas can be seen trailing behind the motion of the galaxy, obscuring and reddening the stars behind (top of the galaxy in the image). At the same time, the 'wind' has pushed the dust and gas that would normally be found ahead of the motion of the galaxy up into the galaxy itself. This has revealed bright blue stars along the leading edge of the galaxy (bottom of the galaxy in the image).

The result of ram pressure stripping is a galaxy which contains very little cold gas.
This effectively halts star formation in the galaxy, supporting the belief that ram pressure stripping could be one of the processes responsible for the morphology-density relation.
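The condition sketched above is commonly quantified with the Gunn & Gott (1972) criterion: gas is stripped where the ram pressure, the ICM density times the square of the galaxy's speed, exceeds the gravitational restoring force per unit area of the disk, roughly 2πG times the product of the stellar and gas surface densities. The sketch below (Python) evaluates both sides for illustrative fiducial values; the numbers are typical textbook magnitudes, not measurements of NGC 4402:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_P = 1.673e-27         # proton mass, kg
MSUN_PER_PC2 = 2.09e-3  # 1 solar mass per square parsec, in kg/m^2

# Illustrative intra-cluster medium: n ~ 1e-3 protons per cm^3, v ~ 1000 km/s
rho_icm = 1e-3 * 1e6 * M_P   # kg/m^3 (convert cm^-3 to m^-3)
v = 1.0e6                    # galaxy speed through the ICM, m/s
ram_pressure = rho_icm * v**2

# Illustrative outer-disk surface densities
sigma_star = 50 * MSUN_PER_PC2  # stellar surface density
sigma_gas = 10 * MSUN_PER_PC2   # cold gas surface density
restoring = 2 * math.pi * G * sigma_star * sigma_gas

print(f"Ram pressure:            {ram_pressure:.2e} Pa")
print(f"Restoring force per area: {restoring:.2e} Pa")
print("Gas stripped" if ram_pressure > restoring else "Gas retained")
```

With these fiducial numbers the ram pressure modestly exceeds the restoring force, which is consistent with the picture above: the loosely bound outer disk is stripped first, while the denser inner disk holds on longer.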
Denali, also known as Mount McKinley, in Alaska, is the tallest mountain in North America, at 6,189 metres (many sources incorrectly give a height of 6,194 metres, which was based on a 1956 survey that was superseded in 1989). In fact, measured from its base on a 2000-foot plateau in Denali National Park to its summit, it is taller than Mount Everest; Everest rests atop the Tibetan Plateau, and therefore its base is already at an altitude of 5,200 metres. Along with Mauna Kea, Denali can make a plausible claim to be the world's tallest mountain (measured from base to summit); Mauna Kea is larger, but the greater part of it is underwater. On land, Denali has no equal as a free-standing mountain.

Why not Mount McKinley? Wikipedia has decided to use the name more commonly known around the world, which was officially given to the mountain in 1896 in recognition of William McKinley, who was then the Governor of Ohio, and went on to become the 25th President of the United States. The name was given by William Dickey, a gold prospector who was part of the Cook Inlet Stampede in 1896, and who wrote an article for the New York Sun describing the mountain and claiming it as North America's tallest. His reasoning for the name (as then-Governor McKinley seems never to have visited Alaska) had more to do with scoring points off his prospecting rivals than wanting to immortalize McKinley: "When later asked why he named the mountain after McKinley, Dickey replied that the verbal bludgeoning he had received from free silver partisans had inspired him to retaliate with the name of the gold standard champion." - Terris Moore, Mt. McKinley: The Pioneer Climbs

The name was always controversial - the native Athabaskan people already had a much better and more evocative name, Denali, which means "The Great One", and after Hudson Stuck led the first expedition to reach the summit in 1913 he published his memoir of the ascent under the title The Ascent of Denali, making a heartfelt plea for this name in his preface: "Forefront in this book, because forefront in the author's heart and desire, must stand a plea for the restoration to the greatest mountain in North America of its immemorial native name."

In 1980 the State of Alaska officially adopted Denali as the mountain's name, and renamed Mount McKinley National Park to Denali National Park. However, the United States Board on Geographic Names has not recognized the change, maintaining the name McKinley due to its wide recognition and in order to distinguish the mountain easily from the National Park on maps. The controversy continues, with different sources insisting on using one name or the other, but it seems certain that over time, as the name Denali becomes more widely known, the native name will be adopted, as it has been for the majority of great mountains around the world.

Although Denali is much smaller than the "8000ers", the world's highest and most famous peaks, it is not by any means easy to climb. As previously mentioned, in real terms it is taller than Everest, in the sense that climbers will have much further to go from the base. In addition, it is among the coldest mountains in the world (only the Antarctic peaks have a lower average summit temperature), with a recorded temperature extreme of -100° Fahrenheit (-73.3° Celsius).
Due to its latitude (its exact location is 63° 07' N, 151° 01' W) there is also much less oxygen available than there would be at a comparable height closer to the equator, because the troposphere is thinner at the Earth's poles; therefore, it carries as high a risk of altitude sickness as any of the bigger Himalayan peaks.

Denali has been subject to massive glaciation and ice erosion, and is still the source of five major glaciers - the Peters Glacier on the northwest, the Muldrow Glacier to the northeast, the Traleika Glacier on the east, the Ruth Glacier to the southeast and the Kahiltna Glacier to the southwest. These glaciers form part of most of the major routes up the mountain, and also account for a lot of its dangers, as cracks and fissures may appear and disappear in the ice, trapping unwary climbers. It has two major summits: the South Summit is the main one, while the North Summit, which rises to 5,934 metres, is rarely climbed, as it is reached via a different route.

The first attempt on Denali was in 1903 by James Wickersham, a district judge for Alaska, by an exceptionally difficult and dangerous route along the Peters Glacier and the North Face (which was subsequently named the Wickersham Wall). He didn't make it to the summit, but he didn't die either, which, considering the avalanche danger on that route, can be considered a victory. This route wasn't successfully climbed until 1963.

Frederick Cook, a famous explorer, tried again in 1906. Cook, who was embroiled for much of his life in a bitter feud with rival explorer Robert Peary over which of them reached the North Pole first, claimed to have reached Denali's summit, but his evidence was highly suspicious and is now considered to have been refuted. Incidentally, the North Pole issue between himself and Peary has also since been resolved: both of their stories have been completely discredited.
Denali was almost summited in 1910 by a group of local Alaskan men collectively known as the Sourdough Expedition. They had absolutely no climbing experience and practically no equipment, and spent three months on the mountain. They claimed afterwards to have reached both summits, which has since been proven false, but two of their members may well have reached the North Summit, reportedly carrying "a bag of doughnuts, a thermos of cocoa each and a 14-foot spruce pole" (Wikipedia).

Yet another attempt was made in 1912, when the Parker-Browne Expedition was forced to turn back within a few hundred metres of the summit due to the weather. As it turned out, this was a blessing in disguise - a few hours after they descended, an earthquake shattered the glacier that their route had followed, and it's likely that if they had pressed on to the summit and then returned, they would have been caught in the destruction and killed.

Finally, in 1913, Hudson Stuck led a successful team to the summit - the first climber to reach the top was Walter Harper, a native Alaskan. They used a route up the Muldrow Glacier which is still frequently used in modern times. They were also able to establish that there was a large pole near the North Summit, which seemed to verify that the Sourdough Expedition really did make it that far.

Although still dangerous, Denali doesn't claim as many lives as it used to, and the standard route via the West Buttress, which was worked out in 1951 after Bradford Washburn studied aerial photographs of the entire massif, is relatively safe and free of technical difficulty. There have been 96 deaths on the mountain as of 2006; since more than 1,000 people come to climb the mountain every year, this isn't too bad, and the official fatality rate (deaths compared to successful summits) is approximately 3%, lower than almost all of the Himalayan mountains.
Fatalities have mostly been due to injuries sustained in falls, and these have been greatly reduced by the introduction of a registration and screening system for climbers in 1995. However, Denali should never be climbed casually or taken for granted - as recently as 1967, 7 climbers were killed in a storm near the summit.

"The fact that the West Buttress route is not technically difficult should not obscure the need to plan for extreme survival situations. Of course, some climbers manage to get up and down in perfectly nice, but rare period of weather; when back home, they encourage others to climb this 'easy walkup' of a mountain. Little do they realize that it was only by sheer luck they weren't trying to keep their tent up in the middle of the night in a 60mph wind at 40° below zero, with boots on and ice axe ready in case the tent suddenly imploded." — Peter H. Hackett, M.D., from Surviving Denali by Jonathan Waterman

The above statistics only apply to the relatively mild climbing months of the summer season. In winter, from November to April, Denali becomes one of the harshest and most extreme environments on the planet, presenting massive dangers to even the most experienced of climbers. The wind can reach 100mph or more when the jet stream descends the sides of the mountain and funnels through Denali Pass and other narrow areas at a truly freakish speed. Add to this the fact that temperatures range from -30F to -70F and it's no surprise that many people have been killed trying to climb Denali in winter.

References and Further Reading:
The Story of McKinley: http://www.sitnews.net/JuneAllen/MtMckinley/091403_mt_mckinley.html
Fatality Statistics: http://www.ncbi.nlm.nih.gov/pubmed/18331224
Denali National Park: http://en.wikipedia.org/wiki/Denali_National_Park_and_Preserve
In the years after the War of 1812, the United States initiated a series of banking reforms meant to control the ebb and flow of the economy in order to stave off the slumps that occur every few decades. As Martin Van Buren began his presidency, the financial bigwigs were in an uproar over rumors of an oncoming financial stagnation. A number of ideas were advanced for solving this problem, but the economy continued to decline during the administration. In addition to a trade deficit, banks were offering credit to all comers and land speculation was out of control. Hard currency was leaving the country at high rates while businessmen were handing out credit in their own personal scrip. Throughout the 1830s, land offices were experiencing amazing growth in business, and fly-by-night banking operations were shooting up like dot-coms at a Stanford graduation party. In 1836, land purchases increased ten times over the previous year. This led Andrew Jackson to issue the Specie Circular, an Executive Order forcing all land offices to accept payment only in gold or silver. Since most banks were operating on the credit of other investors, and had no hard backing of their own, the level of borrowing declined precipitously. Numerous loans defaulted and the level of land purchasing plummeted. Soon scores of financial houses collapsed, taking with them the fortunes of thousands and leading to a general collapse of prices and a shockwave throughout the economy. The banking industry, and the economy in general, would not recover for almost six years.
Overpopulation and poor economic and living conditions on the island of Puerto Rico fueled the Puerto Rican northern migration after WW II. How else could these sons and daughters of “Borinquen” justify leaving their homes, families and birthplace if not to provide a better life for their families? Yes, they left their heart and soul in Puerto Rico, but not their pride and dignity. It was not a pot of gold or a handout that they came for; rather, it was something that their beloved homeland could not provide: a job. They found jobs upon arrival, but they also found poverty, discrimination (ethnic and racial), poor housing conditions and, looming on the horizon, old man winter and, hopefully, a longer life. In 1940 the probable life expectancy in Puerto Rico was 46.10 years; in the continental United States, it was 62.9 years. Although New York City received the most attention, New Jersey, Pennsylvania, New York, Delaware, Maryland, Massachusetts, New Hampshire and Michigan hired many seasonal farm contract workers. Derogatorily speaking, some of these men were referred to as “tomateros” (tomato pickers). It is estimated that these so-called tomateros sent or returned home with over a million dollars per season. Amen for tomatoes. When you add the funds that were sent by the dishwashers, porters and factory workers, you can understand why the Puerto Rican government encouraged its citizens, especially those on the lower economic ladder, to migrate up north to find work. American companies were eager to lure Puerto Rican men and women for the low-paying jobs, especially in the agriculture, hotel and restaurant industries, where not speaking English was not a problem. Once the contracts expired, no one was required to return to Puerto Rico, and many did not. Many young women trained to work as domestics and were sent north to work in private homes, also under contract. To this day, some still keep in touch with the children that they helped rear.
They even danced at their weddings. However, there were those who left to find work on their own, or with the promise of a job by relatives already working in la factoría/fábrica. Many women worked in the garment industry or at home as sewing machine operators. You knew who they were because they were the ones that would interrupt a stickball game because you would want to help them (you had better) carry the shopping bags full of items for them to sew at home. (Piece work.) They were also the only ones with a commercial Singer sewing machine at home, and of course, they were the ones you would go to when you needed stitching done. Because they did not forget those who stayed behind, these heroes and heroines would pool their money and send for more relatives. This cycle would continue until all families were united and the children attending school, since the children's education was a must. The passing of the torch for a better tomorrow began with the children's education. Some parents would work day and night just to make it happen, because it did not take them long to realize that the streets of New York were not paved with gold. It took hard work to accomplish their dreams, and work hard they did.

Perhaps the most famous ship that participated in the Puerto Rican migration would have to be the SS Marine Tiger. It was also one of the largest, capable of carrying more than one thousand passengers per voyage - more if you count the stowaways. The fact that the SS Marine Tiger was one of the last passenger ships to operate between San Juan and New York probably had much to do with its name retention. Also, new arrivals, regardless of what ship they arrived on, were referred to as “Ese es un Marine Tiger.” Several other participating steamships were: Borinquen, Brazos/San Lorenzo, Carabobo, Caracas, Carolina, Coamo, Ponce, Porto Rico, Puerto Rico, and San Jacinto.
Some were also classified as cargo-passenger ships, which helps explain why the early arrivals were associated with the term “banana boat passengers.” These passengers, and those that followed, would have one thing in common: they did not forget those who stayed behind in Puerto Rico. Growing up on Hopkins Street in my old neighborhood, the Williamsburg section of Brooklyn, New York, it was a common ritual that on any given Saturday morning, a trip for my parents and others to the post office on Debevoise Street between Graham Avenue and Humboldt Street was a must, for the sole purpose of buying a money order (giro) or sending a package (paquete de ropa) of clothes, or sometimes both, back home to help feed and dress those left behind. These are the same people that I would see on their way to night school (at JHS 148) with their composition notebooks to learn how to speak and write in English. Today, I look back to my childhood and refer to it as the good old days. Were they, and for whom? I was so busy going to school, playing stickball, kicking the can, three steps to the King, Johnny on the pony and all the other games that we New York Ricans would play in the streets and back yards of Nueva yorl. (sic) How could they have been the good old days when our parents were so busy struggling to make ends meet? What do we really know about yesterday? What do we really know about the sacrifices that were made by our parents on our behalf? To think that I would thank Santa Claus and not my parents for a wonderful Christmas makes me wonder. Still, I don't complain in front of my children either, and my granddaughter still pretends to believe in Santa, and I want it to last as long as it can. I have often wondered why these factory/contract workers and the stay-at-home (babysitting) moms are not mentioned when we talk about Puerto Rican heroes.
How many people in Puerto Rico today have a college education because those money orders and clothes from up north made it possible for them to make it through high school? How many of those money orders paid for a new tin roof, an extra room or a new home, or created a healthier living environment, especially for the newborn? According to the Department of Health, in 1945 there were (in Puerto Rico) 86,680 births and 28,837 deaths, including 8,064 children not surviving past their first year of life. I would like to think that those money orders and clothes helped improve the living standards for many in la Isla del Encanto. This migration was really about ordinary people doing extraordinary things for the betterment of their family and their homeland. We must never forget that their dream for a better tomorrow did not and will not die with them; therefore, it is our responsibility to honor their memory and make their dream become a reality. Welcome to Pasajeros a Nueva York, where you will meet some of the true heroes of the Puerto Rican migration to el norte (up north). I dedicate this article to my parents and definitely my heroes, Carlos Blondet, native of Santa Isabel, and Pura Pollo y Texidor, native of Salinas (Aguirre). You can find a monument dedicated to them en la Calle Barcelona in the town of Guayama; it's the house they helped rebuild with those money orders for my maternal grandmother, Blasina Texidor. Bendición mami, bendición papi. The dream lives on because it's in my blood. Some reports describe the housing as being substandard, but as I remember, my mother, with our help, kept a decent-looking apartment. I can still remember the floral wallpaper, the floral carpeta (linoleum), the floral slip covers, and who can forget the figurines on every available space, including el chinero (curio)? Does anyone remember the curtain stretcher with what seemed like a million nails around the edges?
National Center for Health Statistics, National Vital Statistics Report, Vol. 52, no. 3, Sept. 18, 2003. Web: www.cdc.gov/nchs
Esperanza de vida en Puerto Rico
Historia de Coamo, La Villa Añeja, p. 419, Ramón Rivera Bermúdez, Imprenta Costa, Inc., Coamo, Puerto Rico, 1980
Persons of Puerto Rican birth in New York City and the United States, according to the U.S. Bureau of the Census
The estimate for Michigan in 1948 is five thousand farm workers to work in the beet fields: Al Norte: Agricultural Workers in the Great Lakes Region, 1917-1970, Dennis Nodín Valdés, p. 122
The International Ladies Garment Workers Union reports in 1947 a membership of over 5,000 Puerto Rican women workers.
The total of Puerto Rican pupils attending public schools in New York City in 1947 is 24,989. One school, PS 168, located at Throop Avenue and Bartlett Street in Brooklyn, had over 500, and I was one of them. The estimate for parochial schools is about 3,000.
According to the New York City Board of Education, in the school year of 1946/1947, 3,536 Puerto Ricans were registered in adult classes. (p. 39, Items)
As reported in the book The Puerto Rican Experience (Puerto Ricans in New York City): the report of the Committee on Puerto Ricans in New York City of the Welfare Council of New York City, Arno Press, New York, 1947.
The Puerto Rican Northern Migration Created Heroes for Us to Honor was first published by The Puerto Rican/Hispanic Genealogical Society, Inc., in EL COQUI DE AYER, November-December 2004, Volume 9, Issue 6.
An excellent place to begin researching Puerto Rican migration would be The Center (El Centro) for Puerto Rican Studies at Hunter College in New York City. Also recommended is Miguel Hernández' (former President of The Puerto Rican/Hispanic Genealogical Society, Inc.)
article, “From the Island to the Continent: Ships that Brought Our Ancestors to NY,” which appeared in EL COQUI DE AYER, May-June 1999, Volume 4.
Copyright © October 2001 - 2010, Dalia Morales. Revised: October 15, 2010.
Puerto Rico: My Ancestors and Their Descendants
SQL Server has three different storage terms which you need to know when dealing with physical file storage: files, file groups and disks. While each one of these is totally different, they work together, giving the SQL Server database engine a very flexible way to define where the data is actually stored.

Disks are the physical storage within the server. If you look at the front of the server, they will be the little devices which are inserted into the front of the server and typically have lights on them that flash. In larger environments where there are lots of servers you may have a storage array (also called a SAN). In its simplest terms the storage array is a bunch of disks that are in one large box. The space on these small disks is put together in various ways, and parts of this space are allocated to specific servers. While the storage array may have anywhere from a couple of terabytes to thousands of terabytes, the specific servers will only have access to a very small percentage of this space. In either case, whether using local disks in the front of the server or a large storage array, the concept is the same as far as SQL Server is concerned. The storage that is connected to the server is presented as a drive letter in Windows. If you were to log onto the console of the server you would see a computer which would look very familiar to you. This is because the computer running SQL Server in the server room or data center is running Windows, just like the computer which is on your desk and just like the computer at your home (assuming that you aren't using Linux or a Mac). If you opened My Computer on a server you would see hard drives just like you do on your home computer. On the server there are probably more hard drives than on your workstation or home computer, and the drives on the server probably have a lot more space than the ones on your workstation, but the idea is still very much the same.
Any files that are needed are written to the disks and then accessed by users or by programs, with Microsoft SQL Server simply being a large, complex program.

The files which the SQL Server database uses are called MDF, NDF and LDF files (for the most part). These are physical files which sit on the disks (see above). A database will have at least two files: the MDF file, which holds all of the data, and the LDF file, which holds the transaction log. In its simplest form the transaction log is a complete record of all the changes which have happened to the database since the database was first created. We manage the size of the transaction log through backups (which I'll talk about in a later post), so for now we'll just say that the transaction log holds the record of everything that has changed within the MDF data file. If a row is added, that change is written to the transaction log first. If a row is deleted, that change is written to the transaction log before it is actually deleted from the database.

The NDF database files are simply additional data files. While the database must have one MDF and one LDF file, the use of NDF data files is optional. There is no performance benefit to using NDF data files instead of just an MDF data file (OK, this is only the case 99.9% of the time, but that's good enough for this post). The only time you'll get a performance benefit from using NDF data files is if the MDF data file is on one physical hard drive and the NDF is on a different physical hard drive. Each specific file exists on one and only one hard drive. If you need to store the data on multiple hard drives then you'll need multiple data files.

File groups are simply logical groupings of files. We use file groups so that specific tables can be larger than any one specific hard drive.
If we didn't have file groups, then when we created a table and told the SQL Server where to store that table, we would have to tell it which physical data file to store the data in. This would mean that tables would be limited by the size of the largest disk which was available. Because we want to be able to spread our data across multiple physical files, we have file groups. The file groups allow for multiple files to be put into a single group. SQL Server will then create the table within that file group, putting portions of the data on all the files within the file group. This allows us to have tables which are very large, as we are only limited by the number of disks we can attach to the server and the size of those disks.

When we create a new table (or index, or Service Broker queue, or any other object which is a physical object that holds data), we specify which file group we want to create the object within. This allows us to leverage any files which are members of the file group. If there are multiple files, the object will be spread over all the physical files. If there is just one file today when we create the table, but then we add another file to the file group later, there's nothing that we need to do. The SQL Server will automatically begin using the new file as well as the old file. When you add a new file to a file group SQL Server doesn't rebalance the data across the database files (you have to do that manually by rebuilding all the indexes), but new rows will be written to the new file. When SQL Server is writing data to the files within the file group it uses something called proportional fill. This means that the SQL Server will do its best to keep the amount of free space within the data files the same. While it isn't perfect at doing this, as there are a lot of things that can affect its ability to do so, the SQL Server will do its best.

Hopefully this helped explain these three complementary yet different concepts.
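As a sketch of how these pieces fit together, the following T-SQL creates a database with a secondary file group spread across two NDF files on different drives, then places a table on that file group. All of the database, file and path names here are hypothetical examples, not anything from this post:

```sql
-- Create a database with the mandatory MDF and LDF plus a secondary
-- file group containing two NDF files on separate physical drives.
-- Drive letters and paths are examples only; adjust for your server.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_Data,  FILENAME = 'D:\Data\SalesDB.mdf'),
FILEGROUP UserData
    (NAME = SalesDB_User1, FILENAME = 'E:\Data\SalesDB_User1.ndf'),
    (NAME = SalesDB_User2, FILENAME = 'F:\Data\SalesDB_User2.ndf')
LOG ON
    (NAME = SalesDB_Log,   FILENAME = 'L:\Logs\SalesDB.ldf');
GO

USE SalesDB;
GO

-- Objects are placed on the file group, never on a specific file;
-- SQL Server spreads the rows across both NDF files via proportional fill.
CREATE TABLE dbo.Orders
(
    OrderID   int IDENTITY PRIMARY KEY,
    OrderDate datetime2 NOT NULL
) ON UserData;
```

Note that the `ON UserData` clause names the file group, which is the whole point: the table isn't tied to either NDF file, so adding a third file to the group later requires no change to the table at all.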
The recovery models tell the SQL Server how much transaction log data to keep within the transaction log, and therefore how much of the database can be recovered. SQL Server has three different recovery models: SIMPLE, BULK_LOGGED and FULL.

Simple Recovery Model

The simple recovery model is the most basic recovery model, and it provides the least amount of protection. The simple recovery model supports only full and differential backups, meaning that you can only restore the database to the state that the database was in when a full backup or differential backup was taken. Typically the simple recovery model is only used for databases like data warehouses, or other databases where the data within the database can easily be reloaded from another source. The reason that this is acceptable for data warehouses is because data warehouses are not the production system of record for the data, as they are loaded from the production system of record on some schedule - nightly, weekly, hourly, etc.

Many people assume that using the simple recovery model means that no transaction log is kept. This is not the case. For most transactions the normal transaction log information is written to the transaction log. The difference between the simple recovery model and the other recovery models is that with the simple recovery model the SQL Server database doesn't wait for the transaction log to be backed up before removing the data from the transaction log. Instead it waits for the data pages in the MDF and NDF files to be fully written to disk. Once that happens the SQL Server will mark the rows within the transaction log as being able to be reused.

Full Recovery Model

In the full recovery model the database engine keeps all the transactions within the transaction log file (the LDF file) until they are backed up by a transaction log backup.
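Switching between the models is a one-line ALTER DATABASE per model, and the payoff of the full recovery model is the STOPAT clause on a log restore. A hedged sketch, in which the database name, backup paths and timestamp are all hypothetical:

```sql
-- Switch a (hypothetical) database between the three recovery models.
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Under FULL recovery, a point-in-time restore replays the full backup
-- and then the chain of log backups, stopping partway through the last one.
RESTORE DATABASE SalesDB
    FROM DISK = 'B:\Backups\SalesDB_Full_Monday.bak'
    WITH NORECOVERY;

RESTORE LOG SalesDB
    FROM DISK = 'B:\Backups\SalesDB_Log_Tue_1410.trn'
    WITH STOPAT = '2016-06-21 13:45:00', RECOVERY;
```

In practice every intermediate log backup in the chain must also be restored WITH NORECOVERY, in order, before the final one; only the last restore specifies RECOVERY so the database comes back online.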
With the full recovery model, because we are taking transaction log backups, we are able to restore to any moment in time, so long as that moment is after the database was created and before now, and we still have the backups for that period of time. If, for example, we take full backups on Monday morning at midnight and transaction log backups every hour at 10 minutes past the hour, then in the event of a database crash we can restore the SQL Server database to any point in time, provided that we still have the full backup from the Monday before and all the transaction log backups taken between that full backup and the transaction log backup which was taken after the problem happened. With this example, if we wanted to restore the database to Tuesday afternoon at 1:45:00pm, we could do that by restoring the full backup from Monday, then restoring the transaction log backups starting at ten minutes after midnight on Monday up through the 2:10:00pm backup on Tuesday. While restoring the final transaction log backup we would simply tell the restore database command that we wanted to stop the restore process at 1:45:00pm on that Tuesday, and the restore would simply stop at that point.

Bulk Logged Recovery Model

The bulk logged recovery model is very similar to the full recovery model. Most transactions are fully logged within the database's transaction log. There are a few commands which aren't, such as bulk inserts using the BCP command line tool or the BULK INSERT T-SQL statement, as well as a few others. All normal INSERT, UPDATE and DELETE statements are fully logged, just like in the full recovery model. The bulk logged recovery model allows for point-in-time restores just like the full recovery model does. The difference is that when a bulk operation is minimally logged under the bulk logged recovery model, not all the data changes are logged - only the allocation of space within the data files is logged.
This presents a problem when the transaction log backup is taken, as the SQL Server needs to back up those records. Because we don't have the full transaction log information for these commands, the SQL Server database engine simply copies the data pages which were allocated by the bulk operation into the transaction log backup. Because of this, the specific times that the database can be restored to will be limited to times when no bulk operations were running. So if a bulk operation runs from midnight until 1am every night, there wouldn't be any way to restore to a specific point in time during that one-hour window. If you need the ability to restore to specific times within that window, and have the data be perfect, the full recovery model must be used.

Statistics are magical little objects within the database engine that have the ability to make your queries run fast or painfully slow. The reason that statistics are so important is because they tell the database engine what data exists within the database table and how much data exists. The problem with statistics comes from how often they are updated. By default, in all versions and editions of SQL Server, the statistics are updated when 20% + 500 rows within the database table change. So if a database table has 10,000 rows in it, we need to change 2,500 rows (2,000 rows is 20%, plus an additional 500 rows) for the statistics to be updated. With smaller tables like this, having out-of-date statistics usually doesn't cause too many problems. The problems really come into play with larger tables. For example, if there are 50,000,000 rows in a table, then for the statistics to be automatically updated we would need to change 10,000,500 rows. Odds are it is going to take quite a while to change this number of rows. To fix this we can manually tell the SQL Server to update the statistics by using the UPDATE STATISTICS command. Within a statistic there are up to 200 values which are sampled from the column.
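A manual refresh, and a look at what the optimizer actually sees, can be sketched as follows. The table name and statistic name are illustrative, not from any real schema:

```sql
-- Force an update of the statistics on a large table with a full scan,
-- rather than waiting for the 20% + 500 changed-row auto-update threshold.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Display the header, density vector and the up-to-200-step histogram
-- for one statistic on the table (here the stats behind a hypothetical
-- primary key named PK_Orders).
DBCC SHOW_STATISTICS ('dbo.Orders', 'PK_Orders');
```

The histogram rows in the DBCC output are the sampled values described here: each step holds one sampled key value plus the row counts between it and the next step, which is exactly the information the optimizer uses to estimate how many rows a predicate will match.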
The statistic shown below contains a few different columns. The statistic shows a series of values from the column which the statistic is built on. It also contains the count of the number of rows between that row and the next row in the statistic. From this information SQL Server is able to build the execution plan which is used to access the data. When the data within the statistic is out of date, SQL Server doesn't make the correct assumptions about how much data there is and what the best way to access that data is. When the statistic gets updated, SQL Server is able to make better assumptions, so the execution plan becomes better, and SQL Server is able to get the data faster.

I get job postings emailed to me all the time from various recruiters. Usually they are, we'll call them OK. But sometimes the requirements for just getting to the interview are just stupid. Every once in a while, usually about twice a year, I get an email that says "remote candidates are welcome, but the candidate will have to pay their own way to get to the job interview". Now when you are talking about paying for gas to get from your house to their office, that's fine. However, these jobs are often not in the city, or even the state, that I live in. So let me get this straight: you want me to pay to fly out to see you, so that you can tell me that you don't want me to work for you. That's really not how this works. If you've exhausted the talent in your local city and you need to get talent from out of town, then it's on you to pay for the travel to get the person to the interview. Even if there was a position open at a company in my local area, if the posting said this I probably wouldn't even consider the job. This tells me that as a company you don't respect my time or my resources. From this I assume that you won't want to pay for any of my training so that I can better support the company's systems.
I can assume that you'll expect me to work projects on nights and weekends (I've got no problem with nights and weekends for emergency system-down issues, but not for projects that weren't properly planned). If you are a company that puts these sorts of silly statements in your job descriptions, and you are wondering why you can't get any candidates, stuff like this is why.

When dealing with SQL Server databases we have to deal with locking and blocking within our application databases. All too often we talk about blocking as being a bad thing. However, in reality blocking isn't a bad thing. SQL Server uses blocking to ensure that only one person is accessing some part of the database at a time. Specifically, blocking is used to ensure that while someone is writing data, no one else can read that specific data. While this presents as a royal pain, in that users' queries run slower than expected, the reality is that we don't want users accessing incorrect data, and we don't want to allow two users to change the same bit of data. Because of this we have locking, which then leads to blocking within the database. All of this is done to ensure that data integrity is maintained while users are using the application, so that the data within the database stays accurate and correct. Without locking and blocking we wouldn't have data that we could trust.

The NOLOCK table hint gets used way, way too frequently. The place that I hate seeing it the most is in financial applications, where I see it way too often. Developers who are working on financial applications need to understand just how important not using NOLOCK is. NOLOCK isn't just a go-faster button; it changes the way that SQL Server lets the user read the data they are trying to access. With the NOLOCK hint in place, the user is allowed to read pages which other users already have locked for changes. This allows the user's query to return incorrect data.
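A minimal sketch of the dirty-read behavior, using a hypothetical Accounts table:

```sql
-- Session 1: a change inside an open, uncommitted transaction
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

-- Session 2: NOLOCK reads right past session 1's locks and sees the
-- uncommitted balance - a dirty read
SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountId = 1;

-- If session 1 now rolls back, session 2 has reported a balance
-- that never actually existed
ROLLBACK TRANSACTION;
```

Without the hint, session 2 would simply wait until session 1 committed or rolled back and would only ever see committed data.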
If a user is running a long-running query that accesses lots of rows which are in the process of being changed, the user could get duplicate rows, or missing rows. This can obviously cause all sorts of problems, as the data in the user's report won't actually be accurate. In reports that internal staff are running this is bad enough; if it is your external users who are getting incorrect data, for example with account debits and credits being processed while the user is requesting data, they could suddenly see all sorts of invalid numbers. If you are working with a financial application and you are seeing NOLOCK hints in there, you'll want to work on getting rid of them, and for the ones which must remain for some reason, make sure that the business users understand exactly how the data they are looking at may be incorrect and shouldn't be trusted. If the application is using the NOLOCK hint to solve performance problems, those problems need to be resolved in other ways, typically by fixing the indexing problems on the tables which are causing index or table scans.

As our VMware environments become larger and larger, with more and more hosts and guests, more thought needs to be given to the vCenter database that is typically running within a SQL Server database. With the vCenter database running within Microsoft SQL Server (which is the default), there will be lots of locking and blocking happening as the queries which the vCenter server runs aggregate the data into the summary tables. The larger the environment, the more data needs to be aggregated every 5 minutes, hourly, daily, etc. The problem here is that in order for these aggregations to run, the source and destination tables have to be locked. This is normal data-integrity behavior within the SQL Server database engine. Thankfully there is a way to get out of this situation: enable a setting called Snapshot Isolation for the vCenter database.
This setting changes the way that SQL Server handles concurrency, by allowing writers to change the database while readers see the old versions of the rows being changed, thereby preventing blocking. SQL Server does this by making a copy of each row as it is modified and putting that copy into the version store in the tempdb database. Any user that attempts to read the original data will instead be given the old version from tempdb. If you've seen problems with the vCenter client locking up and not returning performance data when the aggregation jobs are running, this will make those problems go away.

Turning this feature on is pretty simple. In SQL Server Management Studio simply right click on the vCenter database and find the "Allow Snapshot Isolation" setting on the Options tab. Change the setting from False to True and click OK (the screenshot shows the AdventureWorks2012 database, but you'll get the idea). If you'd rather change the setting via T-SQL, it's done via the ALTER DATABASE command shown below.

ALTER DATABASE [vCenter] SET ALLOW_SNAPSHOT_ISOLATION ON

Hopefully this will help fix some performance problems within the vCenter database.

This week I've found some great things for you to read. These are a few of my favorites that I've found this week.

- Bootstrapping SQL Server bloggers and blog readers with Twitter!
- Whiteboard Wednesday #1: Top Visualization Mistakes
- The Accidental Architect
- Personally Identifiable Information (PII) and Data Encryption
- Traversing the Facebook Graph using Data Explorer
- This week's SQL Server person to follow on Twitter is: sqlpass, also known as PASS

Hopefully you find these articles as useful as I did. Don't forget to follow me on Twitter, where my username is @mrdenny.

As we know, with Microsoft SQL Server everything is read from disk and loaded into the buffer pool for processing by the query engine. So what happens to the buffer pool when backups are taken?
The answer is that nothing happens to the buffer pool. When SQL Server backs up a database, it simply takes the data from the data files and writes it to the backup file. During the backup process the dirty pages are written to disk by the checkpoint process, which is triggered by the backup. Because the backup process simply reads the data files and writes them to the backup location, there's no need to cache the data in the buffer pool, as this data isn't being queried by a normal SQL query.

The other day I was looking at parallel query plans on a customer's system, and I noticed that the bulk of the parallel queries on the system were coming from Spotlight for SQL Server. The query in question is used by Spotlight to figure out when the most recent full, differential and log backups were taken on the server. The query itself is pretty short, but it was showing a query cost of 140 on this system. A quick index created within the msdb database solved this problem, reducing the cost of the query down to 14. The query cost was reduced because a clustered index scan of the backupset table was changed into a nonclustered index scan of a much smaller index. The index I created was:

CREATE INDEX mrdenny_databasename_type_backupfinishdate ON backupset (database_name, type, backup_finish_date) WITH (FILLFACTOR=70, ONLINE=ON, DATA_COMPRESSION=PAGE)

Now if you aren't running Enterprise Edition you'll want to turn the online index build off, and you may need to turn the data compression off, depending on the edition and version of SQL Server that you are running. If you are running Spotlight for SQL Server I'd recommend adding this index, as it will fix the performance of one of the queries which Spotlight for SQL Server runs against the database engine pretty frequently. I'd recommend adding this index to all the SQL Servers which Spotlight for SQL Server monitors.
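For context, a query of the same general shape as the one described above (this is illustrative, not Spotlight's exact query) benefits from the index because all three referenced columns are covered by it:

```sql
-- Most recent backup of each type (D = full, I = differential, L = log)
-- for every database on the instance
SELECT database_name, type, MAX(backup_finish_date) AS last_backup
FROM msdb.dbo.backupset
GROUP BY database_name, type;
```

With the index in place this resolves against the narrow nonclustered index instead of scanning the full clustered index of the backupset table.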
A car of mass 800kg is pulling a caravan of mass 1000kg along a straight, horizontal road. The caravan is connected to the car by means of a light, rigid tow bar. The car is exerting a driving force of 1270N. The resistances to the forward motion of the car and the caravan are 400N and 600N respectively. Show that the acceleration of the car and caravan is 0.15 ms-2.

Let T = the force exerted by the tow bar on the car and caravan.

Net force on the car: Fnet = 1270 - (T + 400), so 800a = 1270 - (T + 400)

Net force on the caravan: Fnet = T - 600, so 1000a = T - 600

Solve the system of equations for T and a.
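Working through the algebra (adding the two equations eliminates T):

```latex
\begin{align*}
800a  &= 1270 - (T + 400) = 870 - T \\
1000a &= T - 600 \\
\intertext{Adding the two equations:}
1800a &= 270 \quad\Rightarrow\quad a = 0.15\ \text{m s}^{-2} \\
T &= 1000(0.15) + 600 = 750\ \text{N}
\end{align*}
```

As a check, the car equation also gives T = 870 - 800(0.15) = 750 N, consistent with the caravan equation.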
Oil-producing countries may exercise profound influence over American driving habits, but a new Harvard Kennedy School faculty research paper shows that U.S. federal and state taxes also play an important role. "Gasoline Taxes and Consumer Behavior" finds that even small changes in gasoline taxes affect consumer behavior, and that taxes affect behavior even more than commensurate cost increases caused by rising oil prices. The paper is co-authored by Shanjun Li, Cornell University; Joshua Linn, Resources for the Future; and Erich Muehlegger, associate professor of public policy at Harvard Kennedy School. The researchers specifically analyzed the short-run impacts of gasoline taxes on driver decisions, including gasoline consumption, vehicle miles traveled and vehicle choices, and the ways in which those impacts differ from those induced by changes in the price of gasoline exclusive of the tax. "The purpose of our paper is to test the maintained assumption that consumers respond to gasoline tax and tax-exclusive price changes in the same way," write the authors. "Our analysis directly estimates consumer responses to gasoline taxes by decomposing retail gasoline prices into tax and tax-exclusive components."
A multilayered crystal of vanadium selenide, seen in a field of view roughly as wide as a red blood cell: after enough copper atoms penetrate the uppermost layers of the crystal, a hexagonal network of nanofold tubes appears spontaneously, each tube 30 nanometers across and enclosing an empty space 4 nanometers high. There are many practical questions as well. Surface nanotube networks suggest numerous applications, including networks of pipes for the storage and transport of minute quantities of materials, or templates for the fabrication of nanowire networks. "There are many exciting follow-ups to investigate in these systems," says Dahmen, "ranging from whether and how the tubes can be filled with liquids or with metal atoms to form wires, to controlling the sizes and patterns of the networks, to understanding the atomic structure of their junctions." In previous self-assembly research, progress was also made on targeted self-assembly. Possibly these capabilities could be combined. Success in nanoscale self-assembly could make the transition to MNT easier by allowing more complex structures to be self-assembled and less mechanochemistry to be required to "finish" an MNT product. More capability could be available sooner. If there were a progression in the number of mechanochemistry operations per second performed by a particular system or device, then shifting more operations to more capable self-assembly would reduce the threshold for useful mechanochemistry, and MNT would become useful sooner.
FINAL DIAGNOSIS: STRYCHNINE TOXICITY Figure 2 shows the total ion chromatogram with the peaks annotated: Identification of some of the peaks was achieved by comparison of the electron impact spectra of compounds from the patient sample with a library of GC/MS spectra. The following drugs were definitively identified by GC/MS (Figure 2): nicotine, two nicotine metabolites, caffeine, atropine (provided at the hospital), cocaine metabolite (ecgonine methyl ester), and cocaine. The identification of parent cocaine suggests recent use of cocaine (i.e., likely within 6-12 hours of presentation). Additional peaks labeled in Figure 2 correspond to cholesterol, additional unidentified compounds (labeled as 'contaminants'), and the internal standard (IS) barbital. The largest peak is found at 30.23 minutes. The electron impact spectrum for this compound corresponds closely to a library spectrum for strychnine (Figure 3): Strychnine is a bitter, colorless, odorless crystalline alkaloid, C21H22N2O2 (Figure 4) The most common source is from the seeds of the Strychnos nux vomica tree (Figure 5), found in southern Asia and Australia. Strychnine is an extremely toxic compound and acts as an antagonist at the inhibitory or strychnine-sensitive glycine receptor (GlyR) (Figure 6), a ligand-gated chloride channel in the spinal cord and the brain. Inhibitory receptors are ligand-gated chloride channels which act to hyperpolarize neurons, making them less likely to fire action potentials. Most general anesthetics in current clinical use effectively stimulate inhibitory receptors, slowing down central nervous system transmission and leading to unconsciousness, amnesia, and analgesia. Convulsants block inhibitory receptors, creating an inhibition of inhibition, and leading to overstimulation. The glycine receptor is the major inhibitory receptor in spinal cord, but is also found in brain. Antagonists at the glycine receptor cause characteristic convulsions (example: strychnine). 
Strychnine can be introduced into the body by inhalation, swallowing, intravenous injection, or absorption through the eyes or mouth. Strychnine toxicity produces some of the most dramatic and painful symptoms of any known toxic reaction, and can be fatal. For this reason, strychnine poisoning has been used often throughout literature and film. Several minutes after exposure, muscles begin to contract and spasm, typically starting with the head and neck and spreading diffusely. The convulsions are nearly continuous, and can increase in intensity with the slightest stimulus. Death occurs from asphyxiation caused by paralysis of the neural pathways that control breathing, or from sheer exhaustion due to the severity of the muscle convulsions. A patient will die within several hours of exposure. "Strychnine can lead to an atrocious death… Doses of ten to twenty milligrams lead to dyspnoea and unbearable feelings of anxiety. Twitching and spasms gradually develop and lead to violent tetanic seizures in which the head is bent right back to the buttocks, so that the spine may be broken. Breathing may cease for intervals of one to two minutes at a time; in this event the seizures may also stop, only to recommence at the least excitation - a loud noise or a gentle touch - until death from exhaustion finally supervenes. No death could be worse than this and no man is likely to endure greater agonies." Gustav Schenk, "The Book of Poisons" There is no specific antidote for strychnine, and treatment of strychnine toxicity is purely symptomatic. Activated charcoal may initially adsorb any poison within the digestive tract that has not yet been absorbed into the blood. Anticonvulsants or general anesthetics, including phenobarbital, diazepam, or propofol, may be administered to combat convulsions. Neuromuscular blocking agents such as vecuronium, and muscle relaxants such as dantrolene, may also be used to combat muscle rigidity.
If the patient survives past 24 hours, the prognosis is good and a full recovery is probable. Low doses of strychnine have stimulant effects, and historically small doses of strychnine were used in stimulant medications, laxatives, and other stomach remedies (Figure 7). The dosage for medical use ranged from roughly 1 milligram to over 6 milligrams. People have died as a result of ingesting doses within that range, however, and the use of strychnine in medicine was abandoned once safer alternatives became available. Strychnine is still used in some rodent baits, although since 1990 most baits containing strychnine have been replaced with zinc phosphide. There have been numerous noteworthy strychnine poisonings throughout literature and history. Strychnine was also introduced into popular culture when it was featured as the first of many poisons used by Agatha Christie in her debut novel The Mysterious Affair at Styles in 1921. Norman Bates used strychnine to kill his mother and her lover in the infamous Psycho thriller film. In the 1904 Olympics Thomas Hicks from the United States won the marathon event, but then collapsed, having reportedly taken a brandy tonic mixed with strychnine for the stimulant effects. A tonic laced with arsenic and strychnine was also part of the training regimen for legendary racehorse Phar Lap (Figure 8), and it has been suggested that this may have contributed to his premature death. Famous Delta Blues legend Robert Johnson drank from a whiskey bottle laced with strychnine and died. In this case, our patient was discovered in time to receive appropriate medical care and experienced a full recovery. His CPK and troponin levels normalized. A dramatic feature of the case was the marked acidemia (pH 6.55 with reference range 7.35-7.45) in the initial arterial blood gas from the patient. His hospital stay was complicated by both alcohol and cocaine withdrawal symptoms, but no lasting effects from strychnine toxicity were observed. 
The patient is receiving psychiatric follow up care. In conclusion, strychnine is a highly toxic substance which can still be found in rodent baits. If ingested, it can cause dangerously severe toxicity, exemplified by diffuse muscle contraction through inhibition of inhibitory nervous system receptors. Standard urine drug screens by immunoassay will not detect strychnine. However, this compound can be identified by GC/MS, as it was in this case. In this case, a rare toxic compound was identified by comparison to the known GC/MS spectrum for strychnine and the patient was treated accordingly. Contributed by Amber Henry, MD and Matthew Krasowski, MD, PhD
Unemployment rates in June July 10, 2000 Both the number of unemployed persons, 5.6 million, and the unemployment rate, 4.0 percent, were little changed in June (seasonally adjusted). The jobless rate has been in a 3.9- to 4.1-percent range since October 1999. Unemployment rates for the major worker groups—adult men (3.2 percent), adult women (3.8 percent), teenagers (11.6 percent), whites (3.4 percent), blacks (7.9 percent), and Hispanics (5.6 percent)—showed little or no change over the month. About 1.1 million persons (not seasonally adjusted) were marginally attached to the labor force in June. These people wanted and were available to work and had looked for a job sometime in the prior 12 months. They were not counted as unemployed, however, because they had not actively searched for work in the 4 weeks preceding the survey. Bureau of Labor Statistics, U.S. Department of Labor, The Economics Daily, Unemployment rates in June on the Internet at http://www.bls.gov/opub/ted/2000/jul/wk2/art01.htm (visited June 30, 2016).
This article is in response to an article in the Beaufort Observer entitled "What Needs to be Done about Abuse of Power by the Federal Government?" which in turn was based on Calvin H. Johnson's article "The Constitution or Liberty" in The Freeman. The US Constitution was designed to accomplish two goals: form a government vigorous enough to conduct the affairs of the nation (overcoming the limitations of the Articles of Confederation), yet limited enough that it did not endanger the rights of its citizens or encroach upon the sovereign powers of the States (other than those expressly delegated to it by them). Stanton Richmond wrote an article that claims that the drafters of the Constitution had no intention of limiting the national government's powers to the items listed in Article I, Section 8, of the Constitution. He cites the fact that they declined to incorporate a provision from the Articles of Confederation into the new Constitution which read: "Each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated to the United States, in Congress assembled." He asserts that they did so because they believed the federal government needed implied powers. Richmond is correct when he says that the Constitution needed to provide more power to the government, but the only real differences in the government provided by the Constitution are the power to regulate commerce and an expanded taxing power. In Federalist No. 56, James Madison wrote: "What are to be the objects of federal legislation? Those which are of most importance are commerce, taxation, and the militia." Recall that we nearly lost the Revolutionary War because the provisional government could not enforce taxation from the states. It could only "ask" them to send funding for the war, and many states did not answer that request.
The opening phrase of Article I, Section 8 reads: "The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defense and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States." Our framers did not intend this as an independent, limitless grant of power. It was included to 'qualify' the powers set forth in Section 8. It was not, as Mr. Richmond claims, indicative of a government that was intended to have both express and expansive implied powers. I will explain this more fully below, and through examples. When the Constitution was drafted in Philadelphia in September 1787, not all the states were pleased with it. Many weren't. Many were skeptical and distrustful. In fact, a series of essays, articles, and papers appeared in papers all over the colonies criticizing the new Constitution and urging that it not be adopted. These were called the Anti-Federalist Papers. One of the most energetic criticisms of the proposed new government concerned its enlarged taxing power. The states felt it would become so extensive and unlimited that it would cripple state legislatures and therefore swallow up the states, ultimately forming the very government that Madison initially proposed: a nationalist system (a concentrated national government, with weak states). The very presence and participation of Alexander Hamilton at the Constitutional Convention also gave the states great worry, for he was a known monarchist. He wanted to replicate the British system in America, with a powerful central government and a president who would be appointed for life. That's why Patrick Henry declined to participate as a delegate for Virginia at the Convention. He said: "I smell a Rat in Philadelphia, tending towards the Monarchy." In Anti-Federalist No.
33, for example, Brutus (pseudo name) wrote about the states’ fear with respect to the taxing power: ”This (taxing) power, exercised without limitation, will introduce itself into every corner of the city, and country-it will wait upon the ladies at their toilet, and will not leave them in any of their domestic concerns; it will accompany them to the ball, the play, and assembly; it will go with them when they visit, and will, on all occasions, sit beside them in their carriages, nor will it desert them even at church; it will enter the house of every gentleman, watch over his cellar, wait upon his cook in the kitchen, follow the servants into the parlor, preside over the table, and note down all he eats or drinks; it will attend him to his bedchamber, and watch him while he sleeps; it will take cognizance of the professional man in his office, or his study; it will watch the merchant in the counting-house, or in his store; it will follow the mechanic to his shop, and in his work, and will haunt him in his family, and in his bed; it will be a constant companion of the industrious farmer in all his labor, it will be with him in the house, and in the field, observe the toil of his hands, and the sweat of his brow; it will penetrate into the most obscure cottage; and finally, it will light upon the head of every person in the United States. To all these different classes of people, and in all these circumstances, in which it will attend them, the language in which it will address them, will be GIVE! GIVE! A power that has such latitude, which reaches every person in the community in every conceivable circumstance, and lays hold of every species of property they possess, and which has no bounds set to it, but the discretion of those who exercise it – I say, such a power must necessarily, from its very nature, swallow up all the power of the state governments. 
I shall add but one other observation on this head, which is this: It appears to me a solecism, for two men, or bodies of men, to have unlimited power respecting the same object. It contradicts the … maxim, which saith, “no man can serve two masters,” the one power or the other must prevail, or else they will destroy each other, and neither of them effect their purpose. It may be compared to two mechanic powers, acting upon the same body in opposite directions, the consequence would be, if the powers were equal, the body would remain in a state of rest, or if the force of the one was superior to that of the other, the stronger would prevail, and overcome the resistance of the weaker. But it is said, by some of the advocates of this system, that “the idea that Congress can levy taxes at pleasure is false, and the suggestion wholly unsupported. The preamble to the Constitution is declaratory of the purposes of the [our] union, and the assumption of any power not necessary to establish justice, etc., provide for the common defense, etc., will be unconstitutional. …. Besides, in the very clause which gives the power of levying duties and taxes, the purposes to which the money shall be appropriated are specified, viz., to pay the debts and provide for the common defense and general welfare.”‘ Neither the general government nor the state governments ought to be vested with all the powers proper to be exercised for promoting the ends of government. The powers are divided between them-certain ends are to be attained by the one, and certain ends by the other; and these, taken together, include all the ends of good government. This being the case, the conclusion follows, that each should be furnished with the means, to attain the ends, to which they are designed. 
To apply this reasoning to the case of revenue, the general government is charged with the care of providing for the payment of the debts of the United States, supporting the general government, and providing for the defense of the union. To obtain these ends, they should be furnished with means. But does it thence follow, that they should command all the revenues of the United States? Most certainly it does not. For if so, it will follow, that no means will be left to attain other ends, as necessary to the happiness of the country, as those committed to their care. The individual states have debts to discharge; their legislatures and executives are to be supported, and provision is to be made for the administration of justice in the respective states. For these objects the general government has no authority to provide; nor is it proper it should. It is clear then, that the states should have the command of such revenues, as to answer the ends they have to obtain. To say, that ‘the circumstances that endanger the safety of nations are infinite,’ and from hence to infer, that all the sources of revenue in the states should be yielded to the general government, is not conclusive reasoning: for the Congress are authorized only to control in general concerns, and not regulate local and internal ones… The peace and happiness of a community is as intimately connected with the prudent direction of their domestic affairs, and the due administration of justice among themselves, as with a competent provision for their defense against foreign invaders, and indeed more so. 
Upon the whole, I conceive, that there cannot be a clearer position than this, that the state governments ought to have an uncontrollable power to raise a revenue, adequate to the exigencies of their governments; and, I presume, no such power is left them by this constitution.” The Federalist Papers were the answers to the criticisms and distrusts of the Constitution by the States articulated in the series of Anti-Federalist Papers. The Federalist Papers were written by Alexander Hamilton and John Jay (both of NY) and by James Madison (of VA). It was no insignificant coincidence that these men wrote them. At the time, the two largest and most powerful states, New York and Virginia, were not supportive of the Constitution and it was feared that if these states did not ratify, the hope for a contiguous Union would be frustrated. And so these men wrote the series of papers to explain the meaning and intent of the Constitution, to offer assurances, and to dispel fears (particularly that the proposed government would take too much power from the States), with the ultimate hope that it would convince the delegates of NY and VA to ultimately ratify (which they eventually did in 1788). Alexander Hamilton answered the criticisms of anti-Federalist No. 33 in Federalist No. 31: “As revenue is the essential engine by which the means of answering the national exigencies must be procured, the power of procuring that article in its full extent must necessarily be comprehended in that of providing for those exigencies.” [Here we see the beginning of the explanation of the “Necessary and Proper” Clause, which is continued in No. 33]. In No. 31, Hamilton also assured: “The State governments, by their original constitutions, are invested with complete sovereignty.” In Federalist No. 
32, he continued: “Although I am of opinion that there would be no real danger of the consequences which seem to be apprehended to the State governments from a power in the Union to control them in the levies of money, because I am persuaded that the sense of the people, the extreme hazard of provoking the resentments of the State governments, and a conviction of the utility and necessity of local administrations for local purposes, would be a complete barrier against the oppressive use of such a power; yet I am willing here to allow, in its full extent, the justness of the reasoning which requires that the individual States should possess an independent and uncontrollable authority to raise their own revenues for the supply of their own wants. And making this concession, I affirm that (with the sole exception of duties on imports and exports) they would, under the plan of the convention, retain that authority in the most absolute and unqualified sense; and that an attempt on the part of the national government to abridge them in the exercise of it, would be a violent assumption of power, unwarranted by any article or clause of its Constitution. An entire consolidation of the States into one complete national sovereignty would imply an entire subordination of the parts; and whatever powers might remain in them, would be altogether dependent on the general will. But as the plan of the convention aims only at a partial union or consolidation, the State governments would clearly retain all the rights of sovereignty which they before had, and which were not, by that act, EXCLUSIVELY delegated to the United States. 
This exclusive delegation, or rather this alienation, of State sovereignty, would only exist in three cases: where the Constitution in express terms granted an exclusive authority to the Union; where it granted in one instance an authority to the Union, and in another prohibited the States from exercising the like authority; and where it granted an authority to the Union, to which a similar authority in the States would be absolutely and totally CONTRADICTORY and REPUGNANT. The necessity of a concurrent jurisdiction in certain cases results from the division of the sovereign power; and the rule that all authorities, of which the States are not explicitly divested in favor of the Union, remain with them in full vigor, is not a theoretical consequence of that division, but is clearly admitted by the whole tenor of the instrument which contains the articles of the proposed Constitution. We there find that, notwithstanding the affirmative grants of general authorities, there has been the most pointed care in those cases where it was deemed improper that the like authorities should reside in the States, to insert negative clauses prohibiting the exercise of them by the States.” As stated above, Federalist No. 33 discusses the “Necessary and Proper” Clause. The “Necessary and Proper Clause” is the last clause of Article I, Section 8 and reads: “Congress shall have the power… To make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof.” In No. 33, Hamilton wrote: “But it may be again asked, Who is to judge of the NECESSITY and PROPRIETY of the laws to be passed for executing the powers of the Union? 
I answer, first, that this question arises as well and as fully upon the simple grant of those powers as upon the declaratory clause; and I answer, in the second place, that the national government, like every other, must judge, in the first instance, of the proper exercise of its powers, and its constituents in the last. If the federal government should overpass the just bounds of its authority and make a tyrannical use of its powers, the people, whose creature it is, must appeal to the standard they have formed, and take such measures to redress the injury done to the Constitution as the exigency may suggest and prudence justify.” It should be noted that contracts often include a provision which grants authority to do those things that are necessary and proper to fulfill the obligations under the agreement. The way I read the Constitution, in light of what is explained by Hamilton in Federalist Nos. 31, 32, and 33, and in light of what is written in the totality of the remaining Federalist Papers, is this: There is some authority vested in the federal government to define the scope of its powers, but only to the extent that any laws made to govern in such areas must be both “necessary” and “proper.” The word necessary means “needful, indispensable, required.” Since there is a clear effort on the part of our drafters and the authors of the Federalist Papers to assure the States that their sovereign rights would remain intact (minus those powers delegated to the federal government) and their status would not be compromised because of the creation of a federal government, it is more than reasonable to assume that the scope of federal power would be limited to the division established in the Tenth Amendment. Thus, all grants of authority to the federal government are limited by what is absolutely necessary and proper and by reserved state rights. 
Since the Federalist Papers were written as assurances to the States, they were justified in relying on them when they ratified the Constitution. This is just one example of an implied power or an elastic power. [See later for a brief overview of the case interpreting the “Necessary and Proper” clause – McCulloch v. Maryland (1819)]. The bottom line is that unless the federal government is given authority to legislate in a particular area or to use executive powers in certain situations, the government is prohibited and the States retain the powers (under the Tenth Amendment). But the real test of intent is what the States themselves explained in their ratification conventions, because their ratification was premised on THEIR UNDERSTANDING of its terms and intent, and what assurances were given them. The states were the “signing parties” to the Constitution which then established the federal government. The significance of the states in the process is just like a buyer signing a contract with a seller for the purchase of an expensive house. The parties ultimately sign that contract only after thorough discussion and explanation of the conditions and provisions because they know their signature will bind them to the obligations and responsibilities (and the benefits). The contract cannot take on new meaning in the future without relieving the signing parties of their obligations. A contract that is altered allows the parties to walk away. But don’t take it from me. James Madison himself urged that the true meaning of the Constitution was to be found in the state ratifying conventions, for it was there that the people, assembled in convention, were instructed with regard to what the new document meant. Jefferson agreed as well. 
He said: “Should you wish to know the meaning of the Constitution, consult the words of its friends.” It would be ridiculous to think the states would have assented to a government with enough powers to swallow them up and destroy them. For example, let’s look at the Virginia and NC Ratification Conventions. In June 1788, at the Virginia Convention, delegate Patrick Henry spoke out aggressively against the ratification of the Constitution. He feared the government would have a tendency to concentrate power and destroy the sovereignty of the States, thereby destroying the liberty, rights, and interests of the people. In his opening speech, he said: “Liberty, the greatest of all earthly blessings — give us that precious jewel, and you may take everything else! But I am fearful I have lived long enough to become an old-fashioned fellow. Perhaps an invincible attachment to the dearest rights of man may, in these refined, enlightened days, be deemed old-fashioned; if so, I am contented to be so. I say, the time has been when every pulse of my heart beat for American liberty, and which, I believe, had a counterpart in the breast of every true American; but suspicions have gone forth — suspicions of my integrity — publicly reported that my professions are not real. Twenty-three years ago was I supposed a traitor to my country? I was then said to be the bane of sedition, because I supported the rights of my country. I may be thought suspicious when I say our privileges and rights are in danger. But, sir, a number of the people of this country are weak enough to think these things are too true. I am happy to find that the gentleman on the other side declares they are groundless. But, sir, suspicion is a virtue as long as its object is the preservation of the public good, and as long as it stays within proper bounds: should it fall on me, I am contented: conscious rectitude is a powerful consolation. 
I trust there are many who think my professions for the public good to be real. Let your suspicion look to both sides. There are many on the other side, who possibly may have been persuaded to the necessity of these measures, which I conceive to be dangerous to your liberty. Guard with jealous attention the public liberty. Suspect everyone who approaches that jewel. Unfortunately, nothing will preserve it but downright force. Whenever you give up that force, you are inevitably ruined. I am answered by gentlemen that, though I might speak of terrors, yet the fact was, that we were surrounded by none of the dangers I apprehended. I conceive this new government to be one of those dangers: it has produced those horrors which distress many of our best citizens. We are come hither to preserve the poor commonwealth of Virginia, if it can be possibly done: something must be done to preserve your liberty and mine. The Confederation, this same despised government, merits, in my opinion, the highest encomium: it carried us through a long and dangerous war; it rendered us victorious in that bloody conflict with a powerful nation; it has secured us a territory greater than any European monarch possesses: and shall a government which has been thus strong and vigorous, be accused of imbecility, and abandoned for want of energy? Consider what you are about to do before you part with the government. Take longer time in reckoning things; revolutions like this have happened in almost every country in Europe; similar examples are to be found in ancient Greece and ancient Rome — instances of the people losing their liberty by their own carelessness and the ambition of a few. We are cautioned by the honorable gentleman, who presides, against faction and turbulence. 
I acknowledge that licentiousness is dangerous, and that it ought to be provided against: I acknowledge, also, the new form of government may effectually prevent it: yet there is another thing it will as effectually do — it will oppress and ruin the people. This new plan will bring us an acquisition of strength — an army, and the militia of the states. This is an idea extremely ridiculous: gentlemen cannot be earnest. This acquisition will trample on our fallen liberty. Let my beloved Americans guard against that fatal lethargy that has pervaded the universe. Have we the means of resisting disciplined armies, when our only defence, the militia, is put into the hands of Congress? My great objection to this government is, that it does not leave us the means of defending our rights, or of waging war against tyrants. I address my most fervent prayer to prevent our adopting a system destructive to liberty.” He feared the spirit of the American Revolution, and all that our founders hoped to accomplish by its independence from Britain, would be extinguished by accepting it. He urged that provisions be added to protect the fundamental rights of freemen, such as the right of habeas corpus and a trial by jury, and the sovereign rights of the States. In fact, he even urged that Virginia join with other states and form a separate nation should further restraints not be given in the Constitution. Henry’s hostility to the Constitution served a beneficial purpose. It was necessary to put the new instrument through fire in order to test it and eventually to define it. Henry certainly put it through fire. Not only that, but he was one of our leading Founders who forced the adoption of the first ten amendments. In effect, he should be included as one of the great makers of the Constitution. North Carolina also had reservations about the Constitution as originally drafted. 
The first NC Convention was held in July 1788 in Hillsborough and the delegates declined to adopt the Constitution unless a Bill of Rights was added to further limit the reach of government. William Lenoir refused to support its adoption because he felt that the Constitution needed to be amended to specifically protect the sovereign rights of the States from any attempt by the government to overstep constitutional bounds and enlarge its powers. After assurances were given that a Bill of Rights would be added, NC finally ratified in November 1789. Obviously, the size and scope of the federal government was foremost on their minds. The States weren’t ready to sacrifice their powers or the liberty of their people. It is therefore important to realize the mindset of the States, the questions and concerns that entered their state debates, their reservations in roundly approving the Constitution, and the limitations applied as a function of their clarifying statements in the Conventions. These are much more pertinent to the interpretation of federal powers than any Supreme Court decision. After all, the Court itself is a branch of the government, created under the Constitution and empowered by its own judicial decisions rather than by the Constitution, and unconstrained by the explanations given in the Federalist Papers. The question is “What can be done when the federal government oversteps its constitutional bounds?” In the past, the Supreme Court has responded by enlarging government powers. They’ve done this by applying a liberal or progressive reading to the Constitution that by all accounts was intended to be interpreted strictly. 
Thomas Jefferson said: “On every question of construction let us carry ourselves back to the time when the Constitution was adopted, recollect the spirit manifested in the debates, and instead of trying what meaning can be squeezed out of the text or invented against it, conform to the probable one in which it was passed.” Madison and others have given similar warnings. But Thomas Jefferson never trusted the federal judiciary. In fact, he saw the Supreme Court as part of the problem. For one, it was itself a branch of the federal government and thus not an impartial arbiter. As reason for his distrust, he pointed to several early Supreme Court cases, one in particular being McCulloch v. Maryland (1819). In that case, the Court took an expansive reading of the “Necessary and Proper” clause. Congress established a national bank and the state of Maryland challenged it as an unconstitutional exercise of power. Maryland argued that there was no power under Article I, Section 8 for Congress to establish a national bank. The government countered by asserting that it was within its taxing powers. The Supreme Court then had to interpret the breadth of the “Necessary and Proper” clause, and the case pitted Jefferson’s (and Madison’s) version of the Constitution against Hamilton’s version of the Constitution. According to Jefferson (who sided with Maryland), the establishment of a national bank was illegal under the Constitution. His argument was this: “The second general phrase is, ‘to make all laws necessary and proper for carrying into execution the enumerated powers.’ But they can all be carried into execution without a bank. A bank therefore is not necessary, and consequently not authorized by this phrase. It has been urged that a bank will give great facility or convenience in the collection of taxes. This might be true. 
Yet the Constitution allows only the means which are ‘necessary,’ not those which are merely ‘convenient’ for effecting the enumerated powers.” Jefferson further explained that the power to establish a national bank was addressed by the states in their ratification conventions and specifically rejected. According to the states, if the Constitution had indeed granted such a power, that would be cause enough to reject the document. Alexander Hamilton’s response to Jefferson’s interpretation went something like this: “Well, that depends on what the meaning of ‘necessary’ is.” [Remember when Bill Clinton pulled that stunt?] Using tortured logic, Hamilton explained that “necessary” often means no more than “incidental, useful, or conducive to.” He argued that the government had implied powers, such as the power to establish a national bank. And so, establishing an early precedent for a disgraceful track record, the Supreme Court rejected the logic of Jefferson and the intent of the States in ratifying the Constitution and interpreted “necessary” to mean “convenient.” Convenient for whom? The federal government, of course. As Jefferson reasoned, the Supreme Court was a branch of the very institution engaged in a power struggle with the states. Secondly, it was comprised of human beings, who, like the rest of mankind, are subject to passions, ambitions, allegiances, whims, and depravities. He wrote: “To consider the Judges of the Superior Court as the ultimate arbiters of constitutional questions would be a dangerous doctrine which would place us under the despotism of an oligarchy. They have, with others, the same passion for party, for power, and for the privileges of their corps – and their power is the most dangerous as they are in office for life, and not responsible, as the other functionaries are, to the Elective control. The Constitution has erected no such single tribunal. 
I know no safe depository of the ultimate powers of society but the people themselves.” In a letter written in 1821, he wrote: “The great object of my fear is the Federal Judiciary. That body, like gravity, ever acting with noiseless foot and unalarming advance, gaining ground step by step and holding what it gains, is engulfing insidiously the special governments into the jaws of that which feeds them.” With respect to the critical division of power between the States and the government, one can easily see how “fair” the high court has been to the States and how vigorous it has been in respecting the powers that belong to them. In modern times, the Supreme Court has declared a federal law unconstitutional for violating the Tenth Amendment on only three or four occasions. At least that’s all I can recall at the moment. In 1992, there was New York v. US, where the Court overturned a federal law that forced states to dispose of radioactive waste as it directed. In 1997, there was Printz v. US, where the Court overturned parts of the Brady Handgun Violence Prevention Act. And this year, with the healthcare decision, the Court announced that the Medicaid expansion provision amounted to federal coercion of the States. The proper and constitutional remedies available to the States and to the People when the government oversteps its constitutional authority are judicial review, nullification (and interposition, a related remedy), and secession. Judicial review is untrustworthy, as explained above. Secession is extreme, but a proper remedy, as provided in the Declaration of Independence (itself a secessionist document) and not addressed at all in the Constitution (nor should it be, for it is an inherent right of the people – the right of self-determination with respect to their form of government). 
Thomas Jefferson wrote a series of resolutions in 1798 (which Kentucky adopted as the Kentucky Resolutions of 1798) to declare that the Alien & Sedition Acts passed by Congress were unconstitutional, as violating the First Amendment’s guarantee of free speech and as exceeding any constitutional authority to pass such legislation. In those resolutions, Jefferson articulated the States’ remedy of Nullification, which he called “the Rightful Remedy.” Nullification is premised on the legal, governmental, and constitutional principle that any law passed without a proper grant of authority is null and void, and unenforceable. Nullification is the inherent right and duty of every State to declare when the federal government (their creation) has exceeded the bounds of authority delegated to it under the Constitution and then to refuse to allow that law to be enforced within its borders. The states have this power under the compact nature of the Constitution’s ratification. The States, as signing parties to the Constitution, are the proper parties to determine the extent of the powers delegated to the federal government and to decide when abuses have been committed. In ratifying the Constitution through state ratifying conventions (as distinct from state legislatures; ratifying conventions are more representative of the people), the States were agreeing to be bound to the conditions and obligations imposed by the document for the purpose of uniting together for common purposes and goals. It is just like the example I gave earlier of the Buyer and Seller who enter into a contract for a home. It is only the Buyer and Seller who have the clearest understanding of the meaning of that contract. If the Buyer agreed to pay $450,000.00 for the home and the Seller agreed to sell it for that price, then no party can later claim that the purchase/sale price was anything other than $450,000. 
The federal government, a creation of the Constitution, was established as an “agent” or “servant” of the States, and therefore not a party to the compact. The government has no legal standing to define its own powers. Today, we assume the Supreme Court is the ultimate tribunal with respect to the meaning and interpretation of the Constitution. But let’s never forget that under the Federalist Papers, the Supreme Court was only supposed to offer an “opinion.” See Federalist No. 78, in which Alexander Hamilton wrote: “The judiciary is beyond comparison the weakest of the three departments of power. It can never attack with success either of the other two branches. The Executive not only dispenses the honors, but holds the sword of the community. The legislature not only commands the purse, but prescribes the rules by which the duties and rights of every citizen are to be regulated. The judiciary, on the contrary, has no influence over either the sword or the purse; no direction either of the strength or of the wealth of the society; and can take no active resolution whatever. It may truly be said to have neither FORCE nor WILL, but merely judgment; and must ultimately depend upon the aid of the executive arm even for the efficacy of its judgments.” Nullification has a constitutional basis under the Supremacy Clause (Article VI) and the Tenth Amendment. The Supremacy Clause states: “This Constitution, and the Laws of the United States which shall be made in pursuance thereof…shall be the supreme law of the land.” In other words, not every law that the government imposes is to be considered supreme law; only those passed pursuant to an express grant of authority are supreme. The states are free to legislate in all other areas. 
The Tenth Amendment states that “the powers not delegated to the federal government (see Article I, Section 8) nor prohibited to the States (Article I, Section 10) are reserved to the States.” Therefore, the States are supposed to be jealous guardians of their domain of power under the carefully-defined division of power under our federalist system, memorialized in the Tenth Amendment. The proper balance of power is the ultimate protector of our “God-given rights.” Alexander Hamilton wrote this in Federalist No. 26: “….The State legislatures, who will always be not only vigilant but suspicious and jealous guardians of the rights of the citizens against encroachments from the federal government, will constantly have their attention awake to the conduct of the national rulers, and will be ready enough, if anything improper appears, to sound the alarm to the people, and not only to be the VOICE, but, if necessary, the ARM of their discontent.” In conclusion, I am compelled to expound on the position that the federal government has implied powers. When the government takes that position, it is dangerous and leads, almost without exception, to an insidious enlargement of government powers. And that’s why I wanted to clarify that the government itself should never be allowed to have sole power to interpret the extent of its powers. It is the responsibility of the States (through elections and nullification) to stand guard and remind the government, from time to time, that its powers are limited under the Constitution. People who are willing to sacrifice their liberty by succumbing to the mindset that government is free to unilaterally enlarge its powers and that the Supreme Court should be the final tribunal as to the meaning and intent of the Constitution are ready for a master and deserve one. Shame on professors and constitutional groups who espouse this vision of our nation’s government system. 
They’ve betrayed the ideals of our American Revolution and are willing to substitute one tyrant for another. Our government is quickly becoming our master and we have become its legislatively-controlled slaves. By the way, I encourage everyone to read Professor M. Stanton Evans’ article “The States and The Constitution,” First Principles, July 7, 2010. Referenced at: http://www.firstprinciplesjournal.com/articles.aspx?article=448 Delma Blinson, “What Needs to be Done About Abuse of Power by the Federal Government?” Beaufort Observer, September 23, 2012. Referenced at: http://www.beaufortobserver.net/1editorialbody.lasso?-token.folder=comm/2012/09/22&-token.story=262792.112112&-token.mgmtpreview=y Calvin H. Johnson, “The Constitution or Liberty,” The Freeman, September 21, 2012. Referenced at: http://www.thefreemanonline.org/columns/tgif/the-constitution-or-liberty/ The Anti-Federalist Papers – http://www.utulsa.edu/law/classes/rice/constitutional/antifederalist/antifed.htm The Federalist Papers – http://thomas.loc.gov/home/histdox/fedpapers.html McCulloch v. Maryland, 17 U.S. 316 (1819). Referenced at: http://www.law.cornell.edu/supct/html/historics/USSC_CR_0017_0316_ZS.html The First North Carolina Ratifying Convention, July 1788 – http://www.constitution.org/rc/rat_nc.htm Patrick Henry, speech at the Virginia Ratifying Convention, June 1788 – http://www.unc.edu/~gvanberg/Courses/Henry%20June%205.htm Diane was born in New Jersey and lived there most of her life. She attended Seton Hall University School of Law, where Judge Andrew Napolitano was her constitutional law professor. She and her family moved to Greenville, NC in 2001. She is married and has 4 children.
We tend to assume that brains don’t go with brawn – but that assumption is turning out to be seriously flawed. As the latest video from the Head Squeeze team shows, exercising the body is one of the best ways to boost your intelligence and preserve it through old age. Consider this: one German study found that older people who enjoy mild exercise – such as gardening – are half as likely to suffer from cognitive impairment as they age. Another experiment found that pensioners asked to take a leisurely walk a few times a week scored better on attention and memory tests. But it’s not just older people: children who walk to school tend to concentrate better and get better test results than those given lifts in the car. One possible reason is that the exercise boosts the blood (and therefore oxygen) supply to the brain – which helps give it the energy to think. It might also promote the growth of neurons and perhaps encourage the release of certain neurotransmitters and growth hormones that are crucial to the brain’s overall health. All of which could contribute to better concentration and memory. In the future, some researchers are looking into specially designed “exergames” that incorporate physical activity with cognitive training to give your brain the best possible workout; early results suggest that the sum is greater than the individual parts. In the meantime, the work should at least give you one more reason not to put off that visit to the gym. For more videos subscribe to the Head Squeeze channel on YouTube. This video is part of a series produced in partnership with the European Union’s Hello Brain project, which aims to provide easy-to-understand information about the brain and brain health. If you would like to comment on this video or anything else you have seen on Future, head over to our Facebook page or message us on Twitter.
In the same way, to the casual observer, it no doubt seems that the number and kind of misfits is so great that any attempt to analyze them and classify them must meet with failure. Those, however, who have studied the problem and have met and talked with thousands of those struggling against the handicap of unloved and difficult work, find a few classes which include nearly all of them. Just as there are two fundamental reasons why men and women select wrong vocations, and a few common variations upon these two reasons, so there are just a few general ways in which people select the wrong vocations. An examination of some of these will be illuminating to the reader. THE PHYSICALLY FRAIL In the beginning of the life of the race all men hunted, fished, fought, danced, sang, and loafed. These were the only manly vocations. There were no clerks, no doctors, and, perhaps, no priests. In some races and under some conditions to-day, all of the men are hunters and fishers, or shepherds and stock-raisers, or all the men till the field. Some years ago, in our country, practically all the male population worked at the trade of agriculture, there being only a few preachers, doctors, lawyers, merchants, and clerks. In the nations of Europe to-day people are born to certain professions or born to a certain narrow circle of vocations; some people are born to manual labor, and, having once performed manual labor, are thereby firmly fixed in the class of those who earn their living by their hands; others are born in a class above that, and will suffer almost any privation rather than earn their living by manual labor. In the United States this same feeling is becoming more and more prevalent. Our physical work is nearly all of it done by those who came to us from across the sea, and native-born Americans seek vocations in some other sphere. The common school is everywhere, and education is compulsory. The high school is also to be found in all parts of the country. 
There are also business colleges, technical schools, academies, universities, colleges, professional schools, correspondence schools, and other educational institutions of every possible kind. These are patronized by the native-born population as well as by many of those who come to us from foreign lands. The result is that, of the first great class which we shall treat, there are comparatively few in relation to the whole population. Even though this is true, there are all too many. The first class of misfits is composed of those who are too frail for physical labor and who are not well enough educated to take their places amongst clerical or professional workers. These unfortunates do not like hard, manual work; they cannot do it well; they are outclassed in it. They do not hold any position long; they are frequently unemployed; and they are often compelled to live by their wits.
Our brain stores material brought to it from all five of our senses. You can recall, sometimes quite vividly, visual images, sounds, smells, textures, and so on. We know that the richer the experience is in sensory terms, the more easily the brain can store it and recall it later.

The higher education system has traditionally been tailored toward "left brain processing." The left lobe of the brain specializes in activities that are primarily mathematical/verbal, sequentially arranged, and logical--precisely the types of activities that go along with listening in class, reading a textbook, taking notes, and so on. The processing that is specialized in the right lobe of the brain goes largely unused by most college students in their formal learning experiences. The right side of the brain specializes in activities that are visual and spatial, "holistic," emotional, creative, and intuitive.

Many learning theorists believe that most people are inclined to use or are more comfortable using either the right or left side of the brain for some learning activities. It stands to reason (and to intuition!) that learners who actively use both lobes of the brain will be able to learn more easily and develop "richer" mental concepts that the brain should be able to retrieve when it needs to.

Mind maps are tools that you can use when reviewing for a test to take advantage of right-brain processing. A mind map is a device that represents a concept in both verbal and nonverbal terms. It depends on spatial and visual cues that serve as powerful links to aid recall.

This is typical left-brained material: words on the page, read from top to bottom and left to right. How much of the outline can you write below?

OK. Now what can you recall about what was on the page? Unless you are good at remembering abstract concepts, you probably are able to recall just a few details. It's hard to get a "picture" in your head of the material.
As you can see, a mind map makes use of a spatial arrangement that puts the main topic in the center, with branches off of it to label the main subpoints. It has the same verbal elements that the outline had, but it uses crude sketches and colors as well. How much of the mind map can you write below? What colors did you see? What images? Where were they on the page? You should be able to recall a "picture" of what you saw. Remembering the "big stick" can remind you of Roosevelt's aggressive dealings with big business, and the broom can help you recall his "sweeping" election victory and some of the other campaign issues listed on the page. Using Mind Maps Mind maps can be powerful review and test preparation tools, particularly for essay items. As you look over your notes and text assignments, try to pick out a half dozen or so major topics that you are almost sure to be tested over and construct a mind map for each one. As you create each mind map, you will be establishing relationships that will strengthen your overall understanding of the topic, and you should be able to recall the images and the content on each mind map better than you would just the class notes or textbook material. Creating Mind Maps Mind mapping does not have any hard and fast rules, but the following basic characteristics describe what works best for most students: 1. Begin by putting the main topic or point of focus in the center of the page. Starting in the center of the page allows for the greatest flexibility and helps to keep the main idea quite literally "front and center." You should also draw a box or circle around this main idea. 2. As you identify main subpoints, major elements, or "dimensions" of the topic, draw a line branching off the central topic and leading to the label for the subpoint. You can start your first branching idea anywhere on the box that encircles the main idea. 
The line should be at least an inch or two long and it should lead to a word or phrase that labels the subpoint. Draw a circle or box around this subpoint. Try to limit the number of subpoints to four or five. If you are coming up with more than that, perhaps it would be best to combine some or divide your overall topic into two separate maps. Limiting the number of subpoints will keep the mind map from getting too "busy" or complex. 3. Look for details that support or illustrate the subpoints and attach these to the main branching lines. Record these details in key words or short phrases. 4. Once you feel that you have "captured" the topic on the page, if the map is lopsided, too complex, or in some other way just difficult to mentally take in, you might want to do a second map to simplify or refine the topic. The structure should be balanced and so obvious that it "jumps off the page." At test time, you should be able to close your eyes and see the structure of the map in your mind's eye. 5. Personalize your mind map with colors, symbols, and simple sketches. You might use several different colored highlighters to make the main subpoints stand out visually. Sketches and symbols help bring other sensory images into the mind map. It isn't necessary that you be an artist to make these symbols useful. As long as these simple images mean something to you, they will serve their purpose.
Generally in C++, the destructor is called when an object is destroyed. Destructors can be called explicitly in C++, and objects are destroyed in the reverse order of their creation. So in C++ you have control over the destructors. In C# you can never call them explicitly, because you cannot destroy an object yourself. So who controls the destructor in C#? It's the .NET Framework's garbage collector (GC). The GC destroys objects only when necessary, for example when memory is exhausted or when the user explicitly calls the System.GC.Collect() method.

Points to remember:
1. Destructors are invoked automatically, and cannot be invoked explicitly.
2. Destructors cannot be overloaded. Thus, a class can have, at most, one destructor.
3. Destructors are not inherited. Thus, a class has no destructors other than the one that may be declared in it.
4. Destructors cannot be used with structs. They are only used with classes.
5. An instance becomes eligible for destruction when it is no longer possible for any code to use the instance.
6. Execution of the destructor for the instance may occur at any time after the instance becomes eligible for destruction.
7. When an instance is destructed, the destructors in its inheritance chain are called, in order, from most derived to least derived.
The human rights of indigenous peoples in Guatemala are under threat due to large-scale extraction of natural resources and ongoing encroachment on their lands. Their conflict with the state over these issues is now affecting their security, said Pablo Ceto, an indigenous community leader and a human rights activist from Ixil, Guatemala. He shared these views during his visit to the World Council of Churches (WCC) offices on 15 October 2013 in Geneva, Switzerland. Ceto serves as director of Fundamaya, an organisation working for the rights of indigenous peoples in Guatemala.

Ceto met with staff of the WCC's Commission of the Churches on International Affairs (CCIA). The CCIA will highlight the issues of people affected by transnational corporations and business enterprises in Guatemala at the Second United Nations Forum on Business and Human Rights, to be held in Geneva from 2 to 4 December. At the forum, the CCIA will facilitate the participation of human rights activists and indigenous leaders from Guatemala.

During the meeting Ceto explained that despite strong opposition to land grabbing and the exploitation of natural resources, the state and the multinational corporations operating in mining and extraction are violating the rights of indigenous peoples. The current conflict between indigenous communities and the government is a continuation of the country's 36-year civil war, which ended in 1996 with the signing of peace accords. The conflict claimed the lives of more than 200,000 people, 80 per cent of whom were indigenous civilians of Mayan descent. However, in the subsequent post-war period, the Guatemalan government enacted several policies aimed at making the country more financially attractive to foreign investors. These new policies led to a proliferation of transnational and national resource extraction projects on lands belonging to indigenous peoples, Ceto explained.
Indigenous peoples of Guatemala have been striving for their right to "free, prior and informed consent" (FPIC) on mega-projects near their communities, but without any effective measures being taken by the state. It has been widely reported that the multinational corporations and the government are consistently violating indigenous peoples' right to FPIC, which is affecting the security of indigenous communities. As part of the WCC's efforts to support human rights in Guatemala, a CCIA delegation visited the country in November 2012, when indigenous leaders informed them that many indigenous people are denied the right to their ancestral land, which continues to be a cause of social unrest. Land grabbing in indigenous peoples' territory is currently increasing, despite Guatemala having ratified International Labour Organization Convention 169, which stipulates consultation with indigenous peoples on the use of their land and territory.
Carl Vinson (1883-1981)

Carl Vinson, recognized as "the father of the two-ocean navy," served twenty-five consecutive terms in the U.S. House of Representatives. Born on November 18, 1883, in Baldwin County, Vinson was one of seven children born to Edward Storey Vinson, a farmer, and Annie Morris. He attended Middle Georgia Military and Agricultural College in Milledgeville, read law with county judge Edward R. Hines, and earned a degree from Mercer University's law school in Macon in 1902. Admitted to the state bar, Vinson became a junior partner of Judge Hines in Milledgeville. After serving two terms as county court solicitor, he won a seat in the Georgia General Assembly at age twenty-five. Reelected two years later, he was chosen Speaker pro tempore during his second term. In 1912 Vinson suffered his only defeat at the hands of the voters of middle Georgia in a political career that spanned six decades. His bid for a third term in the legislature lost by five votes, apparently the result of voter backlash over reapportionment. The governor then appointed him judge of the Baldwin County court. Soon afterward, however, when the U.S. representative from the Tenth District resigned, Vinson ran for the vacant House seat. Easily defeating three wealthy opponents, he was sworn in on November 3, 1914, as the youngest member of Congress. Competent and hardworking, he became a fixture in Congress. After defeating the former Populist leader Thomas E. Watson in 1918, he rarely faced opposition. In 1921 he married Mary Green of Ohio. They had no children. She died in 1949 after a lengthy illness, and he never remarried. Although Vinson represented a landlocked district, he secured a seat on the Naval Affairs Committee in 1917. Convinced that increased spending for national defense was absolutely necessary, Vinson found Franklin Roosevelt more receptive to his arguments.
In 1934 Roosevelt signed the Vinson-Trammell Act, which would bring the navy to the strength permitted by the treaties of 1922 and 1930. As conditions in Europe and Asia became more ominous, Vinson wrote several bills strengthening the navy and applying aircraft in national defense. Twenty months before the Japanese bombed Pearl Harbor, an event that precipitated America's entry into World War II (1941-45), Vinson steered two bills through Congress. The first called for expanding naval aviation to 10,000 planes, training 16,000 pilots, and establishing 20 air bases; the second speeded naval construction and eased labor restrictions in the shipbuilding industry. Assessing Vinson's impact on sea power, Fleet Admiral Chester Nimitz later remarked, "I do not know where this country would have been after December 7, 1941, if it had not had the ships and the know-how to build more ships fast, for which one Vinson bill after another was responsible." A modest man of simple tastes, Vinson shunned the limelight and quietly did his duty. When Congress was in session, he lived in a modest six-room bungalow in Chevy Chase, Maryland; when it adjourned, he retreated to his 600-acre farm near Milledgeville. Unlike most of his congressional colleagues, he rarely traveled. He went to the Caribbean once in the 1920s and never traveled abroad again. He rarely set foot on an airplane or ship and never learned to drive a car. Eccentric in many ways, he smoked or chewed cheap cigars, wore his glasses on the end of his prominent nose, and spoke with a middle Georgia drawl. Although he appeared to be a country bumpkin, his shrewd political instincts, enormous common sense, and mastery of detail enabled him to dominate his committee and steer legislation through Congress. Vinson asserted, "The most expensive thing in the world is a cheap Army and Navy." During the cold war he continued to stress the need for military preparedness, especially a buildup of strategic bombers. 
He rammed his views through Congress, often over the objections of the president. Indeed, throughout his career he tangled with presidents, cabinet members, and top brass, whittling pompous admirals and generals down to size. When he was rumored to be in line for appointment as secretary of defense, his standard rejection was, "I'd rather run the Pentagon from up here." After serving fifty years and one month, Vinson quietly retired to his Baldwin County farm, having set the record for longevity in the House. In 1964 U.S. president Lyndon B. Johnson awarded Vinson the Presidential Medal of Freedom—the highest award that a president may bestow upon a civilian. U.S. president Richard Nixon honored Vinson in 1973 by naming the nation's third nuclear-powered carrier for him. He died in Milledgeville on June 1, 1981, at age ninety-seven. In 1983 the Institute of Public Affairs at the University of Georgia was renamed the Carl Vinson Institute of Government. The institute seeks to improve the understanding, administration, and policymaking of governments and communities by bringing the resources and expertise of the university to bear on the issues and challenges facing Georgia.
There are two types of nutrition claims on foods: nutrient content claims and health claims. These claims must also follow certain rules from Health Canada to make sure that they are consistent and not misleading. These claims are optional and may be found on some food products.

Nutrient content claims describe the amount of a nutrient in a food. "A good source of iron" is an example of a nutrient content claim. Health claims are statements about the helpful effects of a certain food, consumed within a healthy diet, on a person's health. For example, "a healthy diet containing foods high in potassium and low in sodium may reduce the risk of high blood pressure, a risk factor for stroke and heart disease" is a health claim.

A nutrient content claim can help you choose foods that contain a nutrient you may want more of. Look for words such as:

A nutrient content claim can also help you choose foods that contain a nutrient you may want less of. Look for words such as:

Keep in mind, because nutrient claims are optional and only highlight one nutrient, you still need to refer to the Nutrition Facts table to make food choices that are better for you.

A health claim can help you choose foods that you may want to include as part of a healthy diet to reduce the risk of chronic diseases. An example of a health claim is "a healthy diet rich in a variety of vegetables and fruit may help reduce the risk of some types of cancer."

Keep in mind, because health claims are optional and only highlight a few key nutrients or foods, you still need to refer to the Nutrition Facts table to make food choices that are better for you.

Other types of claims, often referred to as general health claims, have appeared in recent years on front-of-package labelling. They include broad "healthy for you" or "healthy choice" claims as well as symbols, logos and specific words. These claims are not developed by the government. Instead, they are developed by third parties or corporations.
While it is required that the information be truthful and not misleading, consumers should not rely only on general health claims to make informed food choices.
A worker at Goold Orchards in Schodack, N.Y., was pruning apple trees two weeks ago. This week, orchards from the Midwest to the East Coast were threatened by overnight frosts. (Mike Groll/AP) Last week, when I cautioned in my blog that we shouldn't be so giddy about warmer-than-normal temperatures in March, people called me a killjoy, a wet blanket, a nattering nabob of negativism, and worse. I now have more bad news. As it turns out, a global warming-induced mild winter and early spring not only can ruin vacation plans and encourage invasive species, they can increase the risk of plant damage from late-season frosts. Why is that a problem? Freezing temperatures following warmer weather could mean millions of dollars in crop losses--and higher prices for lower quality produce at the market. Given the cold overnight temperatures over the last few nights in some parts of the country--and the possibility of more frost over the next few weeks--that could be what's in store for this year. This scenario played out back in 2007. The eastern half of the United States experienced unusually warm temperatures in March--the second warmest in U.S. records to date--prompting trees and other plants to develop earlier than usual. This premature leaf and bloom made them vulnerable to a mass of cold Arctic air that swept through the central Plains, the Midwest, and much of the Southeast. Between April 4 and 10 there were more than 1,200 record lows in the lower 48, with temperatures in the South over the Easter weekend dipping below 25 degrees F, which can wipe out 90 percent or more of most crops. For farmers, the Easter freeze was catastrophic, causing an estimated loss of $2.2 billion in field crops, fruit crops and ornamental plants. According to an October 2007 report by the National Oceanic and Atmospheric Administration and the U.S. Department of Agriculture (USDA), winter wheat growers in nine South and Midwest states lost some $439 million. 
Peach growers in 11 states produced 76 percent less than the previous year--a $99 million loss. Apple growers in 10 Southeast and Midwest states, meanwhile, suffered a 67 percent drop in production from 2006, losing nearly 319 million pounds of apples worth about $76 million. Other crops, including alfalfa, apricots, Asian pears, blackberries, blueberries, corn, grapes, hay, pecans and plums, also took significant hits. All told, the USDA declared nearly 1,000 counties in 24 states disaster areas, making farmers in those counties--as well as contiguous ones--eligible for low-interest emergency loans. In March 2008, the journal BioScience published a paper by Oak Ridge National Laboratory scientist Lianhong Gu and seven other researchers explaining the implications of the 2007 freeze for a changing climate. Spring frosts are not unusual in the regions hurt by the Easter freeze, they pointed out, but until recently, it was rare for such an extreme freeze to follow an extended period of above-normal temperatures. And if nothing is done to dramatically curb fossil fuel emissions, stretches of above-normal temperatures are likely to become more commonplace. Likewise, if events like the Easter freeze became routine, they would undercut the potential benefits of a longer growing season and fewer frosts due to a warming climate. Indeed, Gu and his colleagues caution there would not necessarily be any reduced risk of frost damage. "Farmers and other land managers may respond to warming and reduced frost frequency by planting earlier or by planting alternative species," they wrote. "Natural plant populations and animal species might advance the development of crucial phenological [life cycle] phases, or with sufficient time, shift their ranges poleward or to higher elevations. With such adjustments or adaptations, the risk of frost damage could remain the same or even become greater." Gu et al. also detailed other, related, climate change threats to plant growth. 
For example, higher carbon dioxide concentrations in the atmosphere could reduce many plant species' resistance and tolerance to freezing temperatures. Warmer winter temperatures could lead to more freeze-and-thaw fluctuations weeks if not months before spring, delaying plant hardening and denying plants adequate time to acclimate to colder temperatures. Warmer winters also likely would mean reduced snow cover, shrinking snowpack and early snowmelt, changes that can "deprive plants of thermal protection when it is most needed." Spring freeze damage followed by more frequent summer droughts, meanwhile, would be another double whammy due to the fact that "drought limits post-freeze plant regrowth and recovery while freeze damage weakens plant tolerance to drought." Finally, worsening smog, triggered by fossil fuel emissions, would further exacerbate frost damage. "This [April 2007] freeze should not be viewed as an isolated event," Gu and his co-authors concluded. "It represents a realistic climate change scenario that has long concerned plant ecologists." That brings us back to 2012. Over the last several days, the National Weather Service has issued freeze warnings for a swath stretching from the central Great Lakes region to the East Coast, and farmers are rightly worried. A wide range of crops are threatened, including apples, apricots, cherries, grapes, peaches, pears, and possibly strawberries. "It's scary and amazing, but hopefully the crop will survive," Bob Barthel, a Wisconsin apple grower, told Milwaukee's ABC affiliate WISN on Wednesday. "It's not just my farm in Mequon, it's all the perennial crops, the apple crop nationwide. It's apples, cherries, blueberries, all the fruit crop is at risk this year." Two years ago, Barthel, who has 19,000 apple trees in his orchard, lost nearly 90 percent of his crop to frost damage after they started budding on April 1. This year, they started budding on March 18. 
Some crops can be protected at night with smudge pots--kettles that radiate heat--or sprinklers that generate mist around plants to form protective ice. Barthel is using a sprinkler system to try to save his strawberries, but there's not much he can do for his apples. Fortunately temperatures are supposed to rebound over the next few days, but most farmers won't be able to breathe a sigh of relief until early May, when the chance of overnight frost finally diminishes. And then they will have next year and the year after that to worry about. Although it is impossible to predict because of all the variables, Gu and company's "realistic climate change scenario" may just become an annual occurrence sometime this decade or next. But before you throw up your hands in despair, keep in mind that we are beginning to seriously address this problem. More than half of the states now require electric utilities to increase their reliance on clean, renewable energy. New fuel economy standards for cars and trucks will go a long way toward cutting oil consumption. And standards announced just this week for new power plants will clean up carbon emissions for the first time. That all is certainly good news, but there is still much, much more to do.

Elliott Negin is the director of news and opinion at the Union of Concerned Scientists.
Heart Rate and Daily Physical Activity with Long-Duration Habitation of the International Space Station Abstract:Fraser KS, Greaves DK, Shoemaker JK, Blaber AP, Hughson RL. Heart rate and daily physical activity with long-duration habitation of the International Space Station. Aviat Space Environ Med 2012; 83:577–84. Introduction: We investigated the pattern of activity and heart rate (HR) during daily living on the International Space Station (ISS) compared to on Earth in 7 long-duration astronauts to test the hypotheses that the HR responses on the ISS would be similar to preflight values, although the pattern of activity would shift to a dominance of arm activity, and postflight HR would be elevated compared to preflight during similar levels of activity. Methods: HR and ankle and wrist activity collected for 24-h periods before, during, and after spaceflight were divided into night, morning, afternoon, and evening segments. Exercise was excluded and analyzed separately. Results: Consistent with the hypotheses, HR during daily activities on the ISS was unchanged compared to preflight; activity patterns shifted to predominantly arm in space. Contrary to the hypothesis, only night time HR was elevated postflight, although this was very small (+4 ± 3 bpm compared to preflight). A trend was found for higher postflight HR in the afternoon (+10 ± 10 bpm) while ankle activity level was not changed (99 ± 48, 106 ± 52 counts pre- to postflight, respectively). Astronauts engaged in aerobic exercise 4-8 times/week, 30-50 min/session, on a cycle ergometer and treadmill. Resistance exercise sessions were completed 4-6 times/week for 58 ± 14 min/session. Discussion: Astronauts on ISS maintained their HR during daily activities; on return to Earth there were only very small increases in HR, suggesting that cardiovascular fitness was maintained to meet the demands of normal daily activities. 
Document Type: Research Article
Publication date: June 1, 2012
Published in the peer-reviewed monthly journal Aviation, Space, and Environmental Medicine (ASEM).
Salerno, Province of Salerno, Campania, Italy

In 1076 the Norman conqueror Robert Guiscard captured Salerno. In this period the royal palace (Castel Terracena) and the magnificent cathedral were built, and science flourished as the Salerno Medical School, considered the most ancient medical institution of Western Europe, reached its maximum splendour. With the Hohenstaufen dynasty (Swabians), at the end of the 12th century, there was a period of economic revival in the city. Following the advice of Giovanni da Procida (a famous citizen of that time), King Manfred of Sicily, Emperor Frederick II's son, ordered the construction of a dock that still today bears his name. Moreover, he founded Saint Matthew's Fair, which was the most important market fair in Southern Italy. After the Angevin conquest and from the 14th century onwards, most of the Salerno province became the territory of the Princes of Sanseverino, powerful feudal lords who acted as real owners of the region, accumulating enormous political and administrative power and attracting artists and men of letters. In the first decades of the 16th century the last descendant of the Sanseverino princes came into conflict with the Spanish Government, causing the ruin of the whole family and the beginning of a long period of decadence for the city. The years 1656, 1688 and 1694 were sorrowful dates for Salerno: plague and earthquake caused many victims. A slow renewal of the city occurred in the 18th century with the end of the Spanish empire and the construction of many refined houses and churches characterising the main streets of the historical centre. In 1799 Salerno was incorporated into the Parthenopean Republic; during the Napoleonic period, Joachim Murat decreed the closing of the Salerno Medical School, which had been declining for decades to the level of a theoretical school.
The post-war period was difficult for all the Italian cities of the south, but Salerno managed to improve little by little. In recent years the town administration has taken great strides giving a great impulse to the revaluation of the historical centre, the rediscovery of the artistic and cultural treasures. Salerno can now offer tourists a charming synthesis of Mediterranean culture and the fascinating landscapes of the Amalfi Coast.
Egypt is still in shock over Ethiopia's May 28 announcement, in which it said it was diverting the flow of the Nile River to facilitate the building of a dam on the Blue Nile. In the fifth century BCE, Greek historian Herodotus proclaimed Egypt the gift of the Nile – and this still resonates today. The mighty river surging from the depths of Africa to the Mediterranean, with its more than 4,000-mile course, is the lifeblood of Egypt and has made a flourishing civilization possible since the dawn of history.

Ninety-six percent of the country is relentless desert, the continuation of the Sahara, and the Nile not only waters the lands it passes through, it also carries loose soil taken from Africa and deposits it along its banks. In this way, it turns them into a narrow strip of fertile land – covering a mere 40,000 square kilometers, or 4% of a country of 1 million square kilometers. North of Cairo the river divides into two branches running to the sea, thus creating a delta in which most of Egypt's agriculture is concentrated. Altogether, 96% of a population numbering an estimated 85 million people lives in the Nile Valley.

For untold generations, Egypt has been accustomed to seeing the Nile as its own property, only grudgingly allowing Sudan – which was long under Egyptian rule and considered a sister Arab country contributing to its security – to have a small part of the river's flow. According to the treaty signed in 1929, at a time when both countries and part of Africa were under British rule, out of the 85 billion cubic meters flowing annually in the river, Egypt received 48 billion and Sudan 4 billion. Egypt was given full control of the Nile, while African countries were forbidden to build dams on the river or its tributaries; Egypt also had the right to carry out checks to make sure that the treaty was respected.
In accordance with the treaty, Egypt still maintains today a permanent delegation of engineers stationed near Lake Victoria, source of the White Nile, to supervise the activities of the countries along the river. In 1959, the treaty was amended so that Egypt received 55.5 billion cubic meters and Sudan 18.5, for a total of 87% of the annual flow accrued through the rains – leaving a mere 13% to the Upper Nile countries of Ethiopia, Tanzania, Uganda, Burundi, Rwanda, Kenya and Congo. The treaty gave Egypt the right to build the Aswan Dam, and its Lake Nasser reservoir holds 168 billion cubic meters of water. The dam made it possible for Egypt to boost its production of electricity to 2,100 megawatts and to regulate the flow of the river, putting an end to the annual flooding that impacted Cairo and other areas. Lake Nasser is used to provide water for drinking and irrigation, thus increasing usable lands. In this way, Egypt has remained an agricultural land and cannot envision a future with no free and steady supply of water from the Nile for its multipurpose uses. However, the past 50 years have seen changes in Africa. The growing populations of newly independent states need more and more water – drinking water, water for agriculture and for industry, water to produce electricity. For the past 10 years they have had talks on the subject with Egypt, which stubbornly refused to see the problem and forbade them from taking advantage of the river flowing through their countries. Egypt even exerted pressure on the World Bank to refrain from financing projects along the Nile, and resorted to thinly veiled threats against the countries that were considering such projects. But the problem would not go away. In May 2010 at Sharm el-Sheikh, proposals for a new treaty were presented to Egypt by the upstream countries.
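The allocation figures just cited can be checked with a line of arithmetic. A minimal sketch (all numbers are the article's own; billions of cubic meters per year):

```python
# Nile flow shares under the 1959 treaty amendment, as quoted in the article
# (billions of cubic meters per year).
TOTAL_ANNUAL_FLOW = 85.0
EGYPT_SHARE = 55.5
SUDAN_SHARE = 18.5

def downstream_percentage(egypt, sudan, total):
    """Percentage of the annual flow allocated to Egypt and Sudan combined."""
    return round((egypt + sudan) / total * 100)

downstream_pct = downstream_percentage(EGYPT_SHARE, SUDAN_SHARE, TOTAL_ANNUAL_FLOW)
upstream_pct = 100 - downstream_pct  # remainder left to the Upper Nile states

print(downstream_pct, upstream_pct)  # 87 13
```

The 74 billion cubic meters allocated downstream is indeed about 87% of the 85 billion total, matching the article's "leaving a mere 13%" to the seven upstream countries.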
The Entebbe Agreement they drafted created a blueprint for cooperation between all Nile River countries, which would supersede all previous agreements and provide for a new partition of the water, to answer the needs of all countries in a more equitable manner. Egypt rejected the agreement on the basis of the treaties of 1929 and 1959. The Upper Nile countries then decided to submit the Entebbe Agreement for signature to all river states so that it could be implemented within a year. Angry debates have been raging ever since. Neither the Hosni Mubarak regime nor the army regime that followed was ready to enter into discussions with the relevant African states – which nevertheless kept on planning the dams they needed to develop their countries. The Blue Nile, which provides 85% of the river’s water, has its source in Ethiopia. The country, the largest in the region with a population set to overtake that of Egypt in the coming decades, has begun to build several dams. The best-known is the Grand Renaissance Dam, which is scheduled to hold 200 billion cubic meters in its reservoir and provide 6,000 megawatts of electricity. Pressure from Egypt has not deterred Ethiopia, which insists upon developing its water resources, as Egypt clings to the position that the two treaties granted it the right to control what goes on in the river. Suddenly, last week – following meetings between Egyptian President Mohamed Morsi and Ethiopian Prime Minister Hailemariam Desalegn – Ethiopia published the communiqué announcing that the river would be diverted to facilitate the completion of the Grand Renaissance Dam. Egyptians are offended at what they perceive as an insult, since Morsi knew nothing of the communiqué. However, on a deeper level, they feel that the very basis of their existence is being threatened.
They have yet to come to terms with the new reality in the region and the needs of the upstream countries. So far, Ethiopia says that there will be no change to the amount of water reaching Egypt, and that the reservoir will not start functioning until next year and will not be full before 2017. Egyptians do not quite believe it and are afraid that their share will be affected, since the Ethiopians will slow the flow of the river in order to fill the dam. This at a time when the individual consumption of water in Egypt has dropped to 759 cubic meters, well below the 1,000 mark recommended by the UN. Cairo is worried. While the president and sundry officials repeat that they will not tolerate attempts on their water, they say it is too early to come to the conclusion that the Grand Renaissance Dam will affect Egypt. Instead, they want to wait for the conclusions of the tripartite commission of experts from Egypt, Sudan and Ethiopia. The commission submitted its findings last week and they are still being reviewed; further studies may be needed. Politicians, on the other hand, are not waiting. There have been calls for a stronger stand against Ethiopia and other Upper Nile countries; some would even want to see military action such as blasting the dam, and Islamist groups are calling for jihad against Ethiopia. Hamdeen Sabahi, leader of the Nasserist movement and a former presidential candidate, wants Ethiopia punished – by, for instance, barring its vessels from crossing the Suez Canal. Furthermore, says Sabahi, a similar measure should be extended to Italy, the US and Israel, since according to him these countries are providing the financing for the dam. It was left to the daughter of Gamal Abdel Nasser, a professor of political science, to point out that according to the Constantinople Convention of 1888, there must be free passage in the canal in times of both war and peace – and any one-sided move by Egypt would harm it and endanger the course of world navigation.
The minister in charge of irrigation has been at pains to stress that Egypt should not resort to force and that there is still time for negotiation. However, he added that there is today a deficit of 7 billion cubic meters of water, which is expected to grow – with an estimated 150 million people living in Egypt by the year 2050, and the deficit reaching 21 billion cubic meters of water. In essence, an agriculture minister would see the building of the dam as akin to using armed force against Egypt. To make matters worse, Sudan, which Egypt considered its staunchest ally on the Nile issue, has apparently come to the conclusion that it would not be harmed by the dam – though some argue that Sudan wants to take advantage of the situation, to force Egypt to be more accommodating regarding the vast, disputed Halayeb and Shalatan territories on the Red Sea. (Though ruled by Egypt, the territories are claimed by Sudan as its own.) Ultimately, however, the fact is that due to its copious rainfall, Sudan does not lack water, while Egypt is entirely dependent on the Nile. As is always the case with Egypt, Israel is accused of a variety of sins: inciting Ethiopia against Egypt, and even granting agricultural assistance to Ethiopia and thus increasing that country’s need for water. Of course, Egyptians are conveniently forgetting that they themselves were the recipients of Israel’s technology in the ’80s and ’90s, and that it was thanks to that help that they were able to grow crops in the light desert soil. Egyptian agriculture today is based on such Israeli techniques as drip irrigation, and on Israeli varieties of fruits and vegetables. Thousands of young Egyptians trained at Kibbutz Bror Hayil, where they learned how to cultivate the soil and save precious water. The truth is that the writing was on the wall. Egypt had years and years during the Mubarak regime to enter into discussions with Upper Nile states with a view toward reaching an agreement.
Both Egypt and the Nile states needed increasing amounts of water for their development, and cooperation and a change in the existing treaties were needed. Unfortunately, the press was not free to publish studies on the subject, which would have been considered detrimental to Egypt’s interests. Yet, while some 1,600 billion cubic meters of rain fall annually on the Nile Basin area, a mere 85 billion eventually reach the river; some of the water evaporates as swampy areas appear and slow the flow. A concerted effort of all neighboring countries financed by the World Bank would considerably increase the amount of water in the river. Thus far, however, nothing has been done and Egypt is still in a state of denial, with Egyptian diplomacy suffering a serious blow. The questions remain: Can the troubled country, threatened by a potential agricultural disaster and widespread famine, understand that now is the time to enter into serious negotiations? Has it understood that only a fair and equitable solution, taking into consideration the legitimate needs of all Nile countries, will end the crisis in time? The writer, a fellow of The Jerusalem Center for Public Affairs, is a former ambassador to Romania, Egypt and Sweden.
DNR seeks to lift protections on gray wolves
As the wolf population continues to grow, the Wisconsin Department of Natural Resources said Tuesday it is once again asking federal authorities to remove the gray wolf from the federal endangered species list. The agency said it has asked the U.S. Department of the Interior for permission to reclassify the status of the wolf so state authorities would have more flexibility to control a burgeoning wolf population. Wolves have historically been a flash point of controversy, and in recent years the gray wolf has been the subject of a series of court fights that has changed its protective status several times. If the request is approved, problem wolves could be killed. The decision could take months, or more; in the interim, the DNR on Monday asked federal officials for expanded authority to use lethal controls on wolves that have killed livestock and other animals.
Mosquito breeding season is in full swing. Here is some solid advice for preventing and repelling these itch-inducing pests on your property. Household Mosquito Repellents Dr. Jody Gangloff-Kaufmann, an entomologist and integrated pest management specialist from Cornell University, said, "The original pest control option was a screen." It sounds simple, but keeping mosquitoes out of your house is the best way to stop them from biting you inside your house. Electric mosquito "zappers" don't work. Entomologists from the University of Kentucky said that bug zappers that use ultraviolet light to attract mosquitoes actually attract mosquitoes to the area being protected. Mosquitoes generally comprise only a small percentage of the insects that bug zappers kill. Flies, beetles and other innocuous flying insects comprise the majority. According to the University of Kentucky entomologists, ultrasonic sound devices don't work to repel mosquitoes, either. The distributors of ultrasonic devices claim that they repel mosquitoes by mimicking the frequency of male mosquito wing beats. Some claim that they mimic the frequency of dragonfly wing beats, a natural predator of mosquitoes. Dr. Wayne J. Crans of Rutgers University said that these claims "border on fraud." Even if the electronic devices actually mimic the sound frequencies that they claim to, Dr. Crans said that female mosquitoes are not repelled by the sound of male mosquitoes, and that mosquitoes are not particularly afraid of dragonflies. The household repellent that has been scientifically proven to work is citronella oil. The University of Kentucky entomologists said that citronella candles can provide some amount of protection. They said that one candle placed in the center of an outdoor table will not be effective. Rather, multiple citronella candles should be placed a few feet away from the area in which people are sitting. 
The Mississippi State Department of Health cited a study which reported that citronella candles with a 3% concentration offer a 42% reduction in mosquito bites, while regular candles offer a 23% reduction. According to the American Mosquito Control Association, a chemical called permethrin effectively repels mosquitoes when it is applied to clothing and bed nets. Permethrin should never be applied directly to skin. However, it can be applied safely to mosquito netting. It has been used widely for years as a mosquito netting treatment in countries with high rates of malaria.
Preventing Mosquitoes from Hatching on Your Property
The best way to prevent mosquitoes from biting you in or around your house is to prevent them from hatching on your property. Dr. Gangloff-Kaufmann said, "The preventative stuff that a homeowner can do simply revolves around habitat. You want to eliminate standing water to eliminate their larval habitat." The lifecycle of a mosquito goes like this: An adult mosquito lays a raft of eggs in water. They don't need a large body of water to lay eggs in - a dog bowl with a little water left inside it is sufficient to make a mosquito larval habitat. The eggs turn into larvae within 48 hours of being laid. The larvae feed on microorganisms in the water. It takes larvae about 1-6 days to become pupae. The pupae take about 48 hours to develop into full-grown adult mosquitoes. I'm telling you this so that you understand how quickly mosquitoes reproduce. If you inadvertently leave standing water on your property, mosquitoes will reproduce in it extremely quickly. Here are some examples of potential mosquito larval habitats:
- Bird baths
- Wading pools
- Rain gutters
- Old tires
- Pet dishes
- Flower pot bottoms
- Crumpled plastic sheeting and tarps
- Soil depressions that accumulate water
Although you might be a responsible homeowner who prevents larval habitats on your property, your neighbors might not be. Dr.
Gangloff-Kaufmann said, "Unfortunately, your neighbor might still have their birdbath or bucket of water. So it's hard to eliminate all habitat around you. I had neighbors who just on a whim got a pool and put it up. They were not real serious about maintaining this pool so they chlorinated it once and they left it. So through the summer it turned green, and by August we had Asian tiger mosquitoes attacking us. I ended up calling the county vector control program and they came and inspected it and the town made them take the pool down."
The Bottom Line
Keeping mosquitoes away from your house takes vigilance in terms of inspecting your property for potential larval habitats. If preventative measures and mosquito repellents are not effective, consider calling a qualified exterminator or your local vector control program.
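The stage durations in the lifecycle above add up quickly, which is why a forgotten container becomes a mosquito source in under two weeks. A minimal sketch of that arithmetic; reading "within 48 hours" as a 1-2 day range is an assumption, not a figure from the article:

```python
# Stage durations in days, taken from the lifecycle described above.
# The 1-2 day range for egg hatching is an assumed reading of "within 48 hours".
STAGES = {
    "egg to larva": (1, 2),   # eggs hatch within 48 hours of being laid
    "larva to pupa": (1, 6),  # larvae feed for about 1-6 days
    "pupa to adult": (2, 2),  # pupae develop into adults in about 48 hours
}

def egg_to_adult_days(stages):
    """Return the (fastest, slowest) total days from egg to biting adult."""
    fastest = sum(lo for lo, hi in stages.values())
    slowest = sum(hi for lo, hi in stages.values())
    return fastest, slowest

fastest, slowest = egg_to_adult_days(STAGES)
print(f"Egg to adult: roughly {fastest}-{slowest} days")  # roughly 4-10 days
```

In other words, a dog bowl of standing water can yield adult mosquitoes in as little as four days, which is why weekly inspection of your property is the practical minimum.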
By Joseph Robertson For many Americans, climate change has long seemed like something remote in space and time, a crisis that would affect people in other places a long time into the future. For skeptics, it seemed like we didn’t have to prioritize climate mitigation in order to build a secure and prosperous American republic, even when thinking decades into the future. We are only just now beginning to see that the destabilization of Earth’s climate system is bringing real impacts directly into our communities, in the here and now. The Third National Climate Assessment, released last month, makes this clear: Climate change is happening now, and it is affecting our economy and our daily lives in disruptive ways, and costs of dealing with this ongoing destabilization will only increase over time. In fact, the report specifically finds that “The observed warming and other climatic changes are triggering wide-ranging impacts in every region of our country and throughout our economy.” The destabilization of historically reliable climate patterns is having an impact on every region across the U.S. Our region, the Northeast, is facing a number of serious costly impacts. Already, communities across our region are experiencing deeper and longer heat waves, “more extreme precipitation events, and coastal flooding due to sea level rise and storm surge.” And not only coastal New Jersey is facing impacts. The dislocation of weather systems threatens to alter watershed ecology, the resilience of farming areas and the sustainability of green open spaces. The state’s normally lush, green, natural environment, fed by many local rivers and tributaries, with a low altitude above sea level, make the threat of saltwater intrusion a problem for most of the land area of the state. Where agriculture faces increased soil salinization, water resources tend to be more stressed and farmers have to turn to chemicals to optimize productivity and reduce the threat from crop pests. 
Runoff from these practices can pose a sustained public health risk. The collapse or migration of coastal ecosystems, and the salinization of green spaces, are real costs that will deplete assets that help to drive some of the most reliable sources of prosperity in New Jersey’s economy. What’s more, higher atmospheric carbon dioxide levels have been shown to correspond to a decrease in the nutritional value of farmed plants. This means we need to produce more food to meet the same nutritional needs. And it is not just human beings, but also animals, insects and crop pests that require more plant life to get the necessary energy to sustain health. So the same atmospheric imbalance that is driving climate change also puts agriculture under added pressure and leads to less sustainable farming practices such as more intensive and widespread use of chemicals. The result is food insecurity regionally and globally and the related degradation of human health. Water and air quality face long-term degradation, and in southern New Jersey, bark beetle infestation (the beetles have moved north as climate patterns shift) is already posing a mounting threat to the ecological integrity of the Pine Barrens. These are real-world costs, and they are now unfolding in real time. Last fall, the Intergovernmental Panel on Climate Change issued its Fifth Assessment Report, which also found that climate impacts are arriving much sooner than expected, and in every region of the world. Previous IPCC and NCA reports have been extremely conservative in their prognostications, because scientists were under heavy political pressure not to publish anything that would not qualify as irrefutable, hard fact. Contrary to what many in the political sphere believe, climate reporting has tended to understate, not overstate, future risks. 
We have seen the skeptics’ questions taken with that level of seriousness, and year after year, hard science has become more refined, more precise and more comprehensive. We have seen the skeptics’ questions answered, and we are now watching ecological, agricultural, quality-of-life and other economic losses unfold. The International Energy Agency recently released its report on the cost of escaping this mess. It found that just the last two years of inaction have added $8 trillion to future global costs. With so many of the skeptics’ questions now answered by hard science, conservatives are starting to look for a solution that lines up with their values and their fiscal priorities. A revenue-neutral carbon fee and dividend plan, supported by former Reagan Secretary of State George Shultz and others, would send a clear price signal to investors to transition away from climate-destabilizing practices, with no new government spending, no new bureaucracy and no new regulation. The logic of responsible action is simple: New Jersey is facing real, long-term economic and environmental threats; so is our nation and so is the wider world. We can deal with these threats affordably, and prosper as a result. No one’s ideology should be an obstacle to supporting smart solutions. Joseph Robertson is strategic coordinator for the nonpartisan, nonprofit volunteer organization Citizens’ Climate Lobby.
A newly spawned salmon or trout still carrying the yolk.
- The newly hatched salmon, called ‘alevins,’ remain in rapid water until they are about 65 mm long.
- In mid spring, after the larval fish has absorbed its yolk sac, it emerges as an alevin and proceeds to find suitable habitat for the summer period.
- After 90-150 days (depending on temperature) the eggs hatch, and the alevins (fry with yolk sacs attached to the underside) stay in the gravel until the yolk sac is used up.
Origin: mid 19th century, from French, based on Latin allevare 'raise up'.
Real Cougars With Big Butts Are Healthier
New research suggests the fat responsible for producing the pear shape flaunted by celebrities such as Jennifer Lopez and Beyonce may be active in protecting women from diseases by releasing certain hormones. Researchers at Harvard Medical School believe buttock and hip fat may protect women against type 1 diabetes. We've been hearing for a long time now that different fats affect our health in different ways. People with the apple shape, where fat is stored around the tummy, can be more prone to type 2 diabetes and heart disease. Those with pear-shaped bodies, where fat is collected in the buttocks, are less likely to have these disorders. Researcher Dr Ronald Kahn insisted that not all fat was bad for health. "The surprising thing was that it wasn't where the fat was located, it was the kind of fat that was the most important variable," he said. "Even more surprising, it wasn't that abdominal fat was exerting negative effects but that subcutaneous fat was producing a good effect." No matter what size butt you have, the best thing you can do is keep it moving. Exercise is still the key ingredient to staying healthy.
George Wythe (1726 – June 8, 1806) was a lawyer, judge, prominent law professor and "Virginia's foremost classical scholar." Wythe was one of the seven men from Virginia who signed the United States Declaration of Independence. Wythe served as mayor of Williamsburg, Virginia from 1768 to 1769. In 1779 he was appointed to the newly created Chair of Law at William and Mary, becoming the first law professor in the United States. Wythe's pupils included Thomas Jefferson, Henry Clay, James Monroe, and John Marshall.
by Calder Loth
Senior Architectural Historian for the Virginia Department of Historic Resources. Member of the Institute of Classical Architecture & Art’s Advisory Council.
High on a mountaintop in the Peloponnese, the fifth-century B.C. Temple of Apollo Epicurius at Bassae is among the least known, least accessible, and most intriguing of all Greek temples. (Fig. 1) It is the only Greek temple to have incorporated all three ancient orders in its design: Doric for the exterior, Ionic for the cella or naos, and a single Corinthian column marking the entrance to the adyton or inner sanctum. The 2nd-century A.D. Greek traveler and geographer Pausanias stated that Iktinos, best-known as one of the Parthenon architects, designed the temple, but scholars have found no further evidence to document his attribution. The temple was unknown to James Stuart and Nicholas Revett, so it was not included in their pioneering and highly influential treatise The Antiquities of Athens (1762-1795). It finally received serious study in 1811-12 when the temple was the subject of an expedition that included British architect Charles R. Cockerell and German scholar Karl Haller von Hallerstein. They and their colleagues undertook detailed measurements and drawings, but also plundered the site for artifacts. Exposure to the elements on Mount Kotilion has caused progressive deterioration of the temple’s predominately limestone fabric. In 1987, the entire structure was covered with a canopy supported on a metal framework to provide temporary protection from damaging winds and rain while long-term conservation is undertaken. (Fig. 2) Although this huge tent hinders viewing the temple in context, it has a dramatic sculptural quality of its own. No schedule for the canopy’s removal has been announced, and such protection may need to be permanent. Despite the canopy, it is possible to walk the temple’s perimeter within it.
Most of the thirty-eight Doric columns of the exterior peristyle have survived in situ. (Fig. 3) Two of the columns and sections of the naos walls were reassembled in a program of anastylosis undertaken in 1902-08. Antiseismic scaffolding erected in 1985 included wooden braces clasping the tops of the Doric columns just under the capitals. Although attributed to Iktinos, earthquake damage and settlement have made it difficult to determine whether the temple incorporated the visual refinements found in the Parthenon. Nonetheless, seeing the temple moved Pausanias to write, “Of all the temples in Peloponnese, next to the one at Tegea, this may be placed first for the beauty of the stone and the symmetry of its proportions.” The temple plan illustrates the unique arrangement of the interior, which for clarity I will describe in the present tense. (Fig. 4) Passing through the north portico columns, the pronaos, or vestibule, is entered between two free-standing Doric columns. The pronaos precedes the naos or temple sanctuary. Defining the naos are five spurs or fins projecting from each of the side walls, forming recesses possibly used for shrines. Clasping each spur end is a fluted Ionic column topped by a distinctive capital. On axis at the far end of the naos is a single Corinthian column. Beyond the column is the adyton or inner sanctum where the most sacred ceremonies were performed. The central position of the Corinthian column has led some scholars to conclude that the image of the deity, probably a statue of Apollo, was positioned off axis. A tall opening in the adyton’s left side allowed daylight to illuminate the statue and back-light the column, creating a singularly dramatic effect. A somewhat romanticized view of the temple interior made by Charles Cockerell in 1860 displays the axial placement of the Corinthian column and the flanking Ionic columns that terminated the projecting spurs. (Fig. 
5) Also depicted is the richly sculpted frieze that topped the naos walls. The surviving sections of the frieze were extracted from the ruins by Cockerell and his colleagues during their 1811-12 expedition and sold to the British Museum in 1814, where they are displayed today. The concave abacuses of the Ionic capitals are conjectural since none of the capitals remained in situ. The vaulted ceiling is conjectural as well. Shown also in the image is the off-center statue of a deity, which appears to be a female figure rather than Apollo. Cockerell’s view, however, captures the striking quality of the adyton’s indirect lighting, pouring in from the side opening shown on the plan. Possibly the earliest published image of the distinctive Bassae Ionic capital and its base appeared in a German edition of Charles Pierre Joseph Normand’s Nouvelle Parallèle des Ordres d’Architecture, published in three parts in 1830-36. (Fig. 6) Normand accurately depicted the capital’s arched top, a conspicuous departure from the flattened volute tops found in nearly all other ancient versions of the Ionic capital. He shows no abacus since, as his narrative states, it was not in existence in its original form. Normand admits, however, that the central anthemion or honeysuckle ornament was his own conjecture. The capital itself showed no evidence of ornament either there or on the echinus. Normand’s illustration of the base accurately records its strong curved projection (an exaggerated scotia). Several of these unusual bases remain in place in the temple today. The British Museum holds what is believed to be the only known original fragment of the temple’s Ionic capitals. (Fig. 7) Charles Cockerell salvaged it from the ruin during his 1811-12 expedition and later presented it to the museum. While the fragment is only a portion of a volute, enough is intact to appreciate the bold curve of the top edge. 
We are not told whether Cockerell and his colleagues found more Ionic capital fragments during their venture. Indeed, Haller von Hallerstein’s ca. 1812 drawings, the earliest reliable depictions of the temple, show none of the capitals in place. Consequently, this rare artifact remains the one tangible clue to the singular shape of the Bassae Ionic. The Bassae Ionic has inspired numerous modern versions. Appropriately, Charles Cockerell was perhaps the first to use the order when he applied it to the columns of the portico and side elevations of Oxford University’s Ashmolean Museum and Taylorian Institute, built 1841-45. (Fig. 8) Its use for an exterior was considered somewhat daring since the order was originally an interior order. Cockerell was faithful to the original by avoiding ornaments on the volutes as shown in Normand’s Parallèle. However, he added discreet ornamentation to the abacus and echinus and topped it with an abacus employing concave sides and sharp tips. We can only speculate that he was basing the sharp tips on fragments that he may have seen in the ruin. Alternatively, he may have derived the abacus design from the abacus of the temple’s Corinthian capital. In any case, the architectural details of the pediment are entirely Cockerell’s, including the plaited decoration of the pulvinated frieze, an arresting treatment of an exterior frieze having no ancient precedent. Daniel Burnham devoted as much attention to the decorative details of Washington’s Union Station as he did to the functionality and engineering of this great classical landmark, completed in 1908. This is evident in the terminal’s original main dining room (now a gift shop), which is a festival of Grecian decorations. The room’s walls are divided into a series of bays with recessed panels framed by fluted columns in the Bassae Ionic order. (Fig. 9) The capitals are picked out in gold, green, and red, a color pallet repeated in the entablature and other decorations. 
Burnham also employed the Bassae Ionic for the columns supporting the canopies on the lower track platforms. (Fig. 10) In both places, the capitals are decorated with enlarged anthemion ornaments and egg-and-dart echinuses, details shown in Normand’s Parallèle but not found on the originals. The architectural firm of Zantzinger, Borie, and Medary applied a modified version of the Bassae Ionic for the corner pavilions of the 1931-34 Department of Justice in Washington’s Federal Triangle. (Fig. 11) The capitals are true to the Bassae precedent with their arched tops, but are expressed with parallel volutes rather than volutes having the forward curvature of the originals. Other departures from the original model are the egg-and-dart echinuses and the concave abacuses with their chamfered tips. As noted above, the form or even the existence of original abacuses is uncertain. However, following Normand’s conjecture, the capitals have an anthemion ornament in their centers. It is gratifying when one can discover a creative use of a rare and beautiful classical feature in one’s hometown. Such a find occurs on a small but elegant bank in Richmond’s historic Church Hill neighborhood. (Fig. 12) Appropriately named The Church Hill Bank, the building was designed by local architect Bascom J. Rowlett and opened in 1914. The main entrance is framed by two engaged columns in the Bassae Ionic order, each topped by a seated eagle holding its wings aloft. (Fig. 13) As with other modern versions, the volutes are flat-faced rather than gently curved forward. While Rowlett’s source for the order is not documented, a likely candidate is William R. Ware’s The American Vignola (1903), which illustrates the Bassae capital with a similar thick block for the abacus. The American Vignola was a standard textbook for American architects in the early 20th century. Most scholars contend that the temple’s Corinthian capital is the earliest known use of the Corinthian order. 
The illustration shown here was drawn by J. M. von Mauch for the 1830-36 German edition of Normand’s Parallèle, and is based on field notes and sketches by Haller von Hallerstein of fragments found during his 1811-12 expedition to the site. (Fig. 14) Regrettably, only a few of the fragments survive, preserved in the National Archaeological Museum in Athens. Even so, several parts of the illustration in the Parallèle are conjectural, such as the flaring of the tops of the shaft flutes, since the upper part of the shaft did not survive. The tips of the abacus were missing too, so it is uncertain whether they were pointed or chamfered. Nevertheless, Mauch’s restoration has a distinctive beauty, and it is lamentable that it has inspired so few modern replications. A rare (possibly unique) use of the Bassae Corinthian for an American house appears on the porch of the 1850 Hackerman house, an Italianate mansion on Baltimore’s prestigious Mount Vernon Place. (Fig. 15) The order is employed for both the forward and recessed porch columns as well as for the hall columns of the lavish interior. Designed by the Baltimore architectural partnership of Niernsee and Neilson for Dr. John Hanson Thomas, the house became part of the Walters Art Museum complex in 1985. (Fig. 16) A native of Vienna, Austria, architect John Rudolph Niernsee studied in Prague and settled in Baltimore in 1839. His source for the order was likely the German edition of Normand’s Nouvelle Parallèle des Ordres d’Architecture (1830-36), which included J. M. von Mauch’s plate 78 showing the Bassae Corinthian. Undoubtedly, the most ingenious and informed modern-day reference to the Temple of Apollo Epicurius is the Fellows’ Dining Hall of Gonville and Caius College, Cambridge University. (Fig. 17) Designed by John Simpson and opened in 1998, the room is a reduced version of the temple’s naos, complete with the spurs fronted by their Ionic order, and the single Corinthian column on axis. 
All of the elements in the room are richly decorated with Grecian-style polychrome ornamentation that sets off the custom-designed Grecian-style furnishings. The Ionic capitals are true to the originals by lacking the anthemion ornaments added by Normand. Simpson employs a square abacus for the capitals with detailing echoing that on the Corinthian capital abacus. (Fig. 18) The focal point of Simpson’s Fellows’ Dining Hall is the single Corinthian column following the precedent of the original. The polychromy and gilding emphasize the special beauty of this elegant order. (Fig. 19) The only liberty taken with known features of the capital is the insertion of a double row of compressed acanthus leaves at its base in place of the single row of leaves shown in Haller von Hallerstein’s drawing. Since Haller was working from fragments, it’s possible that an extra row was missing and therefore he didn’t draw one. John Simpson’s strikingly handsome room is a clear demonstration that the Temple of Apollo Epicurius at Bassae still offers design resources appropriate for adaptation in contemporary classical projects. It is important for such notable works of the past to continue to inform designs of today. The author is grateful to Dr. George Skarmeas and his wife Dominique Hawkins for generously taking him to the temple in 2007. Johann Matthaus von Mauch & Charles Pierre Joseph Normand, Parallel of the Classical Orders of Architecture, compiled and edited by Donald M. Rattner (Institute for the Study of Classical Architecture, Acanthus Press, 1998). Alexander Tzonis & Phoebe Giannisi, Classical Greek Architecture: The Construction of the Modern (Flammarion, Paris, 2004). Kali Tzortzi, The Temple of Apollo Epikourios: A Journey through Time and Space (Ministry of Culture, Committee for the Preservation of the Temple of Apollo Epikourios at Bassai, 2001). David Watkin, The Life and Work of C.R. Cockerell (A. Zwemmer Ltd, London, 1974). 
Epicurius (or Epikourios) was a reference to Apollo as a god of helping, a designation resulting from the belief that Apollo helped deliver the area from the plague. Quoted in Kali Tzortzi, The Temple of Apollo Epikourios: A Journey Through Time and Space (Ministry of Culture, Committee for the Preservation of the Temple of Apollo Epikourios at Bassai), p. 12. Although the temple likely had a statue of a deity in this position, no fragments of one were found during Cockerell’s expedition. Johann Matthaus von Mauch & Charles Pierre Joseph Normand, Parallel of the Classical Orders of Architecture, compiled and edited by Donald M. Rattner (Institute for the Study of Classical Architecture, Acanthus Press, 1998), plate 33 text. David Watkin, The Life and Work of C.R. Cockerell (A. Zwemmer Ltd, London, 1974), p. 13. A plaster cast of this fragment was given by the Metropolitan Museum of Art to the Institute of Classical Architecture & Art and is now displayed at the ICAA headquarters in New York City. A proposed enclosure of the track platforms will result in the removal of the canopies and their supporting columns.
Green coral goby The green coral goby, Gobiodon histrio, is a member of the family Gobiidae of the order Perciformes. It is native to Indo-Pacific waters. Its commercial trade names are blue-spotted coral goby and broadbarred goby. In the wild it hides among coral structures. This fish produces a toxin that deters predators. When disturbed, the fish’s mucus contains compounds that can inhibit the locomotion of other fish. At high enough concentrations, the toxin causes the predator to lose equilibrium and tip over. The goby takes part in a mutualistic relationship with the coral Acropora nasuta. When the coral is damaged by toxic Chlorodesmis algae, it produces a compound that attracts the fish. The fish eat the alga, and this enhances their toxicity. - Gobiodon histrio, The Fish Information Service, retrieved February 13, 2010. - London Zoo. - Schubert, M.; Munday, P. L.; Caley, M. J.; Jones, G. P.; Llewellyn, L. E. (2003). Environmental Biology of Fishes 67 (4): 359. doi:10.1023/A:1025826829548. - Dixson, D. L.; Hay, M. E. (2012). "Corals Chemically Cue Mutualistic Fishes to Remove Competing Seaweeds". Science 338 (6108): 804–807. doi:10.1126/science.1225748. PMID 23139333.
José Julián Martí Pérez was born on January 28, 1853. Today, Cuban blogger Yoani Sanchez tweeted a melancholy but accurate reflection on his birthday: "Today is the 158th anniversary of the birth of Jose Marti & the Cuba he dreamed of appears farther away than ever..." Dedicated to prisoners of conscience in Cuba and around the world: Wall Flower, written and performed by Peter Gabriel. Cuba in 2011 under the Castro brothers may be even farther away from the Cuba of the poet patriot's dreams in 1869. Consider what happened today, January 28, 2011, as reported by the Associated Press: [Guillermo] Farinas and the other dissidents had sought to place a wreath at a monument to Cuban independence leader Jose Marti on the 158th anniversary of his birthday. "About 20 of them went out to place the wreath," Farinas' mother, Alicia Hernandez, told The Associated Press. "They had only gone two blocks when they took them away." Earlier Friday, Farinas told the AP he would ignore the warning. "They told us they wouldn't let us assemble in groups of more than three people," Farinas said. "If they want to detain me, that's their problem." |Alejandrina Garcia on hunger strike| On the same day a courageous woman put her life on the line to save her husband, a prisoner of conscience rotting in prison since March 18, 2003 for his ideals: Also Friday, the wife of imprisoned dissident Diosdado Gonzalez announced the start of a hunger strike demanding his release. Alejandrina Garcia, one of the founding members of the "Ladies in White" opposition group, said in a phone interview from her home in Matanzas that she would only drink water until her husband is out of prison. Cuba promised to free all 52 remaining opposition figures from a 2003 crackdown on dissent, following a July deal with the Roman Catholic Church. 
Just 11 remain behind bars, including Gonzalez. Church officials have said they are optimistic the government will soon make good on its promise, but there has been little word on the men's fate since the passing of a November deadline by which all were supposed to be out. The last 11 dissidents have refused to accept exile in Spain, as most of the others did, the apparent reason for the delay. Cuba today is the antithesis of the dream shared by José Martí and the founders of the Cuban Republic, yet there remains the courage of men like Diosdado and the other 10 refusing exile and remaining in prison, along with courageous women such as Alejandrina Garcia risking all for the freedom of a loved one. They bring to mind two reflections from the martyred founder of Cuba: "When there are many men without decorum, there are always others who themselves possess the decorum of many men. These are the ones who rebel with terrible strength against those who rob nations of their liberty, which is to rob men of their decorum. Embodied in those men are thousands of men, a whole people, human dignity."- José Martí Alejandrina Garcia, one of the founding members of the "Ladies in White" opposition group, sits in her home in Matanzas, Cuba, Friday, Jan. 28, 2011. Garcia began a hunger strike demanding the immediate release of her husband Diosdado Gonzalez. Although Cuba promised to free all 52 remaining opposition figures from a 2003 crackdown on dissent, following a July deal with the Roman Catholic Church, 11 remain behind bars, including Gonzalez. (AP Photo/Javier Galeano) "The struggles waged by nations are weak only when they lack support in the hearts of their women. But when women are moved and lend help, when women, who are by nature calm and controlled, give encouragement and applause, when virtuous and knowledgeable women grace the endeavor with their sweet love, then it is invincible." - José Martí *Apologies to Peter Gabriel for being inspired and paraphrasing a lyric.
If we define Wilderness as a place where humankind has no influence on the species or biological processes within it, one has to conclude that there is no Wilderness left on Planet Earth. Pollution, human habitat encroachment, and global warming have now reached every corner of the planet with such a pronounced effect that no ecosystem has been left untouched. Herein lies the paradox of the US National Park Service, whose mission it is to preserve all aspects of our parks’ scenic beauty and biodiversity including the processes that sustain them. While this policy has strong historic roots and broad public support, it is increasingly unsustainable in the face of global environmental change and changing public attitudes about the relative importance of resource preservation. This is the dilemma that William Tweed sets out to ponder on a 240-mile backpack trip along the John Muir Trail and the High Sierra Route. A 30-year Park Service ranger and Chief Park Naturalist at Sequoia and Kings Canyon National Parks before his retirement in 2006, Tweed describes the wondrous views and natural history of the region with a rich eloquence that has you pining to see what’s around the next bend and over the next hill. While the story of his journey alone makes this book worth reading, Tweed explains the philosophical origins of the National Park Service, which will celebrate its 100th anniversary in 2016, and its increasingly out-of-touch mandate to preserve our national parks as they were for future generations when global ecological forces such as climate change make this impossible. Plant, animal, and fish species are dying, migrating, or adapting around us, and the nineteenth- and twentieth-century notions that humans can control these processes are historically out-of-date. Our relationship to nature has also undergone widespread change with fading social interest in the natural world. 
Rather than viewing nature as a place to escape and recuperate from the overwhelming and rushed pace of urban life, park visitors bring it with them, with phones and a myriad assortment of electronics to keep themselves amused when they’re in the backcountry. Increasingly, today’s hikers view hiking trails as race courses rather than retreats, seeking to establish Fastest Known Times by hiking 30-mile days with ultralight gear, rather than experiencing the landscape in a Muir-like state of reverence. How do such changes affect the future of the National Park Service, Tweed wonders. Should it detach itself from trying to control natural processes? Does it need to change its visitor policies to maintain relevancy in a world where there are so many other amusements for people instead of the outdoors? This is certainly not just a US problem, but a challenge for all national park systems worldwide. “The answer to these questions lies with those of us who care about these special places,” writes Tweed optimistically. I enjoyed reading Uncertain Path – actually I couldn’t put it down – because it made me consider the futility of trying to preserve our national parks and wilderness areas as islands to themselves. When viewed this way, it’s no wonder that our wild lands cannot credibly be called Wilderness Areas anymore. What will their future be, when they can no longer play the part for which they were cast?
The Murzuq Basin is situated in the southwest of Libya and was initiated during the Paleozoic. It forms a large intracratonic basin, straddling the boundaries with Algeria and Chad. The basin is filled with sediment ranging in age from the Cambrian to Quaternary, with a maximum total thickness of more than 3,000 meters in the central part. Location of major sedimentary basins of Libya. The potential reservoirs in the Murzuq Basin are mainly siliciclastics that include the Memouniat, Hawaz, Acacus and Tadrart Formations. The Ordovician sandstones of the Hawaz (Lower-Middle Ordovician) and Mamuniyat (Upper Ordovician) Formations host reservoirs with over 5 billion barrels of oil equivalent in more than 50 separate accumulations across a broad region from the Murzuq Basin of SW Libya to the Ahnet Basin of central Algeria. The Silurian Tanzuft Shale is the major stratigraphic target in terms of source rock in southern Libya and Algeria. The other possible source rock is the Devonian Uennin organic-rich shale located at the basin center (Meister et al., 1991). North-south cross section of Murzuq Basin Friday, February 15, 2013
Cartesian equation: a(x + a)(x² + y²) = k²x²

Polar equation: a(r cos(θ) + a) = k²cos²(θ)

René François Walter, Baron de Sluze, was an important man in the church as well as a mathematician. He contributed to the geometry of spirals and the finding of geometric means. He also invented a general method for determining points of inflection of a curve.
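The polar form above follows from the Cartesian form by substituting x = r cos(θ) and y = r sin(θ) and dividing through by r², since x² + y² collapses to r². A quick symbolic check of that substitution, sketched here with the SymPy library (not part of the original page; variable names are illustrative):

```python
import sympy as sp

a, k, r, theta = sp.symbols('a k r theta', positive=True)

# Polar substitution: x = r cos(theta), y = r sin(theta)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Cartesian form of the curve, written as an expression equal to zero
cartesian = a * (x + a) * (x**2 + y**2) - k**2 * x**2

# x^2 + y^2 simplifies to r^2, so dividing by r^2 should leave the polar form
polar = sp.simplify(cartesian / r**2)

# Expected polar form: a(r cos(theta) + a) = k^2 cos^2(theta), again as expr = 0
expected = a * (r * sp.cos(theta) + a) - k**2 * sp.cos(theta)**2

# The two expressions agree identically
assert sp.simplify(polar - expected) == 0
```

Dividing by r² discards only the degenerate solution r = 0, the pole itself, so the two forms describe the same curve away from the origin.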
The ‘Athirathram’ ritual, which is considered to be the world’s oldest ritual, is underway near Bhadrachalam in Khammam district. The yagam began on April 21 and will conclude on May 2. Athirathram is mainly performed for the welfare of the people in the region. The previous Athirathram ritual was held in Kerala by the Namboodiri Brahmins, who have preserved this tradition. As a group of priests continue to perform rituals, the devotees who sit and watch the holy rites will later make pradakshinas around the homa kundam. Legend has it that Athirathram was performed even by King Dasaratha, the father of the epic hero Lord Rama. Scholars say this holy ritual is mentioned in the Ramayana, the epic written by the sage Valmiki. With information on this ritual being publicized by some channels, lakhs of people thronged the place and took part in the yagam on its first day. A 62-year-old pandit, Harinatha Sarma, whose son is performing the Athirathram in Khammam district, said that only a few persons can perform Athirathram. As part of the ritual, the Sri Sitarama Kalyana (wedding ritual of Lord Rama and Sita) was performed on the eighth day of the yagam, in which a large number of people took part. Even on Saturday, as heavy rain lashed Yetapaka where the Athirathram is underway, a sea of devotees watched the holy rites braving the heavy downpour. (Phani)
The culture history of the People's Republic of the Congo Hartford Web Publishing is not the author of the documents in World History Archives and does not presume to validate their accuracy or authenticity nor to release their copyright. - Congolese Water Spirits - By Misty Bastian, 5 January 1996. Reproduces the myth of the origin of female water spirits' white skin. - France Moves to Revamp Public Reading in Congo - Panafrican News Agency (Dakar), 23 March 2001. France and Congo Thursday signed a financial agreement to revamp reading among youth by creating 11 pilot libraries in each chief town of the country's ten regions [principal language not specified, but presumably French]. The social and economic crisis bedevilling Congo has caused a decline in school attendance and a steady rise in illiteracy. - Congolese Welcome Swahili As Official - By Juakali Kambale, Inter Press Service (Johannesburg), 9 August 2004. News that Swahili has been adopted as one of the African Union's working languages has been well received in the Democratic Republic of Congo (DRC), where nearly half the population speaks it, writes it, or
Intraverbal- A verbal operant first defined by B.F. Skinner in his book “Verbal Behavior”. An intraverbal is a type of language that involves explaining, discussing, or describing an item or situation that is not present, or not currently happening. Examples include: answering the question “How old are you?”, filling in the missing words “At the zoo last month, we saw some _____, _______, and a ______”, or singing songs “Sing the alphabet song”. Intraverbals can often be quite challenging and time-consuming programs to teach during ABA therapy. Even therapists or parents who don’t know the verbal operants and aren’t sure what an “intraverbal” is usually want their child/client to perform a variety of intraverbal skills. Common questions/complaints I get when a child lacks an intraverbal repertoire include: “She only uses language to ask for things, she isn’t conversational” “He can greet his teacher by name every morning when I take him to school, but if I just randomly ask him: What’s your teachers name? he won’t say anything” “He can sing the entire Barney song (“I love you”) while watching the videos, but if I ask him to sing it during bath time he just looks at me” “She doesn’t participate when we play The Question Game during dinner. We all take turns answering questions like “Name a pink animal”, “Sing your favorite song”, and “What should we have for dessert”. I know she’s verbal, why does she refuse to answer these questions?” Teaching a child to mand (request) is extremely important, particularly when beginning an ABA program, because manding is how the child can communicate wants and needs to the outside world. Unfortunately, in some ABA programs a child can get stuck at only using language to mand or tact (label). This would look like a child who only communicates with others to request ("popcorn please") or to label ("red car"). The child might spend their entire day manding and tacting, and to the parents and therapists there is no problem! 
The child is talking and communicating…clearly the ABA was effective. Eventually someone notices that when asked direct questions, the child won’t respond. Or when placed with peers, the child wanders away or just stares blankly at them. At that point, either the staff and family begin to think the ABA “isn’t working”, or the child is blamed for being difficult or stubborn and just choosing not to talk. An ABA program should teach all components of language, not just the ability to request or label. Most conversation consists of a variety of intraverbals, and if you want your child/client to move past talking and really begin communicating, then intraverbals are the way to teach that. So how and when do you teach intraverbals? - Firstly, intraverbals are a verbal skill. It would be inappropriate to add intraverbal programs to the curriculum of a nonverbal child. If you are able to teach the child language, then first work on building mands and tacts before introducing intraverbals. - Strong receptive skills can also help a child learn intraverbals, because you can begin teaching by having the child receptively describe an item (Give me the one that is a utensil), and then you can remove the tangible item and present the demand as an intraverbal (Name a utensil). Similarly, you can also transfer a mand or tact to an intraverbal response by first teaching the target response as a mand or tact with the item present, and then removing the item and teaching the target response as an intraverbal. You can bring the tangible item back out as a prompt, but you would then need to fade that prompt in order for the target response to be a true intraverbal. I tell this to therapists all the time: If you are holding up a card or object, you are not teaching an intraverbal. - Be sure to minimize student error. I would suggest using Errorless Teaching, where the child is not allowed to practice making errors. 
The therapist is quick to provide full prompts, and then fade out those prompts systematically. Especially with intraverbal programs, it isn’t uncommon to see all sorts of lovely escape behaviors (including aggression) to get the therapist to back off. This can happen even with the most calm, sweet, “Peace & Love” type of client. Intraverbals are hard. Rote responding, studying the room, or looking at the therapist’s face will not reveal the answer to an intraverbal question. Understand the difficulty of intraverbal questions before you begin teaching them, and be prepared that you may need to use new and varied reinforcers, and provide lots of prompts to help the child contact success and stay motivated. - Here are a few Do Nots: Do not begin teaching intraverbals too early, or at too high of a difficulty level. Do not completely avoid teaching intraverbals ...they're the building blocks of conversation. Do not begin teaching intraverbals until echolalia is under control. Otherwise, the child will just repeat your question or statement, and become frustrated when that isn’t the right answer. Do not reinforce escape behaviors that pop up during intraverbal programs. - The simplest types of intraverbals are usually songs or fill-ins. This would include things like: “Ready, set, (go)”, “1, 2, (3)”, “A cow says (moo)”, “I love (you)”. You may be saying to yourself: Oh, my child already exhibits some of these fill-ins or my child can sing songs. That must mean they’re ready for intraverbal programs! Not necessarily. With Autism, it is common that skills can present in a splintered fashion (the child can count up to 100 objects, but can’t rote count to 5). So this is why careful assessment of the child as well as looking closely at their programs is necessary before teaching intraverbals. When in doubt, seek the help of a qualified BCBA. Here is a list of some advanced intraverbal goals. 
Depending on the needs of your child/client, some, all, or none of these programs could be added into your current ABA program. Start simply... build up to complex: Meow says a ____/Ribbit says a _______ (Reverse fill-ins) Tell me something that flies in the sky, it’s an animal, and it says “chirp” or “tweet” (Intraverbal Feature Function Class) Socks and ________/Knife, spoon and ______ (Associations) You use a towel to _______ (Functions) Where do you bake cookies?/What can you kick? (WH questions) Is a banana a vegetable? (Yes-No questions) Name something that does NOT have a tail. (Negation)
Conservation Checklist of the Trees of Uganda - COMPLETED Uganda has a wide diversity of plant species and quite varied habitats: open water systems, wetlands, tropical high moist forest, dry forest, woodland, thicket, bushland, steppe-like vegetation, grassland to semi-arid habitats. The distribution of plants and levels of species richness and endemism vary across these various habitats, with moist tropical forests the richest and most highly diversified. In Uganda there are a number of threats to plants generally, and trees in particular. Plant resources in Uganda now face very heavy use and extraction for various uses, partly because of the rapidly increasing population, lack of employment and limited alternative means of livelihood. Over 96% of Uganda’s population depends on biomass for fuel. By far the most important cause of vegetation change is subsistence farming. There is an appreciable number of areas outside the protected area network that are important for tree and other biodiversity conservation in Uganda. Availability of accurate and up-to-date information is essential and requisite to proper conservation planning. In this vein, we have tried to make a start by making a list of all the species of tree occurring in Uganda, and by indicating which of these have restricted distribution areas. Such restricted species may be threatened by either habitat destruction or more specific threats, and we have highlighted and illustrated such species. Product development, bio-trade and bio-prospecting all need to be guided by information on the status, trends and patterns of the very resource targeted. We hope that this checklist will be useful for both the future development of Uganda, and for the conservation of its trees. For all 827 tree species, we checked distribution area, from both literature and herbarium specimens. For those species with more restricted distribution we georeferenced the specimens from relevant herbaria. 
From the georeferenced files, and our estimates of population sizes and threats, we made global conservation assessments. For every species with really restricted distribution, and those with a conservation category of Vulnerable or worse, we prepared species information sheets which include maps, illustrations, data on how we arrived at the conservation assessment, and notes on local uses and local names. Of the 827 species, four are Critically Endangered, four Endangered and four Vulnerable. There are, in fact, many species under threat in Uganda; but many species have a wide distribution, and because of such distribution, they are less likely to be flagged up as being in danger at global level. If a species has a small distribution area, any threat will significantly affect such a species. On the other hand, if a species has a large distribution area, it will take quite a while before such a threat has an impact; or, more worryingly, before such a threat becomes clear. A species may be slowly disappearing over its entire range, but if such a range encompasses twelve or twenty different countries, such a slow threat may only become clear when it is too late. What can we do about this? We can communicate better and more with our close and far neighbours, of course; and in a way, this checklist is a step in such a direction. Project partners and collaborators Kalema, James (Makerere University, Kampala)
Yucca smalliana Fern. - Spanish Bayonet, Adam's Needle
Family - Agavaceae
Stems - Essentially absent. Flowering stem to +2.5m tall.
Leaves - All basal, linear, to 4cm wide, 1m long, margins often appearing shredded with coarse, curly fibers, spine-tipped. Leaves of the stem reduced to scales. Leaves of the flowering stem.
Inflorescence - Single large panicle, 2-3m tall, axis somewhat pubescent. Pedicels 1.1cm long, densely pubescent.
Flowers - Petals 3, acute to acuminate, white, glabrous, ovate to ovate-lanceolate, to 4cm long, 2.5cm broad, succulent. Sepals 3, white, acute to acuminate, to +4cm long, 2cm broad, elliptic, glabrous, succulent. Stamens 6. Filaments clavate, 2cm long, densely pubescent, slightly bent. Anthers small, yellow. Ovary superior, greenish-white, puberulent, 3-locular. Placentation axile. Pistil +2cm long. Stigma 3-lobed, each lobe sometimes divided again and appearing as six shallow lobes. Fruit a 6-angled capsule to +5cm long, +3cm in diameter. Seeds many, flat, +6mm broad.
Flowering - May - August.
Habitat - Cultivated but escaping to various localities.
Origin - Native of the southern U.S.
Other info. - This species can be found cultivated throughout Missouri and commonly escapes. It is very showy when in bloom and is easily recognizable. Photographs taken off Hwy 60 near Van Buren, MO., 6-5-04.
Some 20% of the ROK's land area is arable, with about 70% of it sown in grain, rice being the chief crop. In 1965, agriculture (including forestry and fishing) contributed nearly 50% to GNP, but by 2001 only accounted for 4.4%. Double-cropping is common in the southern provinces. Rice production in 2000/01 was 5,290,000 tons. Barley production in 1999 stood at 331,000 tons; potatoes, 562,000 tons; and soybeans, 145,000 tons. Despite increased yields due to mechanization, the use of hybrid seeds, and increased employment of fertilizers, the ROK runs a net deficit in food grains every year. In 2001, imports of cereals, mostly from the United States, amounted to $1,510 million, consisting almost entirely of wheat and corn. Virtual self-sufficiency has been attained in rice production, but at a cost of nearly $2 billion per year in direct producer subsidies. In 2001, the ROK's agricultural trade deficit was $6.67 billion, fifth highest in the world. Hemp, hops, and tobacco are the leading industrial crops. The ROK was the world's leading producer of chestnuts in 1999. The orchards in the Taegu area are renowned for their apples, the prime fruit crop; output in 1999 was 491,000 tons. Pears, peaches, persimmons, and melons also are grown in abundance. About two-thirds of vegetable production is made up of the mu (a large white radish) and Chinese cabbage, the main ingredients of the year-round staple kimchi, or "Korean pickle." Until the Korean War, tenant farming was widespread in the ROK. The Land Reform Act of June 1949, interrupted by the war, was implemented in 1953; it limited arable land ownership to three ha (7.4 acres) per household, with all lands in excess of this limit to be purchased by the government for distribution among farmers who had little or no land. By the late 1980s, farms averaged 0.5–1 ha (1.2–2.5 acres). 
The New Village (Saemaul) Movement, initiated in 1972, plays a major role in raising productivity and modernizing villages and farming practices.
The term human resources was originally used in political economy and economics to refer to labor, one of several inputs (“factors of production”) necessary for production of wealth. Today, many corporations and businesses use the term “human resources” to refer to their employees and the portion of their organization that is responsible for personnel management. A nation’s workforce is one of the factors that determines the strength of its economy. A government can improve the quality of its workforce by maintaining a good public health system, ensuring an adequate wage, and providing education for its citizens. The globalization of the world economy has resulted in the migration of millions of workers from poorer countries to developed countries that offer more job opportunities. The billions of dollars in remittances sent every year by these workers to their relatives at home contribute significantly to the economies of their home countries. Modern personnel management theory no longer regards workers as interchangeable components, but as a crucial productive resource that creates the largest and longest lasting advantage for an organization. Human beings contribute much more to a productive enterprise than "work;" they bring their character, ethics, creativity, and social connections. “Human capital,” the knowledge, skills, and motivation of employees, increases through experience and education, and through the creation of a “corporate culture” that encourages loyalty and the full utilization of talent. In economics, "factors of production" are the resources employed to produce goods and services. Though the term “factors of production” did not come into use until the late 1800s, the earliest economists identified human labor as one of the elements essential to the production of wealth. As Europe moved away from feudalism and began to industrialize, the value of individual human beings began to be recognized. 
The “physiocrats,” a group of French economists who produced one of the earliest well-developed theories of economics, believed that the wealth of nations was derived solely from the practice of agriculture and the development of land. Their theories, which were most popular during the second half of the eighteenth century, emphasized productive work as the source of national wealth. They identified the four classes of productive people as farmers, artisans, landlords and merchants, but believed that only agricultural labor produced real wealth. The first modern school of economics, classical economics, which began with the publication of Adam Smith's The Wealth of Nations in 1776, identified three “components of price”: land, labor and capital stock (man-made goods such as machinery, tools and buildings, which are used in the production of other goods). Neoclassical economics continued to distinguish land, labor, and capital as the three essential components of production, but developed an alternative theory of value and distribution. As North America and Europe became industrialized, economists concluded that not just the amount of labor, but the quality and type of labor, should be included in any analysis of the production of wealth. John Bates Clark (1847–1938) and Frank Knight (1885–1972) introduced a fourth “factor of production”: the role of coordinating or organizing the other three factors in the most effective manner, which they termed “entrepreneurship” or “management.” This type of human input was seen as the final element determining the degree of success of any form of production. In a market economy, individual entrepreneurs combine the other factors of production, land, labor, and capital, in an innovative way to make a profit. In a planned economy, central planners decide how land, labor, and capital should be used to provide the maximum benefit for all citizens.
Economic theory identifies several types of “capital.” “Financial capital” is the money invested in operating a business, or in goods that can be used to produce other goods in the future. Fixed capital includes machinery, work sites, equipment, new technology, factories, roads, office buildings, and goods that contribute to economic growth. Working capital includes the stocks of finished and semi-finished goods that will be consumed or made into finished consumer goods in the near future, and the liquid assets needed for immediate expenses linked to the production process, such as salaries, rent, and interest on loans. Contemporary economic theory also recognizes human capital, the value and quality of the workforce involved in production. Human capital refers to the ability of people to perform labor so as to produce economic value. Early economic theory considered the labor pool to be homogeneous and fungible (easily interchangeable); modern concepts of labor account for the value of education, training and on-the-job experience, and also for the investment required to produce a highly effective employee. Physical labor is performed most efficiently by a work force that is in good health, well nourished and enthusiastic. Human capital is acquired through education and training, both formal and through experience. Intellectual capital refers to the quality and number of workers who are trained in fields such as science, physics, computer engineering, and information technology. Investment in human capital takes place on several levels. A national government can maximize its human capital by ensuring a living wage for its population, providing good health care, and investing in education. A family invests in human capital by nourishing its children well, giving them opportunities for intellectual growth, and providing them with an education so that they can qualify for a well-paid job.
A company invests in human capital by training and educating its employees and providing them with incentives to remain with that company instead of seeking employment elsewhere. Investment in human capital involves the consumption of goods and services such as food, shelter, health care, and education. The overall health of a population directly affects its human capital. A work force suffering from malnutrition or debilitating diseases such as dysentery or malaria is not able to be as productive as a workforce consisting of healthy, vigorous adults. An unchecked epidemic can eliminate large portions of the work force; for example, plague epidemics during the Middle Ages altered the economic landscape of Europe. The economies of several African nations have suffered because AIDS has decimated the numbers of both trained professionals and physical laborers, and left large numbers of orphaned children unable to afford an education. The decision to vaccinate children against chicken pox was motivated partly by the fact that so much productivity was lost when parents had to stay at home for a week to care for their sick children. One of the responsibilities of a national government is to promote the good health of its citizens so that its economy can prosper. In order to attract the information and technology businesses that are at the forefront of modern economic expansion, a region must offer an adequate supply of highly-educated workers. Allocating resources to fund education is another responsibility of a national government seeking to increase economic prosperity. Many corporations collaborate with universities and fund scholarships, grants and research programs in order to ensure a supply of trained professionals. During the Meiji era (1868–1912), the Japanese government sent many students to study technology in the United States and Europe, and brought Western experts to teach in Japan. 
The students educated under these programs provided the skills and knowledge for Japan’s rapid economic expansion during the 20th century. After its independence in 1947, the Indian government founded and subsidized a series of Indian Institutes of Technology that admit 5,500 highly qualified students every year. At first many graduates of these Institutes emigrated to more developed countries, but today 70 percent of them remain in India, and many large corporations have now established research and computer technology facilities there. Though people have always left their homes to seek better economic opportunities in other countries, economic and demographic inequalities and expanding communications and transportation infrastructures have increased cross-border migration dramatically over the last three decades. Between 1985 and 2005, the number of international migrants in industrialized countries more than doubled, from 55 million to 120 million. In 2005 there were 191 million international migrants worldwide. The human capital of poor and developing nations is contributing to production in wealthier nations. In addition, many who receive a college degree or professional training in one country live and practice their professions in another country where there is greater economic opportunity. This movement of intellectual capital from the country which invested in the education is referred to as “brain drain.” Governments of developing nations often claim that developed nations that encourage immigration or "guest workers" are appropriating human capital that is rightfully part of the developing nation and required to further its growth as a civilization. Appropriation of human capital is equated with the exploitation of a country’s natural resources. 
The United Nations supports this point of view, and has requested significant "foreign aid" contributions from developed nations to offset the loss of human capital so that a developing nation does not lose the capacity to continue to train new people in trades, professions, and the arts. Cash flows from migrants back to their home countries (more than $300 billion in 2006) now far exceed direct aid flows from donor nations (about $104 billion in 2006) or foreign direct investment (about $167 billion in 2006). The remittances sent by migrant workers to relatives in their home countries can be considered a benefit of investment in human capital. The World Bank estimates that recorded remittance flows worldwide added up to $318 billion in 2007, and that informal, unrecorded remittances made the amount much larger. According to the World Bank's Migration and Remittances Factbook 2008, the top four recipient countries of migrant remittances in 2007 were India ($27 billion), China ($26 billion), Mexico ($25 billion) and the Philippines ($17 billion). Industrialized and oil-producing countries are the main source of gross remittances; the United States is the largest source, with $42 billion in formal outward remittance flows in 2006, followed by Saudi Arabia, Switzerland, and Germany. The host country benefits from the productive labor of the guest workers and from their consumption of goods and services while in the host country, but the economy of the home country benefits from the influx of foreign currency. Money from remittances is often used to purchase homes and to start small businesses in the home country. A certain population size is necessary to maintain and expand production, as well as to consume the goods and services produced. Most developed and many developing countries have low birth rates; low birth-rate countries account for more than one-third of the world's population. 
In 2006-2007, 50 countries had a birth rate of less than two children per female. In the near future, these countries will need to draw on immigrants from countries with high birth rates to maintain their current productivity; if emigration from those countries slows, there may be a shortage of human capital. Today, many corporations and businesses use the term “human resources” to refer to their employees and the portion of their organization that is responsible for personnel management. The field of human resources includes administrative functions such as interviewing, hiring and firing employees; record keeping; creating and maintaining job descriptions; deciding compensation and payroll policies; implementing health insurance and benefit plans; measuring and evaluating performance; training, education and career development; employee relations; and resource planning. It is the responsibility of human resource managers to conduct these activities in an effective, legal, fair, and consistent manner. The objective of human resources is to maximize the return on investment from the organization's human capital and to minimize financial risk. The field draws upon concepts developed in organizational psychology. The modern concept of human resources began in reaction to the scientific management (Taylorism) of the early 1900s, which examined the precise movements of an individual doing a particular job and attempted to modify them to improve labor productivity. In the 1920s, psychologists and employment experts in the United States started the human relations movement, which viewed workers in terms of their psychology and compatibility with a company, rather than as interchangeable parts of a machine. This movement, which grew through the middle of the twentieth century, emphasized the roles of leadership, cohesion, and loyalty in organizational success.
Beginning in the 1960s, this view was increasingly challenged by more quantitatively rigorous and less "soft" management techniques. The trade union movement, especially in heavily unionized nations such as France and Germany, encourages the use of detailed job descriptions that identify the processes of production in a particular industry, outline their sequence and interaction, and define and communicate the responsibilities and authorities of workers in each position. A strong social consensus on political economy and a good social welfare system are thought to facilitate labor mobility and to make the entire economy more productive by making it easy for labor to move from one enterprise to another. Modern analysis emphasizes that human beings are not "commodities" or "resources," but creative and social beings. It is acknowledged that human beings contribute much more to a productive enterprise than "work"; they bring their character, ethics, creativity, social connections, and in some cases even their pets and children, and alter the character of a workplace, creating a unique “corporate culture.” A successful corporate culture generates employee loyalty and inspires individual employees to fully invest their talents in the company. New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution; credit is due under the terms of this license to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation. Note: Some restrictions may apply to the use of individual images, which are separately licensed.
We live in a stressful society with constant demands. Whether it is trying to get to work, catching the final minutes of your son's or daughter's game, hurrying to that last sale, or getting through rush hour traffic, we are always on the go. Almost everything we do on a day-to-day basis involves stress. We wake up, rush to work, hurry home, eat and, hopefully, sleep. Throughout our busy day our bodies send us reminders to slow down. These reminders may be as simple as being tired, moody or achy. But since we are moving at such a hurried pace, we tend to ignore the signals our body sends, until we receive the final signal: pain. We even try to ignore pain because we just don't have the time, energy or means to deal with it. We try to ameliorate the pain by taking pain medication. That is not to say that taking pain medication is bad, but we must realize that it reduces the pain only temporarily and by no means takes care of the problem. Pretty soon we have to take more and more pills because the pain keeps getting worse and worse. Oftentimes we don't realize that the pain comes back because it is not being treated; it is being masked. We need to focus on the source of the pain. We have pain because it is a communicator between our brain and the rest of our body, warning us that something is wrong. It is the ache in your stomach when you have the flu; it is the feeling in your finger when you get a paper cut; pain signifies that something is wrong with our bodies. When we have an injury to a certain part of the body, a sprained ankle for example, the body treats the injury as if it were the most severe injury possible, no matter what the situation is. This triggers the inflammatory process, which causes histamine to be released, which in turn sends a pain signal to our brain warning it of the injury.
The brain then relies on help from other body systems; in this case it stimulates the muscles surrounding the ankle to form a splint around the joint in order to keep it from further injury. We hinder this process by injecting cortisone or taking painkillers, muscle relaxants and other means of temporary relief. The temporary relief allows us to continue to use the ankle without fixing the problem, leaving it prone to further injury that leads to even more pain. Pain medication is appropriate during recovery from an injury but should not be depended on to treat a problem. This scenario is true not only for ankle sprains but for problems at all levels of the body, ranging from heartburn when eating spicy foods, to stomach and other internal pains, to musculoskeletal and sports injuries. We look for the quick fix without considering long-term consequences or treatments. Our bodies are sending us a message: take time out of your busy lifestyle to acknowledge the pain, and stop hiding it; instead, try to resolve the problem. If you have muscular or bone aches and pains, go to your family physician and ask whether he or she is going to help fix the problem and not just ease or hide the pain. If they are unable to help you, ask them to help you find somewhere you could go to get the best results for your condition. Most conditions relating to muscular or joint pain that are neither traumatic nor life-threatening can be handled by chiropractors and physical therapists. If you have a fracture or need surgery for advanced musculoskeletal injuries, you can see an orthopedist. For most other conditions involving internal issues that cause pain, it is necessary to see either your family physician or a specialist for the painful area. Whatever you do, listen to your body and get the appropriate help so you can live a pain-free life.
Experts question soil under Green Bay bridge GREEN BAY — Crews weren't able to rest the Leo Frigo Memorial Bridge's steel structure on sturdy limestone bedrock, which some geologists and engineers say may have contributed to the now-closed Green Bay span's sinking pavement. Wisconsin highway officials say that's not the case, blaming steel corrosion at ground level as the likely culprit that forced the indefinite closure of the bridge, which carried about 40,000 cars daily over the Fox River. Work starts Monday to reinforce the bridge, but officials say it could be months or longer before it's usable again. Some experts say underground soil that was sturdy enough to support the bridge when it opened in 1980 might have deteriorated in subsequent years, the Green Bay Press Gazette reported Sunday. The bridge was built on top of 51 two-legged concrete piers. Each leg is supported underground by about 20 steel beams that extend downward vertically. Records of soil tests show layers of sand, clay and other materials before limestone bedrock appears about 120 to 130 feet down. The beams beneath the damaged pier reached layers of hardened clay or other material that were judged capable of supporting at least 150 tons — generally to depths of 90 to 115 feet. "You could have a soft spot," said Bill Kallman, a Michigan-based engineering consultant who has worked on bridges for the New York Department of Transportation. Kallman said he considers that a more likely explanation than corrosion, which he said would have to be widespread in order to affect all the steel beams supporting the damaged pier. Tom Buchholz, the state's project manager for the bridge investigation, said bridge designers don't require that beams go all the way into bedrock.
The Beginning of Wonders

Abstract. On the basis of the self-evident facts presented here, the first verse of the Hebrew Scriptures - already featuring the most widely read words of all time - must be regarded as the most remarkable combination of words ever written, and a standing miracle.

Introduction. Few people today realise that what is generally known as 'The Bible' represents, potentially, about 50% of the information contained between its covers. Let me explain: the original Hebrew, Aramaic and Greek documents from which all Bible translations ultimately derive may also be fairly read as sets of numbers. This intriguing situation comes about because, long ago, these ancient peoples adopted the practice of using the letters of their alphabets as numerals. Accordingly, each letter was associated with a fixed value, and a sequence of letters with the sum of their respective values. Consequently, equipped with the relevant scheme of numeration, every Hebrew or Aramaic word of the Old Testament and every Greek word of the New Testament may be readily translated into a whole number. But it is appropriate that we ask whether numbers obtained in this way can, in any sense, be regarded as meaningful. I suggest that, under normal circumstances, we would tend to conclude that these derivatives are meaningless adhesions to the text. But here is a Book that claims to be divinely-inspired! Might not things be different in this case? Might not the numbers represent information that complements the biblical text? Might not this particular text be self-authenticating? How could we know for sure? Clearly, a simple test is required to settle the matter.

The Test. It is reasonable that we begin at the beginning by considering the numbers that arise from a reading of the 7 Hebrew words of the Bible's first verse, Genesis 1:1 - a fundamental and strategically-placed assertion.
In word order these are, respectively, 913, 203, 86, 401, 395, 407, and 296; their sum - the number to be associated with the complete verse - is 2701, or 37x73. Now observe:

Concluding Remarks. These evidences of deep design in the Bible's opening words throw a completely new light on the true status of the Judeo-Christian Scriptures, for Who alone is capable of simultaneously speaking into existence a meaningful sentence copiously embroidered with such a variety of significant number structures - these incorporating the Author's own signature?! Can this be a sign for our generation? If so, how is it to be interpreted?

Vernon Jenkins MSc
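The arithmetic behind the author's central claim is easy to check for yourself. A minimal Python sketch, using only the seven word values quoted in the text:

```python
# Numeric values of the 7 Hebrew words of Genesis 1:1, as listed in the article.
words = [913, 203, 86, 401, 395, 407, 296]

total = sum(words)
print(total)             # 2701
print(total == 37 * 73)  # True: 2701 factors as 37 x 73
```

This verifies only the stated sum and factorization; whether such coincidences carry any meaning is, of course, the question the article itself raises.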
This section shows how to use the features of the WinPcap API. It is organized as a tutorial, subdivided into a set of lessons that will introduce the reader, in a step-by-step fashion, to program development using WinPcap, from the basic functions (obtaining the adapter list, starting a capture, etc.) to the most advanced ones (handling send queues and gathering statistics about network traffic). Several code snippets, as well as simple but complete programs, are provided as a reference: all of the source code contains links to the rest of the manual, making it possible to click on functions and data structures to jump to the corresponding documentation. The samples are written in plain C, so a basic knowledge of C programming is required. Also, since this is a tutorial about a library dealing with "raw" networking packets, good knowledge of networks and network protocols is assumed.

Copyright (c) 2002-2005 Politecnico di Torino.
Copyright (c) 2005-2009 CACE Technologies. All rights reserved.
Is it trite to say that prime numbers are unique? Probably. They are quintessentially unique. Pattern-seeking among prime numbers scattered here and there along the number line has ended many a time in a headache and a sleepless night. Why are they distributed so unevenly along the number line? And why, among the first 30 million numbers, are there only four perfect numbers? Based on the work of Omar E. Pol, this stunning visualization by Jason Davies reveals the interplay of each number's unique pattern, displayed as a periodic curve, superposed with the unique pattern of every other number. "For each natural number n, we draw a periodic curve starting from the origin, intersecting the x-axis at n and its multiples. The prime numbers are those that have been intersected by only two curves: the prime number itself and one." It begs the question: is it the superposition of these different patterns - these unique periodic curves - that causes the seeming irregularity of prime numbers? Something to ponder.

"This pattern cannot merely be a coincidence. A mathematician who finds a pattern of this sort will instinctively ask, 'Why? What is the reason behind this order?' Not only will all mathematicians wonder what the reason is, but even more importantly, they will all implicitly believe that whether or not anyone ever finds the reason, there must be a reason for it. Nothing happens 'by accident' in the world of mathematics. The existence of a perfect pattern, a regularity that goes on forever, reveals — just as smoke reveals a fire — that something is going on behind the scenes. Mathematicians consider it a sacred goal to seek that thing, uncover it, and bring it out into the open." — Douglas Hofstadter (I Am A Strange Loop, p. 117)

DH, while ready to admit that it is certainly a pretty image, was quick to point out that it isn't anything mathematically new.
While true, there is a usefulness to this pretty image: it provides a way of determining the primality (even the perfection) of numbers as far out on the number line as you can imagine. An elegant improvement on prime number calculators, I think.
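The quoted criterion is easy to check in code. Here is a minimal sketch (mine, not from Davies's visualization; the function names are my own): the curve drawn for each natural number d crosses the x-axis at every multiple of d, so the number of curves passing through n equals the number of divisors of n, and a prime is crossed by exactly two curves, the one for 1 and the one for n itself.

```python
def curves_through(n):
    """Count the periodic curves crossing the x-axis at n (one per divisor of n)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_prime(n):
    # A prime is intersected by exactly two curves: its own and the curve for 1.
    return n > 1 and curves_through(n) == 2

print([n for n in range(2, 30) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The same divisor count detects the perfect numbers the post alludes to: n is perfect when its divisors other than n sum to n, and only four such numbers (6, 28, 496, and 8128) lie below 30 million.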
Operations on the Abdominal Wall

John O. L. DeLancey, MD
Robert G. Hartman, MD

Table of Contents

RELEVANT ANATOMY OF THE ANTERIOR ABDOMINAL WALL
OPENING THE ABDOMEN: SKIN PREPARATION, INCISION, AND HEMOSTASIS
SPECIFIC INCISIONS: CHOICES AND TECHNIQUES
PRINCIPLES OF ABDOMINAL WALL CLOSURE

Incision and closure of the abdominal wall is one of the most frequently performed, yet least discussed, of surgical procedures. This chapter will cover the fundamental principles of anatomy and wound physiology related to this topic, as well as the basic types of incisions used in gynecologic and obstetric surgery. Prevention and treatment of common complications will also be discussed.

RELEVANT ANATOMY OF THE ANTERIOR ABDOMINAL WALL

The structural integrity of the anterior abdominal wall depends upon the rectus abdominis muscles, the muscles of the flank, and the conjoined tendons of the flank muscles that combine to form the rectus sheath. The rectus abdominis muscle is found on either side of the midline, with the pyramidalis muscle lying superficial to the rectus muscle just above the pubis. Lateral to these are the flank muscles: the external oblique, internal oblique, and transversus abdominis (Fig. 1 and Fig. 2). The broad sheet-like tendons of these latter muscles form aponeuroses that unite with their corresponding members of the other side, forming a dense white covering of the rectus abdominis muscle, properly called the rectus sheath (sometimes referred to in surgical writings as the rectus “fascia”).

Rectus Abdominis and Pyramidalis Muscles

There are three tendinous inscriptions within each rectus abdominis muscle. These are fibrous interruptions within the muscle that firmly attach it to the rectus sheath. They are confined to the region above the umbilicus1 but can occasionally be found below.
When found below the umbilicus, the rectus sheath is attached firmly to the rectus muscle at the inscription, causing difficult separation during Pfannenstiel incision. In addition, these points of fixation keep the muscle in place when it is transected during Maylard incision. The pyramidalis muscles arise from the pubic bones and insert into the linea alba in an area several centimeters above the symphysis pubis. The pointed insertion of the pyramidalis muscles into the linea alba can be used to assist in locating the midline.

The most superficial of the flank muscles is the external oblique. It runs diagonally anteriorly and inferiorly, from its origin on the lower 8 ribs and iliac crest, to fuse with the rectus sheath. The fibers of the internal oblique fan out from their origin on the anterior two thirds of the iliac crest and the thoracolumbar fascia. In most areas, they are perpendicular to the fibers of the external oblique, but in the lower abdomen, their fibers arch somewhat more caudally and run in a direction similar to those of the external oblique. The deepest of the three layers, the transversus abdominis, has fibers that run in a primarily transverse orientation. The caudal portion of the transversus abdominis muscle is fused with the internal oblique muscle. This explains why only two layers are discernible in the lateral part of a transverse lower abdominal incision. (The superficial layer is formed by the external oblique and the deep layer by the fused internal oblique and transversus abdominis muscles.)

The rectus sheath is formed by the conjoined aponeuroses of the flank muscles. The line of demarcation between the muscular and aponeurotic portions of the external oblique occurs along a vertical line through the anterior superior iliac spine (see Fig. 1). The internal oblique and transversus abdominis muscles extend farther toward the midline, coming closest at their inferior margin, at the pubic tubercle.
Therefore, the muscular fibers of the internal oblique are found underneath the aponeurotic portion of the external oblique.2

There are several specialized aspects of the rectus sheath that are important to the surgeon. In forming the rectus sheath, the conjoined aponeuroses of the flank muscles are separable lateral to the rectus muscles, but as they reach the midline, they fuse and lose their separate directions. As a consequence of this midline fusion, these layers are usually incised together in the midline during any transverse fascial incision until separate layers are identified laterally, where they can be individually incised parallel to the direction of their fibers.

The lower one fourth of the rectus sheath lies entirely anterior to the rectus muscle, while in the upper three fourths it splits to lie both ventral and dorsal to it, forming both an anterior and posterior rectus sheath. The lower margin of the posterior rectus sheath is recognized as the semicircular or arcuate line, occurring midway between the umbilicus and the pubes. Cranial to this line, the midline ridge of the rectus sheath, the linea alba, unites the anterior and posterior sheaths. Sharp dissection is usually required to separate these layers when elevating the rectus sheath during Pfannenstiel incision. When the peritoneum is opened vertically and past the arcuate line, the posterior rectus sheath is divided along with the peritoneum and must be repaired during closure.

Transversalis Fascia, Peritoneum, and Bladder Reflection

Deep to the muscular layers, and superficial to the peritoneum, lies a layer of fibrous tissue called the transversalis fascia, which lines the abdominal cavity. It is visible during abdominal incisions as the layer just underneath the rectus abdominis muscles. It is separated from the peritoneum by a variable layer of adipose tissue and is frequently incised or bluntly dissected off the bladder prior to opening the peritoneum.
The peritoneum itself is a single layer of serosa with a thin subserosal layer of connective tissue. It is thrown into five vertical folds by underlying ligaments or vessels that converge toward the umbilicus (Fig. 3). The single median umbilical fold is caused by the presence of the urachus. Lateral to this are paired umbilical ligaments raised by the obliterated umbilical arteries that connect the internal iliac vessels to the umbilicus. Finally, the lateralmost ridge is caused by the deep inferior epigastric arteries and veins. The reflection of the bladder onto the abdominal wall is triangular in shape, with its apex blending into the medial umbilical ligament. Because the bladder extends highest in the midline, incising the peritoneum somewhat off the midline is less likely to result in bladder injury and can provide more exposure.

Vessels of the Abdominal Wall

Beginning as a single artery that branches extensively, the superficial epigastric vessels run a diagonal course in the subcutaneous tissue from the femoral vessels toward the umbilicus. Their position can be anticipated on a line between the palpable femoral pulse and the umbilicus, just superficial to Scarpa's fascia. The deep inferior epigastric artery and its accompanying veins originate lateral to the rectus muscle from the external iliac vessels. They run diagonally toward the umbilicus and cross the muscle's lateral border midway between the pubis and the umbilicus (see Fig. 3). Below the point at which the vessels intersect the rectus, they are found lateral to the rectus muscle, deep to the transversalis fascia. After crossing the lateral border of the muscle, they lie on its dorsal surface, between the muscle and the posterior rectus sheath. As the vessels enter the rectus sheath, they branch extensively so that they no longer represent a single trunk.
The angle between the vessel and the border of the rectus muscle forms the apex of Hesselbach's triangle (inguinal triangle), whose base is the inguinal ligament. A summary of some clinical applications for specific anatomic points is presented in Table 1.

Most wound complications can be traced to a failure of the healing process to eliminate the bacteria that are invariably introduced in some quantity into the wound, or failure of the healing process to synthesize adequate quantities of collagen to restore abdominal wall strength. Understanding the fundamental processes that are responsible for these functions is necessary to best create and close an abdominal incision. For a detailed description of these important events the reader is directed to Hunt and Dunphy's excellent book on this subject.3 The basic principles of healing are summarized here.

There is a common misconception that infectious complications are primarily related to sterile technique. Information concerning sterile technique has received a great deal of attention, yet surgical technique continues to play a critical role.4 The wound-healing process is a balance between the amount of damage done to the tissue during an operation and the ability of the body to decontaminate and repair it. The surgeon stands in a position to influence this balance significantly and to affect both the rate of wound infection and dehiscence. Studies have shown that when a surgeon's wound-infection rate is compared with the rates of peers in the same institution, the surgeon with a higher infection rate can decrease it simply by altering his or her management of abdominal incisions.5

Understanding how wounds heal is critical to minimizing postoperative wound complications. With the initial incision, exposure of blood and platelets to connective tissue begins the inflammatory response that will sterilize and heal the wound.
During the initial phases of this process, the small vessels in the region of the injury become permeable to both molecular and cellular mediators of the inflammatory response. These elements are essential to eliminating bacteria through opsonization, phagocytosis, and cellular killing, as well as to recruiting wandering tissue macrophages that direct subsequent events. This is a decisive phase because it establishes the inflammatory process that is to follow. Clinical studies have shown that injection of vasoconstrictive agents at the time of surgery limits this response and is associated with increased numbers of infections.6 This may seem paradoxical because the infections do not appear for several days--well after the vasoactive effects are gone. The vasoconstriction prevents the outpouring of the factors that initiate the inflammatory response. This creates a period of time for bacteria to multiply exponentially and become established in numbers that will later overwhelm host defense, thus explaining this phenomenon and calling attention to the importance of these early events.

After this initial phase, the polymorphonuclear neutrophils (PMNNs) and wandering tissue macrophages begin their work of digesting damaged tissue, killing bacteria, and synthesizing the chemotactic factors that direct wound repair. These cells lay the groundwork for the later appearance of the fibroblast that will reestablish wound strength. Although these cells are capable of limited activity in an anaerobic environment, their proper function in the wound depends upon the oxygen supply to tissue and, therefore, upon a lack of surgical damage adjacent to the wound. This emphasizes one of the key surgical applications of wound healing; namely, protecting the capacity of adjacent tissues to perfuse the healing wound after the operation by avoiding unnecessary damage to this tissue.

The next critical factor in proper healing is the amount of necrotic tissue created.
Actual repair must begin from healthy tissue. If a ligature is placed around a piece of adipose tissue, it will become necrotic. Healing must then begin from the uninjured tissue behind the area of damage. Before repair can reach the edge of an incision, the healing process must disinfect, digest, and remove the dead tissue. During this delay, bacteria in the ischemic tissues can multiply, further increasing the need for cleanup, delaying repair, and increasing the likelihood of infection. Hemostasis (whether by ligation or electrocautery), abrasion, and desiccation of tissue are all injuries that occur in any incision. The more of these damaging elements present, the more necrotic tissue the body must eliminate before joining the edges of the wound. This allows more time and space for bacteria to multiply and overwhelm the host. Multiple knife strokes made when incising the subcutaneous tissue leave more damaged tissue behind than a single, clean stroke and can be shown to increase the risk of wound infection.7

Although the minute details of repair cannot be covered here, an understanding of several points may influence the surgeon's management of abdominal incisions. First, healing is under the direction of the inflammatory response, especially the macrophage, and agents that influence inflammation also influence healing. For example, the anti-inflammatory effect of steroid hormones can impair both PMNN and macrophage function and will influence the development of wound strength. The re-establishment of abdominal wall strength depends upon the synthesis of new connective tissue. This is accomplished by fibroblasts, requires the protein precursors for collagen synthesis, and occurs most rapidly in a normally oxygenated environment where the enzymes and cofactors needed for collagen synthesis are present.
Factors limiting the availability of these critical substances and conditions will delay or impair the development of wound strength and increase the likelihood of wound disruption. Ischemia caused by tight sutures, foreign bodies, lack of nutritional factors such as protein or ascorbic acid, or inhibitors of cell division can adversely influence wound healing. Collagen, the primary structural protein of the body, is synthesized by the fibroblast. It begins to appear in the wound on the second day, as an amorphous gel devoid of strength. Maximum collagen synthesis occurs around the fifth day. It depends especially upon the presence of oxygen, vitamin C, and amino acid precursors. Deficiency of these factors in the wound can inhibit healing, resulting in an increased incidence of wound dehiscence. Maximum strength development does not occur for several months and depends upon the interconnection of the collagen subunits. Approximately 80% of original strength is reached in about 6 weeks, and this can be significantly delayed if the normal factors for wound repair are not present.

It is important to recognize that perfusion of the wound is the most important factor in wound healing. Integrity of the microvasculature and flow is responsible for the oxygenation needed for cellular metabolism. Damage to tissue that impairs the delivery of oxygen to the wound increases the number of wound infections and the likelihood of dehiscence. Adopting an attitude in the operating room where tissue damage is minimized has been shown to decrease complications in obstetric and gynecologic surgery.8,9

OPENING THE ABDOMEN: SKIN PREPARATION, INCISION, AND HEMOSTASIS

The concept of cleaning the skin for surgery began with Maimonides in the 12th century and has evolved significantly during this century. The choice of approach and agent for cleansing is often derived from tradition and salesmanship rather than proven efficacy.
Widely accepted goals, however, include cleansing away dirt and contaminants by physical means and rapid antisepsis to reduce bacterial density, followed by application of a long-acting bactericidal agent to deal with resident bacteria brought to the skin's surface by sweat. Handwashing has been shown to reduce bacterial counts significantly, but to an inadequate degree. In fact, prolonged handwashing with plain soap may increase bacterial density due to chapped skin. Nonetheless, handwashing remains an important element of preoperative preparation by removing gross contaminants and dirt.

Numerous antiseptics are available, but with variable properties and effectiveness. Alcohol has repeatedly been shown to be an excellent choice due to its immediate and broad activity against gram-positive and gram-negative organisms. Although not sporicidal, alcohols also act against many fungi and viruses as well as mycobacteria. A 1-minute scrub with alcohol has been shown to be as effective as a 4- to 7-minute scrub with other antiseptics.10 A 70% solution is most commonly used as a compromise between effectiveness and desiccation of the skin. Ethanol, n-propyl, and isopropyl alcohols are all effective. The World Health Organization's draft guidelines in 1987 designated alcohol as the gold standard against which all other skin antiseptics should be compared.10 Surgeons must be aware of alcohol's highly flammable nature, however, taking special precautions to assure complete drying where electrocautery or laser will be used.

Other popular scrubs include the iodophors, hexachlorophene, and chlorhexidine gluconate. The iodophors are highly effective, but their antimicrobial action declines rapidly upon drying. Hexachlorophene is active against gram-positive bacteria but less so against gram-negative bacteria, mycobacteria, and viruses.
Chlorhexidine gluconate has a broad spectrum of antibacterial activity, but is relatively more effective against gram-positive bacteria than gram-negative bacteria, with fair activity against the tubercle bacillus and poor activity against viruses. It does have extended effectiveness, remaining chemically active for approximately 5 hours. It is also available as an alcohol-based hand-rinse, combining the rapid and effective action of alcohol with the long action of chlorhexidine gluconate.

Preparation of the patient's skin involves similar considerations to those noted above. If necessary, hair removal should be accomplished immediately prior to surgery by clipping, not shaving, as the latter has been shown to damage the skin's defenses and increase the risk of wound infection.5

The incision should be accomplished with the least possible tissue damage. A scalpel should be used, and the fewest possible strokes will limit tissue damage.7 Electrocautery tends to produce much larger zones of damage and increases infection rates.12 Hemostasis can be obtained with well-directed cautery and fine ligatures (4-0 absorbable suture is adequate), taking care to isolate bleeding vessels and to exclude any unnecessary tissue from the ligature. In general, when discrete vessels are encountered, isolation with a hemostat and ligation will provide the least volume of necrotic tissue. Absorbable sutures such as polyglycolic acid or polyglactin are preferred to catgut, which causes inflammation.

Wound drainage has a long and varied history.12 In cases where diffuse oozing persists or wound contamination is greater than normal, drains may be considered. Seromas and hematomas significantly delay approximation and healing of the subcutaneous tissue. Because a surgical drain is a foreign body, it has the potential to increase wound infection by its presence.13 In addition, it can provide access for bacteria to enter the wound after the skin has been closed.14 Subcutaneous drains, therefore, should be used only when there is sufficient risk of hematoma or seroma formation that the drain can do more good than harm, as is true in the massively obese patient.15 The best choice is a closed suction drain brought out through a separate stab incision, since this option offers a lower infection rate than Penrose drains or drains brought out through the incision.11

SPECIFIC INCISIONS: CHOICES AND TECHNIQUES

In choosing a particular approach to the pelvis or abdomen, specific operative goals should be weighed. Considerations include the need for speed, potential difficulties with hemostasis, exposure requirements, cosmetic concerns, the presence of a previous incision, and the patient's overall nutrition and health. The various advantages and disadvantages of abdominal incisions are summarized in Table 2. Although there is a temptation to become inflexible in the type of incision chosen, one must guard against habit or face the possibility of surgical compromise and complications.

The Pfannenstiel incision is perhaps the most frequently used incision in both obstetrics and gynecology, offering satisfactory exposure of the pelvis, excellent postoperative strength, and pleasing cosmetic results (Fig. 4). Limitations include lack of upper abdominal exposure, increased risk of hematoma or seroma formation--especially in the face of abnormal coagulation--due to the extent of dissection required, and greater operating time. Great care should be taken preoperatively to make certain that exposure out of the pelvis will not be required, and that the incision will provide enough room to remove the expected structure safely (e.g., a large fibroid uterus). The skin incision is made transversely, approximately 4 cm above the superior border of the pubis, and is carried through the subcutaneous fat.
When the rectus sheath is encountered, it is divided transversely in the midline with the knife, very often encountering the superior extent of the pyramidalis muscles. Once the incision is lateral to these structures, the rectus sheath is seen to consist of two layers: the aponeuroses of the external oblique and the combined internal oblique-transversus abdominis muscles. Each of these layers is separately divided laterally on each side with the scissors, following the fiber directions in each of the layers.

Of equal importance to the skin and fascial incisions in providing adequate exposure is the next step, separating the rectus muscles from the sheath superiorly and inferiorly. The sheath is elevated on each side of the midline using sharp and blunt dissection, separating the rectus sheath and muscle for a total distance above and below equal to the length of the fascial incision. If a tendinous inscription exists below the umbilicus, making elevation difficult, the muscle and sheath must be sharply separated, taking care not to cut through either the muscle or the sheath. Perforating blood vessels should be clamped, cut, and ligated only if bleeding occurs, otherwise preserving the nerve that accompanies the vessel. If the nerve is transected, some patients will develop an area of cutaneous anesthesia that can be annoying for the patient. The pyramidalis muscles may be left attached to the undersurface of the fascia, or left on the rectus muscles and divided in the midline with the next step.

The rectus muscles are separated in the midline; this may be initiated by spreading the points of a hemostat between the muscles until the transversalis fascia is encountered. The separation of the muscles can usually be carried out superiorly and inferiorly using blunt dissection, with the exception of the insertion of the pyramidalis muscles, which must be incised.
The peritoneum is opened, initiating the entry at the superior extent of exposure, off of the midline, in order to minimize the risk of entering the urinary bladder. Final exposure is obtained by spreading the entire incised abdominal wall laterally. If exposure is inadequate, the skin, fascial, and peritoneal incisions should be extended, along with further dissection between the rectus muscles and their sheath. The four points that limit the size of the incision are the two lateral corners of the incision in the flank muscles and the superior and inferior extent to which the fascia has been elevated and the rectus muscles separated. The former points can be extended to the iliac crests, and the latter, to the pubis and umbilicus. If further room is needed once these limits have been reached, conversion to a Cherney incision can be made by simply cutting the rectus abdominis tendons from the pubic bones. Although surgeons frequently hesitate to take this step, the danger from operating with inadequate exposure is far greater than the possible risk of difficulty with wound healing from the extended incision.

The Cherney incision17 combines the excellent exposure of a Maylard incision and the strength of a Pfannenstiel incision (Fig. 5). Unlike the Pfannenstiel incision, wherein the rectus muscles are separated, the Cherney permits the detachment of these muscles from their insertion on the pubes, and allows them to retract upward “like a window shade.” It begins with a low transverse incision of the abdominal skin and rectus sheath similar to the Pfannenstiel incision. The sheath is then elevated off the rectus abdominis muscle inferior to the fascial incision until the pubic bone is reached; superior dissection of the fascia need not be done. Leaving the pyramidalis muscle attached to the rectus sheath minimizes unnecessary bleeding. Next, the rectus muscles are severed from their insertion on the pubic bone.
This is accomplished by perforating the transversalis fascia lateral to the muscle, but medial to the deep inferior epigastric vessels in Hesselbach's triangle. The surgeon's finger is then used to dissect under the tendons of the rectus muscle. The tendons are cut approximately 0.5 cm above their insertion to free them from the pubic bones. If the tendon is shaved directly from the periosteum, there is no tissue to grasp and ligate should bleeding from the bone occur. Care should be exercised in dissecting around the lateral border of the muscle to avoid damage to the adjacent deep inferior epigastric vessels. If exposure is found to be limited by these vessels during the procedure, they can be ligated and the peritoneal and fascial incisions extended. Since the rectus muscles are no longer attached, lateral exposure can be extensive, and the incision can be carried above the anterior iliac spines into the flank, if needed.

At the completion of the operation, the tendons are reattached to the undersurface of the rectus sheath, rather than to the pubic bone. This is accomplished by placing horizontal mattress sutures (see Fig. 5). Usually, three 2-0 delayed-absorption sutures on each half of the muscle suffice, but if delayed healing is anticipated, permanent sutures should be employed. The re-establishment of these attachments provides for excellent postoperative strength.

The Maylard incision requires more time to accomplish and involves more potential blood loss than other incisions, but offers great pelvic exposure and can be used at any level of the abdomen (Fig. 6). For pelvic operations requiring this degree of exposure, the skin incision is generally made at approximately the level of the anterior superior iliac spine. In order to achieve adequate exposure of the pelvic sidewall, it is extended to about 5 centimeters medial to the iliac spine.
In obese patients with a pendulous “apron” of skin, the incision may not need to be made in the moist crease of the skin, but can be made below the umbilicus, as long as it lies above the pubic bone and the underlying crease is not traversed.18 Again, skin, subcutaneous tissue, and rectus sheath are divided transversely as noted above for the Pfannenstiel incision, carrying the incision past the lateral border of the rectus muscle. However, instead of separating the rectus muscles and fascia, the rectus muscles are cut transversely. The deep inferior epigastric vessels are usually ligated before transecting the muscles, but not all authors have felt this is necessary. To accomplish this ligation, recall the course of the inferior epigastric vessels as described above. They will be found by gently retracting the lateral border of each rectus muscle, exploring the region for their location. Near the pubis, in the area of Hesselbach's triangle, they are found lateral to the muscle, whereas above that level, they are found on its undersurface. Note that numerous branches may exist and should be carefully identified, isolated, clamped, cut, and ligated. After blunt dissection of the muscles from the peritoneum, they may then be divided safely with the knife or electrocautery. Elevation of the muscle off the peritoneum with the hand or an Army-Navy or similar retractor is required if the electrocautery is to be used safely. If the vessels are difficult to isolate, then the muscles can be cut before ligating the vessels. This approach is used to expose the vessels that lie between the muscle and the peritoneum. To complete the incision, the transversalis fascia and peritoneum are incised transversely.

The midline or paramedian vertical incision is the simplest of abdominal incisions, and it offers the greatest ease of extension into the upper abdomen as well as the least blood loss (Fig. 7).
Although the pararectal approach that goes lateral to the rectus muscle might be acceptable in terms of exposure, the resultant denervation of the rectus muscle weakens this incision,19 leaving the midline vertical and paramedian incisions for discussion in the context of gynecologic procedures. The main considerations in choosing the vertical incision include the need for speed, a relatively bloodless approach, and possible need for exposure of the upper abdomen. In addition, because of its lack of dead space, the vertical incision is preferable in patients taking anticoagulants or in the presence of disseminated intravascular coagulation. Also, for patients with cirrhosis of the liver who may have greatly enlarged abdominal wall vessels that follow a longitudinal course, the vertical incision minimizes the number of vessels that must be transected.

In the lower abdomen, the incision is made from just above the pubis to below the umbilicus in the midline. Although it is customary to carry the incision lateral to the umbilicus if extension to the upper abdomen is required, the incision can equally well be made through the umbilicus without any increased risk of disruption and is technically simpler to perform.20 Below the umbilicus the linea alba tends to be narrow, and the rectus sheath is usually entered on one side or the other, thus making this a paramedian incision rather than a true midline approach. The transversalis fascia and peritoneum are also opened in a vertical direction; entry should begin at the superior extent of the incision to obviate the possibility of bladder entry. As is true of all peritoneal entry, great care must be taken against the possibility of encountering adherent bowel. Closure considerations are discussed below.

PRINCIPLES OF ABDOMINAL WALL CLOSURE

Regardless of the type or direction of incision, the factors involved in closure are similar and will be discussed together.
Maintenance of tissue perfusion, minimizing necrosis, creating good initial strength, protection against late hernia formation, and assuring a cosmetic result are factors all incisions share in common.

Tight Sutures and Ischemia

All sutures used to close the musculofascial wall must be tied with enough tension to approximate the edges of the incision. If greater tension is applied, the tissue will become ischemic, and a certain amount of necrosis will develop. If the extent of necrosis is marked, the tissue will not hold the suture, resulting in dehiscence or hernia formation. This is illustrated by the fact that dehiscence rarely occurs immediately after the surgery but is usually delayed for several days,21 during which interval the tissue's weakness from ischemia develops. The choice of suture technique and the way that the sutures are tied determine the extent of necrosis that will occur. In much the same way that a suture tied around the base of a pedunculated skin lesion will allow it to become necrotic and fall off, abdominal wall sutures create ischemia, necrosis, and tissue disruption. Experiments studying the difference in the strength of wounds closed with tightly tied sutures, as opposed to those tied just tightly enough to coapt wound edges, demonstrate that the wounds with more loosely tied sutures are stronger.22,23 These same studies demonstrate that tightly tied sutures create lower breaking strength, increasing the likelihood of disruption. Therefore, whatever suture is chosen, it should not be placed so tightly as to cause ischemia.

A second element that is important to wound strength is the distance between the wound edge and suture placement. First, the inflammatory process at the wound edge produces collagenases to help with removal of necrotic debris. This zone of collagen degradation extends for approximately 1.5 cm from the edge.24 The fascia in this region is partially digested during the immediate postoperative period.
Secondly, there is a purely mechanical factor: the farther from the edge the suture is placed, the greater the amount of fascia the suture would have to tear through in order to pull free,25 and the more secure the closure would be. Therefore, sutures should be placed at least 1 to 1.5 cm from the wound edge. In patients at increased risk of wound disruption, sutures should be placed 2 cm from the edge.

Choice of Closure

There are several techniques that can be used to suture the wound edges together. In general, these can be divided into running and interrupted closures. Running sutures have the advantage of speed, since knots need only be tied at two or three points. In the past, these had often been considered to be weak closures because disruption of any portion of the suture would open the entire wound. More recently, it has been appreciated that these can be strong closures.26,27,28,29 Compared to interrupted sutures, the helical nature of an unlocked running stitch evenly distributes tension along the entire wound and allows for superior perfusion. Large studies of gynecology patients who are at high risk for dehiscence have demonstrated the safety of this type of closure.30 Interrupted and figure-of-eight sutures have an advantage: if one is insecurely tied, or breaks, the whole incision will not come apart. If tied only tightly enough to approximate the tissue, but loosely enough to permit adequate perfusion of the fascia within the suture, such sutures can provide a secure closure without necrosis. There is, however, an inherent tendency to pull forcefully on these sutures as they are being tied. This is unquestionably the reverse of what is best. If interrupted sutures are not done properly, they can be a greater impediment to a strong closure than a running suture that produces inherently less ischemia. Further aspects of wound closure will be discussed in the section on wound dehiscence below.
Closure of the peritoneum has been a topic of controversy, but several points are now clear. The peritoneal mesothelium does not heal like skin.31 Rather than healing only from the edges toward the center of a defect, as is true of epidermis and dermis, a new layer arises from the exposed bed of connective tissue. Therefore, it makes little sense to bring edges of peritoneum together to hasten healing. If the underlying tissue is undamaged, adhesions do not form over areas of absent mesothelium before it can regrow; moreover, this process of regrowth occurs rapidly (usually within 48–72 hours). On the other hand, rarely is the mesothelium alone incised or damaged. In opening the abdomen, the peritoneum is incorporated in the lower abdomen with the transversalis fascia. Above the arcuate line, the posterior rectus sheath also lies under the rectus abdominis muscles, and these structures are incised along with what is clinically referred to as the peritoneum. Closure of the transversalis fascia and posterior rectus sheath can add to overall wound strength and should be accomplished to make a more secure wound.32 Experimental studies of adhesions frequently use a suture tied around a piece of tissue as a reliable stimulus for the formation of adhesions. Therefore, if a ligature is exposed to the intraperitoneal contents, as it would be in a Maylard incision where the ligatures on the deep inferior epigastric vessels are exposed, covering it by approximating peritoneum with a fine (4-0) nonreactive suture seems preferable to leaving this nidus for adhesions exposed to bowel.

Dehiscence is defined as the separation of the sutured layers of the abdominal wall and may be classified as partial or complete. In the case of a partial dehiscence, one or more, but not all, of the sutured layers may separate. This situation may also be referred to as wound disruption.
Complete dehiscence is marked by separation of all layers resulting in exposure of the peritoneal cavity. Synonyms include evisceration and burst abdomen. The incidence of this complication has been quoted as 0.3% to 3% of all pelvic surgeries, but it is currently thought to occur in less than 1% of cases.33,34,35 Historically, the incidence was thought to be greater for vertical as opposed to transverse incisions, but more recent studies have shown them to be equal.35,36

ETIOLOGY AND PREVENTION. The main causes of dehiscence include failure of the suture to remain anchored in the fascia, suture breakage, and knot failure. Of these, tissue failure and improper suture choice are the most common.35,37 Since closure techniques involving permanent suture and wide bites of tissue exist that effectively prevent dehiscence, the central problem becomes one of recognizing the patient in whom the extra time taken to use a more secure closure is justified. Risk factors for dehiscence are listed in Table 3. Inherent strength of abdominal wall tissue affects the risk of dehiscence and, in turn, is also affected by such factors as age, sex, metabolic disease, and the presence of malignancy. Patients over the age of 60 are at increased risk, as are males, with a male-to-female ratio of 2.6–6.7:1.36 Uremia and diabetes are associated with poor healing, as is vitamin C deficiency in the malnourished patient. These underlying conditions should be corrected if possible. White and co-workers found half of their cases of burst abdomen occurred in patients with malignancy.38 The presence of these risk factors indicates the need for a closure that is resistant to disruption, such as a mass closure, Smead-Jones closure, or placement of retention sutures. The method of closure plays an important part in wound security (Fig. 8).
In layered closure, each layer (peritoneum, fascia, subcutaneous tissue, and skin) is closed separately, as opposed to mass closure, where all layers, usually excluding skin, are closed as a single unit. Ellis cites mass closure as one of the most significant advances in reducing the risk of burst abdomen. One of the key elements in mass closure is the obligatory use of wide tissue bites (1.5–2 cm from the wound edge) when placing the suture. Recall that the edge of the fascial incision is often necrotic to some degree, resulting in tenuous tissue strength and increased risk of disruption if sutures are placed near the cut edge. In a review, Wadstrom and Gerdin27 found no studies proving an advantage to layered closure. Similar arguments are forwarded in concluding that a continuous closure is superior to interrupted sutures; no clinical advantage of interrupted closure has been shown in the large majority of studies comparing the two. In view of the shorter operating time for continuous closure, it would seem to be the obvious choice. Additionally, one could argue that it is easier to tie two or three knots precisely than to tie precisely the many knots required in interrupted closure. As noted above, to prevent necrosis and subsequent wound disruption, sutures must not be placed under undue tension. In patients at unusually high risk of wound dehiscence, special consideration should be given to using a closure that is, perhaps, somewhat more time-consuming, but lessens the risk of disruption.39,40 When a wound dehiscence occurs and permanent suture has been used, the separation usually occurs where the sutures are inserted into the tissue.
Therefore, unless a weak or absorbable suture has been used, it is usually not the suture that is at fault, but rather the way that the sutures are anchored in the tissue.37 As previously mentioned, the farther a suture is placed from the edge of the wound, the more force is required to pull it out, making wide suture placement a stronger technique. An additional factor that can be used to increase the strength with which the suture can be anchored to the tissue comes from distributing the tension that the suture places on the tissue between at least two points. The Smead-Jones suture takes advantage of this distribution by placing two bites on each side of the wound edge in a far-near, near-far arrangement, as shown in Figure 8. As originally described, this technique was a mass closure that incorporated the muscle, fascia, and peritoneum of the abdominal wall. This closure, incorporating all layers, is extremely strong, but it is usually modified to have a separate peritoneal closure, with the Smead-Jones stitches including only the musculofascial layer. This suture gains its strength from the fact that before the suture can tear out of the tissue, it would have to rupture the tissue at two points rather than just one,41 and it can be done either as an interrupted technique or as a running suture. The most secure closure of the abdomen includes both closure of the musculofascial layers and placement of retention sutures through all layers of the abdominal wall, including the skin and usually (although not always) the peritoneum. It is virtually impossible for dehiscence to occur while these sutures are in place because of the great amount of tissue they would have to disrupt in order to pull out. They are usually placed with a number 1, or greater, suture, and are especially useful in treating a wound that has already undergone dehiscence. Proper suture selection will decrease the risk of dehiscence in patients at normal risk.
Clinical studies have shown that wound disruption is most likely to occur in the early postoperative period, usually 5 to 8 days after surgery. Both theoretical concerns about maintenance of suture strength and clinical studies agree that there is no place for catgut sutures in fascial closure. Other absorbable sutures, such as polyglycolic acid (Dexon) and polyglactin (Vicryl), even though they lose up to 80% of their tensile strength in 2 weeks,42 seem to compare favorably to permanent sutures, such as Prolene, in healthy patients undergoing elective surgery who are at no unusual risk for dehiscence. As we will see below, there may be concern about the more distant complication of wound hernia. In patients at risk for dehiscence, permanent sutures are needed. In addition to the way in which a wound is closed, the stresses placed upon it are important. Mechanical factors such as abdominal distension from ileus, vomiting, and chronic cough may also play a role and should, therefore, be treated when present or prevented when possible in the patient already at risk due to other factors. Other factors that may weaken a wound include the presence of hematoma, wound infection, and obesity. A hematoma will disrupt tissues, preventing approximation as well as providing an excellent nidus for infection. More often than not, wound dehiscence is associated with a combination of events that, when recognized, calls for meticulous preoperative preparation, wound closure, and postoperative care.

DIAGNOSIS AND TREATMENT. Due to the increased morbidity and mortality associated with dehiscence, diagnosis and treatment should be prompt. Mortality as high as 15% to 20% has been reported in the literature, although more recent studies demonstrate a rate around 10%. Mortality is not solely due to the dehiscence, however; these patients are often ill with other chronic disease and undergo a second anesthetic. Wound disruption usually occurs on the sixth to eighth postoperative day.
Evisceration will be apparent on simple inspection. When this is the case, the intestines should be covered with a saline-moistened towel and immediate steps taken to close the incision in the operating room. Although lesser degrees of disruption may be asymptomatic, many patients have a sense of something “giving way.” The most common complaint is that of a profuse serosanguineous discharge from the wound. When disruption is strongly suspected, careful exploration may best be accomplished in the operating room with suitable anesthetic. In any event, the wound should be opened as necessary to aid in diagnosis and the fascial closure critically evaluated for disruption. A broad-spectrum antibiotic should be started as soon as cultures have been obtained. Once the patient is in the operating room, the wound must be cleansed carefully and thoroughly. Debridement of the subcutaneous tissue and fascia should be accomplished as necessary. Strict attention must be paid to closure, in this case to mass closure. Permanent material of suitable size (number 1 or larger) should be used along with, possibly, retention sutures, depending on the patient's general health and other etiologic factors. Undue suture tension must be avoided to reduce the possibility of necrosis, and bites at least 2 cm from the edge should be used. Retention sutures may be left in place for 14 to 21 days. Underlying conditions should, of course, be treated (e.g., an NG tube placed if ileus is present). When malnutrition is present or develops, hyperalimentation should be considered and followed by careful dietary support once oral alimentation is resumed.

Wound herniation is defined as an incomplete dehiscence in which the peritoneum, subcutaneous tissue, and skin remain intact, but the muscle or fascia does not.
As opposed to dehiscence that occurs and is recognized in the early postoperative period, wound herniation follows apparently satisfactory healing, only to present with an incisional defect at a later date. Although it has been said that most such defects occur before 6 months and nearly all before 1 year, several studies with longer patient follow-up have found occurrences up to 5 years after the initial operation. For low midline incisions, the most often quoted incidence is about 1% in uncomplicated cases; after wound infection, the risk rises to about 10% and, after repair of a dehiscence, to about 30%. Ellis maintains that the reported low incidence of wound hernias is due in part to the lack of prolonged follow-up in most studies. In a review of their own patients for up to 5 years after operation, in a group free from hernia at 1 year, an additional 5.8% incidence was found.44 These defects range from the small and insignificant to the large and unsightly. When the fascial defect is small, the risk of volvulus and infarction of the hernial contents is increased, but still uncommon.

ETIOLOGY AND PREVENTION. Late wound separation is more common in the lower abdomen because of increased hydrostatic pressure, the lack of the posterior rectus sheath below the arcuate line and, for vertical incisions, the greater lateral forces provided by the bulkier oblique muscles inferiorly. The underlying cause is inadequate healing of the fascial layer, perhaps more related to degree than representing a true difference in etiology when compared to dehiscence. Causes may include fascial necrosis from initial excessive suture tension or, secondarily, from abdominal distension associated with ileus, postoperative nausea and vomiting, or pulmonary disease resulting in chronic cough. Necrosis may, in turn, be followed by suture pull-through due to inadequate tissue strength.
These causes of late wound separation can be largely eliminated by placing wide tissue bites and by approximating the tissue without undue tension. Poor tissue vitality is also the common factor in wound hernias associated with wound infections and after repair of dehiscence. One element in reducing the risk of wound infections would be selection of a permanent monofilament suture. Another strong association is fascial closure with catgut suture, due to its inadequate retention of strength during the period critical to healing.45,46

DIAGNOSIS AND TREATMENT. Consideration should be given to the diagnosis of hernia when, with the patient lying supine and legs raised, an incisional bulging is noted. The hernia is often asymptomatic, although the patient may complain of a bulging, or even note apparent peristalsis, with resolution upon lying down followed, in turn, by recurrence when standing. The bulge may increase with a Valsalva maneuver. On examination the fascial defect is often palpable. Because torsion and infarction are uncommon, colic, distension, nausea, and vomiting are unusual symptoms, and repair is usually elective. Small, asymptomatic hernias need not be repaired; those that are symptomatic, large, or disfiguring deserve operation. The principles of repair are as follows. Usually, the old skin scar is excised, followed by careful dissection of the subcutaneous tissue until the hernia sac is encountered. Wide isolation of the sac is continued until the fascial edges are encountered and adequately undermined. The sac may then be opened, and attention turned to any adhesions of peritoneum, bowel, or omentum, which are carefully separated until the hernia sac can be closed with a purse-string suture and excess sac excised. This dissection can be confusing, with apparent secondary hernia sacs formed by multiple bowel adhesions which, in turn, must be separated until sufficient normal anatomy can be restored to permit a safe closure.
The next task is isolation of the fascial edges and freshening of them if the size of the defect will allow primary closure. Several satisfactory approaches may be used, including Smead-Jones closure, mass closure, and overlapping of fascia (pants-and-vest closure). Theoretical considerations would point to the selection of a permanent monofilament suture. Appropriate selections would include 0 or 1 Prolene, nylon, or other permanent sutures. Wire, although important historically, has no advantages over the newer, nonreactive, monofilament sutures. Large defects may need to be closed with the use of prosthetic mesh (e.g., Mersilene, Marlex, Gore-Tex) to bridge the hiatus where approximation of the fascial edges is not possible or causes undue tension. Other indications for the use of a graft would include repair of a recurrent hernia, grossly attenuated tissues, and fascia that is too weak for adequate repair. Most commonly, the graft is placed anterior to the peritoneum and transversalis fascia, and posterior to the rectus muscles. The material must be anchored to the posterior aspect of the recti “on stretch” to prevent folding when the muscles are approximated in the midline. The anterior rectus sheath is approximated as closely as possible. In some instances, dissection between the hernial sac and the fascia may prove exceedingly difficult or impossible, in which case the Marlex mesh may be anchored to the anterior aspect of the rectus muscles and, again, the anterior rectus sheath would be approximated as closely as possible. This location of the dissection is not as satisfactory, however, in that the already increased risk of wound infection due to the presence of the graft is further increased by its proximity to subcutaneous tissue and skin.
Postoperatively, predisposing conditions for wound failure should be addressed fastidiously, including control of nausea and vomiting, aggressive and early treatment of ileus and pulmonary complications, and attention to adequate nutrition.

Wound infection has been reported to occur in 2% to 4% of all clean abdominal incisions and up to 35% of all grossly contaminated incisions. Clean incisions are defined as those initiated on prepared skin without entering a contaminated viscus or encountering infection. Clean contaminated wounds are the same as clean incisions, but a contaminated viscus, such as the vagina that has been prepared, is entered without gross spillage. A wound is classified as contaminated if an infected genitourinary tract is entered or gross gastrointestinal spillage occurs. A dirty wound is one that occurs when pus from an abscess is spilled intraoperatively, or previously ruptured bowel is present. The rate of infection varies not only according to increasing severity of contamination, but also according to patient socioeconomic status, surgical technique, operating time, obesity, age, and sex.

ETIOLOGY AND PREVENTION. Infection is often initiated by direct inoculum of bacteria into the wound from the patient's or surgeon's skin and is potentiated by the presence of necrotic tissue. Proper preparation of both is necessary to ensure the lowest possible rate of infection. If hair removal is required, clipping immediately before surgery is preferable to shaving, and either is preferable to shaving the evening before, which has been associated with higher rates of wound infection.
For the surgeon's scrub, both initial antisepsis and the use of a long-acting antibacterial agent are recommended; 20% to 40% of all gloves are punctured during the course of an average operation.47 Although not currently the most popular choice, an initial scrub followed by a 1-minute application of alcohol in an emollient base or the chlorhexidine gluconate-alcohol hand rinse (Hibistat) would represent the best possible preparation. After adequate skin antisepsis, multiple intraoperative factors come to bear. Since devitalized tissue offers increased opportunity for poor wound healing and infection, every effort should be made to minimize its presence, including meticulous incisional technique with a stainless steel scalpel and precise hemostasis with cautery or fine, nonreactive suture. These same considerations hold true while operating (i.e., creation of the smallest possible pedicles, only precise use of the electrocautery, avoidance of ischemic closure of the vaginal cuff). Mass closure of the abdominal wall with continuous monofilament suture would seem preferable in theory, although clinical studies have not yet supported this view (other considerations, such as decreased risk of dehiscence, may suggest this combination).27 Other closure considerations include copious irrigation with nonirritating physiologic solutions, especially in wounds other than those classified as clean. Even in clean wounds, however, irrigation removes fragments of free tissue and fat globules from separated adipose cells that would prolong inflammation and delay repair. Drains may be placed in the subcutaneous tissue when diffuse oozing resistant to hemostatic efforts is present. Soft drains, such as the Penrose, have been replaced by closed suction drains brought out through a separate stab wound, with improved results (i.e., decreased rates of infection and hematoma).
A trial of closed subcutaneous drains alternately placed on suction and irrigated every 8 hours for 3 days with an antibiotic solution showed possible benefit in grossly infected wounds, but this approach is probably not justified in clean contaminated wounds.48 A more traditional and proven approach to the contaminated wound is to leave it open, either closing it in delayed primary fashion or allowing it to heal by secondary intention. With delayed primary closure, Verrier and colleagues showed a decrease in infection rates in contaminated wounds from 11.1% to 4.8% and in infected wounds from 33% to 6.6%.49 Primary closure of the skin and subcutaneous tissues is defined as closure at the time of the initial operation, and secondary closure indicates closure after granulation tissue has formed, either with suturing or spontaneously as healing by secondary intention. A delayed primary closure is one in which the subcutaneous tissue and skin are not closed at the time of initial surgery, but covered by a sterile dressing and then closed some days later (usually on the fourth day), but before the formation of granulation tissue. Sutures can be placed during the original operation and left to be tied later, or the wound can be sutured under local anesthesia in the patient's room. During this time, the body's immune response has had a chance to clean the wound, and microscopic capillary formation has begun, creating excellent oxygenation of the wound edge. Closure of the wound on the fourth day greatly decreases the chance of infection, allowing patients to avoid the potentially serious problem of sepsis associated with wound infection. This approach is most helpful during treatment of pelvic infection, especially in patients with poor healing characteristics. In these patients, delayed primary closure has resulted in an extremely low complication rate.50,51

DIAGNOSIS AND TREATMENT.
Wound infections may present in several ways, depending on the extent of the infection, host resistance, and the etiologic microorganisms. Early, mild infections may be associated with only scant exudate from the incision and, upon exploration of the wound, poor healing. Hemolytic streptococcal organisms may cause erysipelas, an infection marked by a rapidly extending erythematous cutaneous border. Deeper infections may be found during the process of evaluation for postoperative fever and may additionally be associated with erythema, induration of skin and subcutaneous tissues or, possibly, fluctuation. One must be alert for the rare but devastating signs of necrotizing infections, including brawny edema, cutaneous sensory loss, and obvious necrosis. Patients with necrotizing fasciitis need prompt and aggressive debridement under general anesthesia to avoid death. In cases of contaminated and infected wounds, consideration should be given to delayed primary or secondary closure. In these situations characteristics of each case should be taken into account, such as the amount of infected tissue left behind, nutritional status of the patient, presence of diabetes, malignancy, or obesity--factors associated with poor wound-healing. When the decision is made to proceed with delayed closure, retention sutures may be placed. Permanent, monofilament suture would be the best choice. Cultures, of course, should be obtained. Postoperatively, the incision can be left covered until the fourth day, at which time the attending physician assesses whether the wound is clean enough to close. If there is any infected or necrotic tissue, then regular dressing changes and debridement can be commenced postoperatively until the wound is ready to close. Delayed primary closure may be done using one of several techniques: closure with sutures placed, but not tied, in the operating room; placement of sutures with local anesthesia; or application of sterile adhesive strips. 
In the high-risk patient, when coaptation of the wound is difficult, or if the wound does not appear clean in a reasonable period of time, the wound may be allowed to heal by secondary intention. Perhaps surprisingly, the cosmetic result in such a case is equal to that of delayed primary closure. Careful instruction prior to discharge and follow-up by a visiting nurse will be very helpful to the patient and her family. Treatment for superficial and minor infections may consist only of application of moist heat. Erysipelas usually responds rapidly to such local treatment with the addition of penicillin. When discharge from the wound is prominent, or fluctuation is thought to be present, the wound should be explored and all areas presenting little resistance to separation opened fully. Cultures should be obtained, appropriate antibiotics should be started, and the wound should be debrided and packed. Secondary closure may be desirable and possible if the wound reveals healthy granulation tissue 3 to 5 days after opening. Again, the patient may be sent home with follow-up by a visiting nurse.

Nerve injury associated with abdominal incision can pose a distressing, and often unexpected, ending to an otherwise successful operation. Two types of injury occur. First, the incision and closure may transect or damage the nerves of the abdominal wall. Second, a retractor used during the operation can cause injury to nerves on the posterior body wall. The most serious nerve damage is that to the femoral nerve, because of the loss of innervation to the quadriceps muscle in the leg and loss of the ability to extend the leg at the knee joint. This damage is usually caused by the blades of a self-retaining retractor. The lateral blades of these instruments can press upon the nerve as it emerges from the lateral border of the psoas muscle before passing under the inguinal ligament (Fig. 9).
The frequency of femoral neuropathy after gynecologic surgery is surprisingly high, occurring in approximately 10% of pelvic laparotomies.52,53,54 Fortunately, most of these cases resolve spontaneously; when a case does not, however, it poses a significant problem. Damage to the nerve should be suspected with loss of sensation in the anteromedial thigh, diminished knee jerk, and weakness of extension of the knee, which creates a specific problem climbing stairs. In addition to damage to the main femoral nerve, retractor blades can compress the genitofemoral nerve that emerges from the body of the psoas muscle to lie on its muscle belly. Although this situation creates no motor abnormality, the loss of sensation in the upper medial thigh and labium majus can be quite distressing. The risk of these complications is higher in thin individuals and when retractors with deep blades have been used. Simply placing a laparotomy pack over the retractor blades will not diminish the amount of force that impinges on the nerve, and a space between the blade and nerve should always be confirmed, remembering that some downward pressure will unavoidably be placed on the retractor during surgery. Although the nerve itself cannot readily be palpated in the operating room, the psoas muscle can be. It lies lateral to the external iliac artery, and identification of the vessel by its pulse will lead the examining finger laterally to the muscle. An additional type of injury that can occur is entrapment of the iliohypogastric or ilioinguinal nerves in the lateral closure of a transverse incision (Fig. 10).55 Transection of one of these nerves can lead to an area of anesthesia in its distribution, and trapping one in a suture can give rise to pain in the lower abdomen or groin.
These nerves lie medial to the anterior superior iliac spine, first between the layers of the transversus abdominis and internal oblique muscles, and then, more medially, come to lie between the internal oblique and external oblique. Although most surgeons fail to notice them during the lateral extension of a transverse incision, they are sometimes visible in the lateral aspects of the wound and should be looked for and avoided when seen. Minor sensory abnormalities can arise when the nerve that innervates the abdominal skin, and that accompanies the blood vessels running between the rectus muscle and its sheath to reach the skin, is transected during elevation of the fascia off of the muscle in a Pfannenstiel incision. Because of the extensive overlap of dermatomes here, this transection is usually not a problem, but it can cause troublesome loss of sensation above the incision.

10. Laufman H: Current use of skin and wound cleansers and antiseptics. Am J Surg 157: 359, 1989
11. Madden JE, Edlich RF, Custer JR et al: Studies in the management of the contaminated wound. IV. Resistance to infection of surgical wounds made by knife, electrosurgery and laser. Am J Surg 119: 222, 1970
26. Fagniez PL, Hay JM, Lacaine F et al: Abdominal midline incision closure: A multicentric randomized prospective trial of 3,135 patients, comparing continuous vs. interrupted polyglycolic acid sutures. Arch Surg 120: 1351, 1985
The November 1988 worm perpetrated by Robert T. Morris. The worm was a program which took advantage of bugs in the Sun Unix sendmail program, VAX programs, and other security loopholes to distribute itself to over 6000 computers on the Internet. The worm itself had a bug which made it create many copies of itself on machines it infected, which quickly used up all available processor time on those systems. Some call it "The Great Worm" in a play on Tolkien (compare elvish, elder days). In the fantasy history of his Middle Earth books, there were dragons powerful enough to lay waste to entire regions; two of these (Scatha and Glaurung) were known as "the Great Worms". This usage expresses the connotation that the RTM hack was a sort of devastating watershed event in hackish history; certainly it did more to make non-hackers nervous about the Internet than anything before or since.
<urn:uuid:57e09f54-2046-483e-b036-bc1fc12edd96>
CC-MAIN-2016-26
http://hyperdictionary.com/computing/internet+worm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971289
191
2.515625
3
I don't know if this is the right forum because it involves trig stuff, so mods, feel free to move it. I was just going through some work on the topic 'area between two curves' and I was given the equations f(x) = sin x and g(x) = sin(2x), with a = 0 and b = 'pi'. The solution states that the intersection point can be found as sin x = sin(2x), then x = 'pi' - 2x, and hence x = 'pi'/3. I was wondering if someone could show me step by step how they derived the intersection point, because I never did trig before. Thanks in advance
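One way to recover the step the poster asks about (not part of the original post, just a sketch): sine is symmetric about pi/2, so sin θ = sin(pi − θ), and two angles have equal sines exactly when they differ by a full turn or are supplementary up to a full turn:

```latex
\sin x = \sin 2x
\;\Longrightarrow\;
2x = x + 2k\pi
\quad\text{or}\quad
2x = \pi - x + 2k\pi, \qquad k \in \mathbb{Z}.
```

The first family gives x = 2k\pi, i.e. x = 0 on [0, pi]. The second gives 3x = pi + 2k\pi, so x = pi/3 for k = 0, which is exactly the book's step x = pi − 2x, and x = pi for k = 1. So on [0, pi] the curves intersect at x = 0, pi/3, and pi, and pi/3 is the interior crossing needed to split the area integral.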
<urn:uuid:cfe98c74-949c-4d3b-a5cd-dd0e649aa229>
CC-MAIN-2016-26
http://mathhelpforum.com/calculus/92273-quick-explaination.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.964355
161
3.59375
4
Mercury is the closest planet to the Sun and, due to its proximity, is not easily seen except during twilight. For every two orbits of the Sun, Mercury completes three rotations about its axis, and up until 1965 it was thought that the same side of Mercury constantly faced the Sun. Thirteen times a century Mercury can be observed from the Earth passing across the face of the Sun in an event called a transit; the next will occur on 9th May 2016.

Mercury Planet Profile

Mass: 3.29 × 10^23 kg (0.06 Earths)
Orbit Distance: 57,909,227 km (0.39 AU)
Orbit Period: 88 days
Surface Temperature: -173 to 427°C
First Record: 14th century BC
Recorded By: Assyrian astronomers

Facts about Mercury

Quick Mercury Facts
- Mercury does not have any moons or rings.
- Your weight on Mercury would be 38% of your weight on Earth.
- A day on the surface of Mercury lasts 176 Earth days.
- A year on Mercury takes 88 Earth days.
- Mercury has a diameter of 4,879 km, making it the smallest planet.
- It's not known who discovered Mercury.

Detailed Mercury Facts

A year on Mercury is just 88 days long. One solar day (the time from noon to noon on the planet's surface) on Mercury lasts the equivalent of 176 Earth days, while the sidereal day (the time for one rotation relative to a fixed point) lasts 59 Earth days. Mercury is nearly tidally locked to the Sun, and over time this has slowed the rotation of the planet to almost match its orbit around the Sun. Mercury also has the highest orbital eccentricity of all the planets, with its distance from the Sun ranging from 46 to 70 million km.

Mercury is the smallest planet in the Solar System. One of five planets visible with the naked eye, Mercury is just 4,879 kilometres across its equator, compared with 12,742 kilometres for the Earth.

Mercury is the second densest planet. Even though the planet is small, Mercury is very dense: each cubic centimetre has a mass of 5.4 grams, with only the Earth having a higher density.
This is largely due to Mercury being composed mainly of heavy metals and rock.

Mercury has wrinkles. As the iron core of the planet cooled and contracted, the surface of the planet became wrinkled. Scientists have named these wrinkles lobate scarps; they can be up to a mile high and hundreds of miles long.

Mercury has a molten core. In recent years scientists from NASA have come to believe that the solid iron core of Mercury could in fact be molten. Normally the core of a smaller planet cools rapidly, but after extensive research the results were not in line with those expected from a solid core. Scientists now believe the core contains a lighter element such as sulphur, which would lower the melting temperature of the core material. It is estimated that Mercury's core makes up 42% of its volume, while the Earth's core makes up 17%.

Mercury is only the second hottest planet. Despite being further from the Sun, Venus experiences higher temperatures. The surface of Mercury that faces the Sun sees temperatures of up to 427°C, while on the opposite side temperatures can be as low as -173°C. This is because the planet has no atmosphere to help regulate the temperature.

Mercury is the most cratered planet in the Solar System. Unlike many other planets, which "self-heal" through natural geological processes, the surface of Mercury is covered in craters caused by numerous encounters with asteroids and comets. Most Mercurian craters are named after famous writers and artists. Any crater larger than 250 kilometres in diameter is referred to as a basin. The Caloris Basin, the largest impact crater on Mercury at approximately 1,550 km in diameter, was discovered in 1974 by the Mariner 10 probe.

Only two spacecraft have ever visited Mercury. Owing to its proximity to the Sun, Mercury is a difficult planet to visit. During 1974 and 1975 Mariner 10 flew by Mercury three times, mapping just under half of the planet's surface.
On 3rd August 2004 the Messenger probe was launched from Cape Canaveral Air Force Station; it was the first spacecraft to visit Mercury since the mid-1970s.

Mercury is named for the Roman messenger to the gods. The exact date of Mercury's discovery is unknown, as it pre-dates its first historical mention, one of the earliest being by the Sumerians around 3,000 BC.

Mercury has an atmosphere (sort of). Mercury has just 38% of the gravity of Earth; this is too little to hold on to its atmosphere, which is blown away by solar winds. However, while gases escape into space, they are constantly replenished by those same solar winds, by radioactive decay, and by dust from micrometeorites.
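The 176-day solar day, 59-day sidereal day, and 88-day year quoted above are not independent numbers: for a planet that rotates in the same direction as it orbits, the solar day follows from 1/T_solar = 1/T_sidereal − 1/T_orbit. A quick sketch using the approximate period values from the text:

```python
# Derive Mercury's solar day (noon to noon) from its sidereal rotation
# period and its orbital period, both expressed in Earth days.
# For a prograde rotator: 1/T_solar = 1/T_sidereal - 1/T_orbit.

SIDEREAL_DAY = 58.65    # Earth days: one rotation relative to the stars
ORBITAL_PERIOD = 87.97  # Earth days: one trip around the Sun

solar_day = 1.0 / (1.0 / SIDEREAL_DAY - 1.0 / ORBITAL_PERIOD)
print(f"Mercury's solar day: {solar_day:.0f} Earth days")  # ~176
```

Because the rotation and orbital periods are so close (the 3:2 resonance), the subtraction leaves a tiny number, and its reciprocal balloons to roughly two Mercury years, which is why a single sunrise-to-sunrise day outlasts the planet's year.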
<urn:uuid:565ed02a-4537-4839-8be1-8b3040379270>
CC-MAIN-2016-26
http://space-facts.com/mercury/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950285
1,070
3.765625
4
The way the Upsizing Wizard converts data types can have an enormous effect on the Access user interface and application. The data-type conversion can even cause the application to fail or to produce incorrect results. The Upsizing Wizard converts all Access text data types (i.e., fixed-length character) to SQL Server variable-length character (varchar) data types. Generally, because Access text fields store white space and varchar fields don't, this conversion from text to varchar character types is beneficial—most text fields have at least as much white space as they have characters. For example, in Access, the data type of the Employee table's LastName field is text(20), and the upsizing process converts it to a SQL Server data type of varchar(20). If the average last-name length is nine characters, the storage requirement is 10 bytes—9 bytes for the name, plus 1 byte for the length of the data in storage. A problem occurs with text fields that don't usually have much white space, such as the CustomerID field of the Orders table in the original Northwind database. This field was an Access text(5) data type and was mapped to a SQL Server varchar(5). Every entry in the Orders table has a CustomerID, which is always five characters, so the field is always completely filled—it has no extra white space. After CustomerID is converted to a SQL Server varchar(5), each CustomerID requires 6 bytes of storage (5 for data plus the 1 length byte), plus extra processing cycles to decode and encode a variable-length character field each time the data moves between storage and memory. Although this combination of wasted storage space and extra processing cycles doesn't sound like a big problem for small data sets, it could quickly become a performance drag and storage-capacity problem for large data sets.
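The byte arithmetic above can be sketched as follows. This is an illustration of the article's simplified model, not SQL Server's actual on-disk format: it follows the article's one-length-byte bookkeeping (SQL Server actually stores a 2-byte length prefix per varchar value), and the sample values are hypothetical.

```python
# Storage cost under the article's simplified model:
# a fixed-width text(n) field always occupies its declared n bytes;
# a varchar occupies len(data) + 1 bytes (1 byte for the stored length).

def fixed_bytes(declared_width: int) -> int:
    return declared_width

def varchar_bytes(value: str) -> int:
    return len(value) + 1  # data bytes + 1 length byte

# LastName text(20) -> varchar(20): an average 9-character name saves space.
last_name = "Davenport"              # hypothetical 9-character surname
print(fixed_bytes(20))               # 20 bytes as fixed-width text(20)
print(varchar_bytes(last_name))      # 10 bytes as varchar(20)

# CustomerID text(5) -> varchar(5): always exactly 5 characters, so the
# length byte is pure overhead, on top of encode/decode cycles per access.
customer_id = "ALFKI"                # Northwind-style 5-character ID
print(fixed_bytes(5))                # 5 bytes fixed
print(varchar_bytes(customer_id))    # 6 bytes as varchar(5)
```

The pattern generalizes: varchar pays off when typical data is noticeably shorter than the declared width, and loses when the field is always full.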
<urn:uuid:3465e75d-4bef-4d2b-8743-55801690886a>
CC-MAIN-2016-26
http://sqlmag.com/systems-administrator/data-type-conversion
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.900289
385
2.625
3
Traditional, Yet Unique Quilt Designs
Quilts with this kind of overall pattern were typically stitched solo: the intricate appliqué had to be heavily basted before the fine appliqué stitching was done, and the appliqué had to lie completely flat to avoid distortion. "These quilts have their basis in the traditional European quilt, but when you see them you know they're Hawaiian," says Carolyn. "The Hawaiians found a way to take the traditional and make it unique." For information and pictures of a variety of Hawaiian quilts, visit the Ethnology website of Honolulu's Bishop Museum. To learn more about the IQSCM's extensive collection of more than 2,300 quilts and the history behind them, visit quiltstudy.org. Photo courtesy of International Quilt Study Center, University of Nebraska-Lincoln, 2005.015.0001.
<urn:uuid:189b49db-c0d2-4620-8a38-72d6c2b8d101>
CC-MAIN-2016-26
http://www.allpeoplequilt.com/magazines-more/american-patchwork-quilting/international-quilt-museum-0?page=0%2C2
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.943884
190
2.59375
3