text
stringlengths
174
640k
id
stringlengths
47
47
dump
stringclasses
15 values
url
stringlengths
14
1.94k
file_path
stringlengths
125
139
language
stringclasses
1 value
language_score
float64
0.65
1
token_count
int64
49
156k
score
float64
2.52
5.34
int_score
int64
3
5
Credits: STUART DRYDEN/CALGARY SUN/QMI AGENCY There are two Christmases. One is the Christian holy day, the other a Western cultural celebration full of snowmen, chestnuts, bright lights, eggnog, warm fires, giftwrap, pop songs and cheer. There's no denying the cultural event has its origins in the Christian holiday. Still, the cultural side is no longer inextricably bound to the religious side, if it ever was. Within Christianity there has long been tension between the spiritual meaning of the day and its secular celebration. To this day, there are Christians who worry the commercial and cultural elements that have attached themselves to the holiday represent a re-emergence of the day's pagan origins as a winter solstice ritual to coax back the disappearing sun. Most of our cultural Christmas traditions date only to 19th-century Victorian England. The greeting "Merry Christmas," which has become so controversial on public signs and on the lips of government clerks and store cashiers, was popularized in the 1840s by Charles Dickens and by early greeting card makers. It comes originally from a 16th-century song, "We Wish You a Merry Christmas," which sounds more like a pub tune than a church hymn with its good-natured demand for figgy pudding and "a cup of good cheer" and its insistence that "we won't go until we get some." Indeed, the cultural side of Christmas has so overwhelmed the spiritual side in the past couple of centuries that devout Christians have sometimes felt compelled to print T-shirts and bumper stickers or erect billboards reminding others that Jesus is "the reason for the season." So why my little sociological history of Christmas in Western countries? To demonstrate the silliness of those who seek to eradicate any and all public exhibitions and expressions of Christmas. Take Ashu Solo of Saskatoon. He has threatened to complain to the Saskatchewan human rights commission about the flashing of "Merry Christmas!" from the route signs on the front of public buses in his city. Solo claims to have been "extremely surprised, offended and angered," by the greeting. "This is not a Christian city ... Christmas messages on Saskatoon Transit buses make religious minorities, atheists and agnostics who do not celebrate Christmas feel excluded and like second-class citizens." He insists wishing passengers, motorists and bystanders a "Merry Christmas!" amounts to "a forcible attempt at Christian indoctrination." Thankfully, Saskatoon city council has rejected Solo's way-off-base complaint and voted to keep the bus-front messages. It is not clear, though, whether the province's human rights busybodies would be equally sensible. Just like the suburban Philadelphia principal who suspended a high school senior for dressing as Santa (because Santa is a saint who has no place in a public school) and the Texas school that banned candy canes because they represent shepherd's crooks (as in Jesus, the shepherd of souls), Solo misses the point. The joyful greetings, the decorated houses, the mall Santas, candies and trees are all from the cultural side of the holiday, not the religious side. You don't have to be Christian to enjoy them, so those outward trappings are inclusive, not exclusive. I would never go to Mumbai during Diwali - the Hindu festival of lights - or a Muslim country during Eid and demand an end to those celebrations' public expression. Indeed, I would do all I could to experience and enjoy those other cultures. The Ashu Solos of the world should open-mindedly do the same at Christmas. Christmas is a celebration. Whether it has religious significance for you or is simply a time to gather with family and friends for good food, gifts, charity and hopes of peace on earth, enjoy the day. Have a very Merry Christmas!
<urn:uuid:6f253f11-34a0-464b-a32e-a423610812a2>
CC-MAIN-2013-20
http://www.sunnewsnetwork.ca/sunnews/straighttalk/archives/2012/12/20121224-081203.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951798
793
2.59375
3
New Method to Manage Stress Responses for More Successful Tumor Removal Monday, January 30, 2012 TAU-developed drug treatment in clinical trials may improve the outcome for cancer surgery patients The week before and two weeks after surgery are a critical period for the long-term survival rate of cancer patients. Physiological and psychological stresses caused by the surgery itself can inhibit the body's immune responses, heightening vulnerability to tumor progression and spreading. Now a new clinical trial by Prof. Shamgar Ben-Eliyahu of Tel Aviv University's School of Psychological Sciences and Dr. Oded Zmora will combine two medications originally used to treat excessive stress and inflammatory responses at Israel's Tel Hashomer Sheba Medical Center. The trial is the culmination of 15 years of research on the connection between the body's stress responses, immune functions, and tumor metastasis — the process of cancer cells spreading to new tissue. In pre-clinical studies on animal models, long-term post-operative survival rates increased by up to 300 percent. "Given our current understanding of how psychological and physiological stress help tumor cells to spread, we can now intervene in a simple and effective manner," says Prof. Ben-Eliyahu, whose research has been published in a number of journals, including the Journal of Immunology,PLoS One, and Annals of Surgery. The mind-body connection Though critical for the treatment of cancerous tumors, surgery can cause untold stress on the patient. The psychological stress and anxiety surrounding the surgery itself is obvious, but physiological processes that occur due to the surgical removal of the primary tumor also cause the body to release stress hormones that markedly inhibit the functioning of the immune system. And just when the body is lowering its defenses, tumor cells are shifting into high gear. Hormones like prostaglandins and catecholamines, which weaken the body's immune defence, also directly strengthen cancer cells, making them more aggressive and efficient in their invasion of new tissues throughout the body, Prof. Ben-Eliyahu explains. "Through selection, similar to evolutionary processes, tumor cells have acquired a mechanism to synchronize the timing of their progression when the body is more vulnerable to metastasis. When the entire body is under stress, they metastasize because they have a greater chance of surviving," he says. Prof. Ben-Eliyahu's clinical approach addresses this problem, hindering tumor metastasis by addressing the patient's anxiety and physiological stress responses to surgery. The two-drug cocktail, which includes a generic version of a beta-adrenergic antagonist and a COX2 inhibitor — used to treat hypertension and anxiety, and to inhibit inflammation and pain — will be administered to patients over a twenty day period before, during and after surgery. Saving on healthcare costs For the first phase of the trial, Prof. Ben-Eliyahu and his team have already begun to recruit the 400 patients they want to include. The researchers are seeking grants and outside funding for a trial that is a crucial step in testing this treatment and hopefully making it widely available. Typically, he says, pharmaceutical companies have strong financial incentives to support clinical trials, knowing that they could benefit from a new drug. In this case, however, the trial is based on medications that have been previously approved, are safe, inexpensive, and already widely used. Prof. Ben-Eliyahu is currently aiming to recruit the necessary funds to conduct this clinical trial without the help of commercial resources. "In the broader scheme of health and healthcare systems, we can help save lives and a lot of money," he notes, pointing out that with this drug treatment, governments and individuals will spend less on the long-term care of cancer patients, with fewer numbers experiencing tumor recurrence. For more cancer research news from Tel Aviv University, click here.
<urn:uuid:0c349541-0eff-409e-92f5-5442c3f1bf06>
CC-MAIN-2013-20
http://www.aftau.org/site/News2/1965580851?page=NewsArticle&id=15919&news_iv_ctrl=-1
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940282
790
2.671875
3
Tyzzer's disease is an illness that can cause cell death in the liver and intestinal tract of many small mammals including rabbits, guinea pigs, hamsters, and gerbils. It has also been reported less commonly in rats, mice, cats, dogs, and horses. What causes Tyzzer's disease? Tyzzer's disease is caused by the bacteria, Clostridium piliforme (C. piliforme), formerly called Bacillus piliformis. C. piliforme lives in the intestine and is spread from animal to animal through fecal contamination of food and water. The bacteria can produce spores, which can survive for years in the environment, and are very resistant to heat and many disinfectants. The spores are shed in the feces of infected animals. What are the signs of Tyzzer's disease? Animals with Tyzzer's disease often have watery diarrhea, staining around the anal area, depression, dehydration, lethargy, and scruffy hair coats. It is more frequently and likely to cause acute death (within 48 hours of the first signs) in young animals or those stressed by overcrowding, poor hygiene, extreme environmental temperatures and humidity, parasitic infections, or malnutrition. How is Tyzzer's disease diagnosed? Unlike most other disease-causing bacteria, C. piliforme only grows inside of cells, and therefore will not grow on routine culture media in a laboratory. A blood test is available to test for antibodies to C. piliforme, but false positive test results can occur. Diagnosis is often made post-mortem, by using specific stains on tissues of the intestine and liver and examining them microscopically. The disease can also affect the heart and central nervous system. How is Tyzzer's disease treated? There is no specific therapy that will kill C. piliforme, although tetracycline is often administered. Treatment is generally aimed at supportive care including fluids, good nutrition, and providing the optimal temperature and humidity. In young and stressed animals, treatment is usually unsuccessful. How can Tyzzer's disease be prevented? Conditions that cause stress should be avoided, especially in young animals during weaning. Extreme care should be taken to assure animals have a proper environment, diet, and treatment of any parasitic infections. Healthy animals should be separated from any animals showing signs of the disease. There is no vaccine for Tyzzer's disease. The bacteria and spores can be killed using a 1:10 dilution of household bleach and water (½ cup of bleach to 5 cups of water).
<urn:uuid:a7c00e89-1dde-44af-ac16-c02b8648a433>
CC-MAIN-2013-20
http://www.peteducation.com/article.cfm?c=18+1799&aid=2846
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94712
535
3.640625
4
The April issue of Scientific American includes an exclusive excerpt from Bill McKibben's new book, Eaarth: Making a Life on a Tough New Planet, plus an interview that challenges his assumptions. Expanded answers to key interview questions, and additional queries and replies, appear here. McKibben is a scholar in residence at Middlebury College in Vermont and is a co-founder of the climate action group, 350.org. He argues that humankind, because of its actions, now lives on a fundamentally different world, which he calls Eaarth. This celestial body can no longer support the economic growth model that has driven society for the past 200 years. To avoid its own collapse, humankind must instead seek to maintain wealth and resources, in large part by shifting to more durable, localized economies—especially in food and energy production. [A Scientific American interview with McKibben follows.] You entitled your book Eaarth, because you claim that we have permanently altered the planet. How so? And why should we change our ways now? Well, gravity still applies. But fundamental characteristics have changed, like the way the seasons progress, how much rain falls, the meteorological tropics—which have expanded about two degrees north and south, making Australia one big fire zone. This is a different world. We underestimated how finely balanced the planet's physical systems are. Few people have come to grips with this. The perception, still, is that this is a future issue. It's not—it's here now. Is zero growth necessary, or would "very slight" growth be sustainable? A specific number is not part of the analysis. I'm more interested in trajectories: What happens if we move away from growth as the answer to everything and head in a different direction? We've tried very little else. We can measure society by other means, and when we do, the world can become much more robust and secure. You start having a food supply you can count on, and an energy supply you can count on, and know they aren't undermining the rest of the world. You start building communities that are strong enough to count on, so individual accumulation of wealth becomes less important. If "growth" should no longer be our mantra, then what should it be? We need stability. We need systems that don't rip apart. Durability needs to be our mantra. The term "sustainability" means essentially nothing to most people. "Maintenance" is not very flashy. "Maturity" would be the word we really want, but it's been stolen by the AARP. So durability is good; durability is a virtue. In part, you're advocating a return to local reliance. How small is "local"? And can local reliance work only in certain places? We'll figure out the sensible size. It could be a town, a region, a state. But to find the answer, we have to get the incredibly distorting subsidies out of our current systems. They send all kinds of bad signals about what we should be doing. In energy we've underwritten fossil fuel for a long time; unbelievable gifts to the "clean coal" industry, and on and on. It's even more egregious in agriculture. Most of the United States's cropland is devoted to growing corn and soybeans--not because there's an unbelievable demand to eat corn and soybeans, but because there are federal subsidies to grow them—written into the law by huge agricultural companies who control certain senators. Once subsidies wither, we can figure out what scale of industry makes sense. It will make sense to grow a lot of things closer to home. It's plausible to "go local" in, say, your home state of Vermont, where residents have money and are forward-looking—and their basic needs are met. But what about people in poor places; don't they need outside help? Absolutely. The rich nations have screwed up the climate. It's our absolute responsibility to figure out how to allow poor people to have something approaching a decent life. What happens to the poorest and most vulnerable people in the world? They get dengue fever. The fields they depend on are ruined by drought or flood. The glaciers that feed the Ganges will be gone, yet 400 million people depend on that water. We are not helping the poor by destabilizing the planet's systems. Meantime, what works best for them? Local, labor-intensive, low-input agriculture: It provides jobs, security, stability and food, and helps make local ecological systems robust enough to withstand the damage that's coming. U.S. debt is rising to insane levels because the country has lived beyond its means, which supports your call to switch from growth to maintenance. But how do countries like the U.S. get out of debt without growing? Do we need a transition period where growth eliminates debt, and then we embrace durability? My sense is that all of this will flow logically from the physics and chemistry of the world we're moving into, just like the centralized industrial model flowed logically from the physics and chemistry of the fossil-fueled world. The primary political question is: Can we make change happen fast enough to avoid all-out collapses that are plausible, even likely, under the patterns we're operating in now? How do we force global changes that move these transitions more quickly than they want to move? We have an incredibly small amount of time; we have already passed the threshold points in some respects. We best get to work.
<urn:uuid:919585eb-f1f1-4d32-80a4-202da545b609>
CC-MAIN-2013-20
http://www.scientificamerican.com/article.cfm?id=bill-mckibben-question-and-answer
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956115
1,136
2.8125
3
Though not an anti-aging potion, it could improve the lives of older people. DALLAS MORNING NEWS DALLAS -- Scientists have identified an anti-aging hormone in mice that could one day help explain what governs the human life span. Scientists in the United States and Japan described the hormone and the mice -- which can live to the ripe old age of 3, about a year longer than the average mouse -- in a report released online this week by the journal Science. The findings are exciting because they support a well-documented method to slow aging in other animal species, said George Martin, a pathologist at the University of Washington who commented on the work. "It's further evidence that there's a mechanism that's modulating life span that works throughout the living world," Martin said. "And it might even apply to us." What it won't do Researchers are quick to warn that the hormone will not translate into an anti-aging potion. Instead, it potentially could be useful in slowing the decline in particular tissues such as bones or the brain, thus improving the lives of the elderly. "I'm not very positive about using it to extend life span," said Makoto Kuro-o. He is the molecular biologist at the University of Texas Southwestern Medical Center at Dallas who led the study. "But it might be useful in treatment of age-related disease." Scientists began to focus on the naturally occurring hormone several years ago. Kuro-o, then working in Japan, was studying a breed of mice that aged quickly. The accelerated aging was eventually attributed to a defect in a gene that the scientists named Klotho, after the mythical Greek goddess said to spin the thread of life. Kuro-o reasoned that if a defect in the Klotho gene sped up aging, upping the gene's activity through genetic engineering might lengthen life. His hunch was correct. Instead of living a normal two years, the genetically engineered mice typically lived 2 1/2 years.
<urn:uuid:4fba29c0-7fac-48ee-bb5b-239142384d10>
CC-MAIN-2013-20
http://www.vindy.com/news/2005/aug/27/lifespan-hormone-in-mice-has-potential-to-slow/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96512
411
2.703125
3
(Also known as CARDINAL JULIAN) Born at Rome, 1398; died at Varna, in Bulgaria 10 November, 1444. He was one of the group of brilliant cardinals created by Martin V on the conclusion of the Western Schism, and is described by Bossuet as the strongest bulwark that the Catholics could oppose to the Greeks in the council of Florence. He was of good family and was educated at Perugia, where he studied Roman law with such success as to be appointed lecturer there, Domenico Capranica and Nicholas of Cusa being among his pupils. When the schism was ended by the universal recognition of Martin V as pope, Giuliano returned to Rome, where he attached himself to Cardinal Branda. Suggestions of wide reform were rife, and the principles of the outward unity of the Church and its reformation from within became the ideals of his life. In 1419 he accompanied Branda on his difficult mission to Germany and Bohemia, where the Hussites were in open rebellion. The cardinal thought so highly of his services that he used to say that, if the whole Church were to fall into ruin, Giuliano would be equal to the task of rebuilding it. He had all the gifts of a great ruler, commanding intellectual powers, and great personal charm. He was a profound scholar and a devoted Humanist, while his private life was marked by sanctity and austerity. In 1426 Martin V created him cardinal and sent him to Germany to preach a crusade against the reformers who were committing grievous excesses there. After the failure of this appeal to arms Cesarini was made President of the Council of Basle in which capacity he successfully resisted the efforts of Eugene IV to dissolve the council, though later (1437) he withdrew from the opposition, when he perceived that they were more anxious to humiliate the pope than to accomplish reforms. When the reunited council assembled at Ferrara he was made head of the commission appointed to confer with the Hussites and succeeded at least in winning their confidence. In 1439, owing to a plague, the council was transferred from Ferrara to Florence, where Cesarini continued to play a prominent part in the negotiations with the Greeks. After the successful issue of the council, Cesarini was sent as papal legate to Hungary (1443) to promote a national crusade against the Turks. He was opposed to the peace with Ladislaus, King of Hungary and Poland, and signed at Szegedin with Sultan Amurath III, and persuaded the former to break it and renew the war. It was an unfortunate step and resulted in the disastrous defeat of the Christian army at Varna in 1444, when Cardinal Giuliano was slain in the flight. His two well-known letters to Aeneas Sylvius about the pope's relations to the Council of Basle are printed among the works of Pius II (Pii II Opera Omnia, Basle 1551, p. 64). VESPASIANO DA BISTICCI, Vite di Uomini illustri, first printed at Rome, 1763; also printed in MAI, Spicilegium Romanum, I, 166-184; and in the new ed. of VESPASIANO (Bologna, 1892), I. JENKINS, The Last Crusader: The Life and Times of Cardinal Julian (London, 1861); PASTOR, History of the Popes, tr. ANTROBUS (London, 1899), I; GREGOROVIUS, History of the City of Rome in the Middle Ages, tr. HAMILTON (London, 1900), VII, Part I, Bk. XIII, i, ii; CHEVALIER, Rep.: Bio bibl. (Paris, 1905-1907) gives an extensive bibliography. APA citation. (1908). Giuliano Cesarini. In The Catholic Encyclopedia. New York: Robert Appleton Company. http://www.newadvent.org/cathen/03546a.htm MLA citation. "Giuliano Cesarini." The Catholic Encyclopedia. Vol. 3. New York: Robert Appleton Company, 1908. <http://www.newadvent.org/cathen/03546a.htm>. Transcription. This article was transcribed for New Advent by Ted Rego. Ecclesiastical approbation. Nihil Obstat. November 1, 1908. Remy Lafort, S.T.D., Censor. Imprimatur. +John Cardinal Farley, Archbishop of New York. Contact information. The editor of New Advent is Kevin Knight. My email address is feedback732 at newadvent.org. (To help fight spam, this address might change occasionally.) Regrettably, I can't reply to every letter, but I greatly appreciate your feedback — especially notifications about typographical errors and inappropriate ads.
<urn:uuid:00bf7df0-d5fb-4e1f-8b46-3ba5d351c222>
CC-MAIN-2013-20
http://www.newadvent.org/cathen/03546a.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.971119
1,021
3.25
3
Tucked inside Carl Zimmer's wonderful and thorough feature on de-extinction, a topic that got a TEDx coming out party last week, we find a tantalizing, heartbreaking anecdote about the time scientists briefly, briefly brought an extinct species back to life. The story begins in 1999, when scientists determined that there was a single remaining bucardo, a wild goat native to the Pyrenees, left in the world. They named her Celia and wildlife veterinarian Alberto Fernández-Arias put a radio collar around her neck. She died nine months later in January 2000, crushed by a tree. Her cells, however, were preserved. Working with the time's crude life sciences tools, José Folch led a Franco-Spanish team that attempted to bring the bucardo, as a species, back from the dead. It was not pretty. They injected the nuclei from Celia's cells into goat eggs that had been emptied of their DNA, then implanted 57 of them into different goat surrogate mothers. Only seven goats got pregnant, and of those, six had miscarriages. Which meant that after all that work, only a single goat carried a Celia clone to term. On July 30, 2003, the scientists performed a cesarean section. Here, let's turn the narrative over to Zimmer's story: As Fernández-Arias held the newborn bucardo in his arms, he could see that she was struggling to take in air, her tongue jutting grotesquely out of her mouth. Despite the efforts to help her breathe, after a mere ten minutes Celia's clone died. A necropsy later revealed that one of her lungs had grown a gigantic extra lobe as solid as a piece of liver. There was nothing anyone could have done. A species had been brought back. And ten minutes later it was gone again. Zimmer continues The notion of bringing vanished species back to life--some call it de-extinction--has hovered at the boundary between reality and science fiction for more than two decades, ever since novelist Michael Crichton unleashed the dinosaurs of Jurassic Park on the world. For most of that time the science of de-extinction has lagged far behind the fantasy. Celia's clone is the closest that anyone has gotten to true de-extinction. Since witnessing those fleeting minutes of the clone's life, Fernández-Arias, now the head of the government of Aragon's Hunting, Fishing and Wetlands department, has been waiting for the moment when science would finally catch up, and humans might gain the ability to bring back an animal they had driven extinct. "We are at that moment," he told me. That may be. And the tools available to biologists are certainly superior. But there's no developed ethics of de-extinction, as Zimmer elucidates throughout his story. It may be possible to bring animals that humans have killed off back from extinction, but is it wise, Zimmer asks? "The history of putting species back after they've gone extinct in the wild is fraught with difficulty," says conservation biologist Stuart Pimm of Duke University. A huge effort went into restoring the Arabian oryx to the wild, for example. But after the animals were returned to a refuge in central Oman in 1982, almost all were wiped out by poachers. "We had the animals, and we put them back, and the world wasn't ready," says Pimm. "Having the species solves only a tiny, tiny part of the problem." Maybe another way to think about it, as Jacquelyn Gill argues in Scientific American, is that animals like mammoths have to perform (as the postmodern language would have it) their own mammothness within the complex social context of a herd. When we think of cloning woolly mammoths, it's easy to picture a rolling tundra landscape, the charismatic hulking beasts grazing lazily amongst arctic wildflowers. But what does cloning a woolly mammoth actually mean? What is a woolly mammoth, really? Is one lonely calf, raised in captivity and without the context of its herd and environment, really a mammoth? Does it matter that there are no mammoth matriarchs to nurse that calf, to inoculate it with necessary gut bacteria, to teach it how to care for itself, how to speak to other mammoths, where the ancestral migration paths are, and how to avoid sinkholes and find water? Does it matter that the permafrost is melting, and that the mammoth steppe is gone?... Ultimately, cloning woolly mammoths doesn't end in the lab. If the goal really is de-extinction and not merely the scientific equivalent of achievement unlocked!, then bringing back the mammoth means sustained effort, intensive management, and a massive commitment of conservation resources. Our track record on this is not reassuring. In other words, science may be able to produce the organisms, but society would have to produce the conditions in which they could flourish.
<urn:uuid:b4c4f96b-307a-4d35-9c3d-b1ec4ed02ca7>
CC-MAIN-2013-20
http://www.theatlantic.com/technology/archive/2013/03/the-10-minutes-when-scientists-brought-a-species-back-from-extinction/274118/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.972978
1,030
2.859375
3
Ometepe is the name of two small islands in the middle of Lake Nicaragua (Nicaragua), the largest lake in Central America. These islands were formed by the peaks of two volcanoes and connected by a short isthmus. This area is part of the Greater Nicoya subregion, one of the richest archaeological zones in Central America. At the time of the Spanish invasion of Nicaragua, in 1524, the Greater Nicoya was occupied by three aboriginal groups: Mangue, Orotiña and Nicarao or Nicaragua. A fourth group called Bagaces occupied a smaller portion of Guanacaste, in northwestern Costa Rica. The Nicarao (a Nahuatl-speaking group, a language spoken in Central Mexico) arrived and occupied the Isla of Omotepe probably during the Postclassic period. The word Ometepe is, in fact, a Nahuatl word that means “two mountains” referring to the two volcanic peaks that dominate the islands. However, the first occupation of the island dated at least as long ago as 1300 BC, during the local Dinarte phase. Archaeological Research at Ometepe The first archaeological excavations in the island of Ometepe were carried out in 1881. Archaeologist J. Bransford excavated near the town of Moyogalpa, in the Luna Hacienda, recovering many funerary urns dating between 800 and 1350 AD. At the beginning of the 1960s, archaeologists Gordon Willey and Albert Norweb excavated the site of La Cruz, finding huge amounts of ceramic fragments and turtle bones. More long-term, systematic works were carried out in Ometepe by the German archaeologist Wolfgang Haberland, who in 1958 excavated a tomb in Moyogalpa which he considered to be the burial of a shaman. Later, with the help of Peter Schmidt, he surveyed more that 50 sites, finding many petroglyphs; carried out test excavations in 10 of the sites. One of these was the important cemetery site of Los Angeles. Current archaeological research in the island are being carried out by the Ometepe Archaeological Project, directed by Suzanne Baker, a passionate, activist archaeologist. To date, the project has recorded more than 70 new sites with more than 1000 petroglyphs; the fieldwork has a large and established volunteer component. Lange, Frederick W. (editor) 1996 Paths to Central American Prehistory. University of Colorado Press, Boulder. Lange F.W. and D. Stone (eds.), The Archaeology of Lower Central America. School of American Research, University of New Mexico, Albuquerque.
<urn:uuid:8e8e3a47-30f3-4ca1-9994-5c959ab4019e>
CC-MAIN-2013-20
http://archaeology.about.com/od/oterms/a/Ometepe-Island.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947399
556
3.484375
3
Choosing a Kitten If you're getting a kitten, try to choose one you can see with her mother and siblings. If you can, meet the father, too, although this is often not possible. Every kitten differs somewhat from his parents and siblings, but many personality and behavioral traits are inherited, and kittens learn a lot from their mothers. If the mother is calm and friendly with people, chances are her kittens will be, too. If mama cat is shy or un-friendly, her kittens might not be very social, either. How a kitten is handled also has a profound effect on his development and attitude. A kitten who is handled gently and frequently by different people from his first few days onward and who is exposed to other gentle animals will be more social throughout his life than kittens who are ignored or mistreated during their first few weeks. If he has been exposed to the sights, sounds, and smells of a normal household during this period, he'll be better adjusted and more confident than a kitten raised away from people. Ever wonder how kittens within a litter can be very different in color, coat length, body style, even personality? A female cat on the loose often mates with several males when she's in heat, so it's possible for kittens within a litter to have different daddies. Early handling doesn't negate a kitten's need to be with her mother and siblings, of course. Living in a feline family teaches a kitten to control and behave herself. She finds out that if she bites or scratches, the others retaliate or shun her. She learns that she can't always have what she wants. Kittens who are removed from their mom and siblings too early often fail to learn these lessons. Observe the kittens interacting with one another. A kitten should be confident and playful, but not a bully. A healthy kitten … - Is coordinated and shows no obvious physical problems. - Is solid and well proportioned. - Is not excessively thin for her breed. - Is not pot-bellied (which might indicate roundworms). - Has soft, glossy fur. - Is free of fleas. - Has no red, itchy, or bald spots. - Has a clean rectal area with no sign of tapeworm or diarrhea. - Has bright, clear eyes. - Has pink gums and healthy-smelling breath. - Breathes normally with no sneezing, coughing, or wheezing and has no nasal discharge. - Has clear eyes, fully open and free of tearing and discharge. - Has clean ears, free of odor, inflammation, dirty-looking buildup, or discharge. - Should be curious and willing to approach you or at least to be held and cuddled if he's more reserved. - Should show interest in a string or toy dragged or tossed on the floor. - Is happy and playful—unless she's asleep! A lethargic kitten might be ill, and a kitten who hides or reacts with hostility when you try to touch him will be a difficult pet. Ask the litter owner about the kittens, and be cautious if she can't tell you about each individual. That might mean the kittens haven't been handled much. More on: Pets Excerpted from The Complete Idiot's Guide to Getting and Owning a Cat � 2005 by Sheila Webster Boneham, Ph.D. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc. To order this book visit the Idiot's Guide web site or call 1-800-253-6476.
<urn:uuid:696dd460-1dc5-41c8-bc17-e10e821f56c5>
CC-MAIN-2013-20
http://life.familyeducation.com/cats/pets/45687.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952084
767
2.5625
3
As with any responsible reintroduction of captive or semi-captive animals back into the wild there are certain criteria that need to be followed which reflect the best interests of the individual animals released, as well as the wild indigenous populations. This process must remain dynamic in order to adjust to the dolphins’ needs and any environmental variables but the basic principles are outlined below: Full health assessment - ensuring that the animals are healthy enough for release and there is little or no chance of transmitting any known pathogen or disease to the wild dolphin population. Returning Tom and Misha to their native waters and or within their home range - it is believed that both Tom and Misha were captured near Izmir. Izmir is approximately 150 miles from where the animals are currently being held and well within their home range. Gate preparation so they are primed to depart the pen at release – this has been established by encouraging Tom and Misha to move through an artificial gate into the medical pen. Determining if the animals can effectively hear and echolocate – we are confident in their hearing abilities and monitor echolocation through the use of hydrophones. Establish and document that the animals can readily kill and eat live fish – transferring the dolphins’ diet from frozen fish to live fish is a slow and calculated process but essential for successful rehabilitation. Their captive diet has consisted of hand fed fish for many years and we must convert that to live fish over the coming months utilising the following process: Developing the animals’ strength and endurance – possibly the most important aspect of the dolphins’ rehabilitation. In captivity, dolphins become dependent on humans and work very little for their food compared to being in the wild, subsequently developing behaviours such as lulling at the surface for prolonged periods. In the wild dolphins are constantly swimming, not only for food, but for predator avoidance, playing, or migration. In captivity this behaviour is rendered useless and they tend to lose their physical stamina. Wild bottlenose dolphins swim at an average speed of 1.5-1.7 m/s with bursts of speed up to 8.3 m/s and frequently hold their breath for 20-40 seconds, and at times up to 6 minutes and more. Like a human in physical training for competition, dolphins must be muscular and fit in order to handle many different situations from attacking a school of sardines to avoiding sharks. The better shape the dolphins are in, the more likely they are to survive in the wild. Conditioning is undertaken with Misha and Tom in their sea pen by encouraging them to be constantly active, incorporating high energy actions such as bows (breaching the water), speed swims and more. As we consistently work with them on these activities they will become more fit and well-developed increasing their potential for success in the wild
<urn:uuid:c2f939ec-200a-4cb4-bb04-ecdd08df2006>
CC-MAIN-2013-20
http://www.bornfree.org.uk/campaigns/marine/hisaronu-dolphins/rehabilitation/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.965677
575
3.34375
3
Most importantly - If you, or anyone/thing else, believes they have been in contact with any poisonous materials, contact your local Poison Control Center. The American Association of Poison Control Centers has a great website - www.aapcc.org - with information about all types of poisoning. For the Galveston Area, the Southeast Texas Poison Control Center's (STPCC) website is www.utmb.edu/setpc. To contact STPCC: Mailing address: UTMB, 3.112 Trauma Building, Galveston, Tx 77555-1175 Emergency Numbers: (800)764-7661 (TX ONLY) and (409)765-1420 Animal Poison Control Center - www.napcc.aspca.org (888)426-4435 Oleanders contain a toxin called Cardenolide Glycosides. The toxin is mostly contained in the sap which is clear to slightly milky colored, and sticky. When ingested in certain quatities, this toxin can cause harm - and possibly death. The extremely bitter and nauseating taste of the sap (much like a rotten lemon) causes a mechanical reflex in the stomach which rejects and expels the vile substance. Although not impossible, a person or animal would have to have a strong stomach or no sense of taste for a dose of the toxin to be fatal. What can I do to avoid the a possible poisoning when working with Oleanders? Wash hands (and arms) thoroughly when finished working with the plant. Do not chew on any part of the plant. And do NOT use it as a skewer for food (or as a toothpick!). Are the fumes from burning Oleanders hazardous? Yes! The fumes from a burning Oleander is still very hazardous. Steer clear of the fumes and NEVER use the branches as firewood! What do I do if I accidently ingest some of the sap? Call the poison control center nearest you. What do I do if I see my pet chewing on the plant?Call your veterinarian immediately! What do I do if I see my pet chewing on the plant? Call your veterinarian immediately! What are some other poisonouse plants? Azaleas - Roman soldiers were poisoned from honey of azalea pontica. Rhododendrons - The poisonous compound is Actylandromdal found in the nectar, and produces depression of blood pressure, shock, and finally death
<urn:uuid:83b6e13a-5391-4a2d-92e4-27713ad22ffc>
CC-MAIN-2013-20
http://www.oleander.org/toxic.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.884748
508
2.578125
3
September 14, 2004 > Celebrating Hispanic Heritage Month Celebrating Hispanic Heritage Month by Susana Nuñez September 15 marks the beginning of National Hispanic Heritage Month, a month dedicated to raising public awareness and appreciation of Hispanic/ Latino culture. The observation was initiated in 1968 when Congress authorized President Lyndon B. Johnson to proclaim a week in September as National Hispanic Heritage Week. In 1988, the observance was expanded to include the entire 31-day period. The month, which runs through October 15, focuses directly on the ingenuity, creativity, cultural, and political experiences of Hispanic Americans. In addition, September 15 celebrates the anniversary of independence for five Latin American countries: Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua. Also, Mexico declared its independence on September 16, and Chile on September 18. As a people who have contributed greatly to American culture, Hispanics/Latinos are the largest ethnic minority. According to a United States Census release, as of July 1, 2003, Hispanics/Latinos make up 13.7 percent of the United States population. That's over 39.9 million! By 2050, the percentage is expected to rise to 24 percent, at 102.6 million. Around the bay, activities for everyone to enjoy with their Hispanic/Latino friends and neighbors will be held in the following weeks. For a taste of what's to come, the following is a list of events to celebrate and enhance your knowledge of this influential culture. Alameda County Library Presents Latino Cultural Arts Series Event Location: Alameda County Library (check for local branch) Time: library hours US Bank graciously provided funding to the Foundation for a series of programs honoring National Hispanic Heritage Month. The programs feature a variety of wonderful family-oriented events at Alameda County Library Branches. Mr. Juan L. Sanchez entertains audiences with Musica de Las Americas/Music of the Americas, a bilingual program of songs and stories from all over the Americas. Ms. Olga Loya, a master storyteller, with stories full of mystery, suspense and humor. Joe and Ronna Leon delight audiences with their popular Caterpillar Puppets and PINATA program. For more information, contact the Library at (510) 745-1514. Viva Las Americas Saturday, September 18 Event Location: Pier 39, San Francisco Time: 12-5 PM This festive event showcases music and dance performances commemorating the artistry of Mexico, Central and South America. Mariachis strumming their guitars serenade visitors as they stroll throughout PIER 39. Fun for children includes traditional Latin American craft making and face painting. For further information, call Pier 39 at (415) 705-5500. Tardeada Latina Silent Auction Fundraiser Saturday, September 18 Event Location: Duran Foundation 1035 Carleton Street, Berkeley Time: 2 PM The Bay Area Institute for Advancement, Inc. will be holding its fourth annual Silent Auction fundraiser to benefit its bilingual (Spanish/English) childcare centers (Centro VIDA & Bahia School Age Program). BAHIA's mission is to provide quality bilingual and multi-cultural early childhood education, particularly serving low-income Latino families of Northern Alameda County, enabling them to improve the quality of life for their children. Sponsorship opportunities are available. Featuring DJ José Ruiz; Catering provided by Cancun Sabor Mexicano of Berkeley and Tlaloc Sabor Mexicano of San Francisco. Tickets: $50/person, $75/couple. For more info: (510) 525-1463, CentroVIDA1975@aol.com A Talk by Dr. Solimar Sunday, September 19 Event Location: 34007 Alvarado Niles Rd., Union City Time: 3 PM Vernice Solimar, PhD is here to talk about the values of the indigenous people ion the Amazonian rainforest and how these values can be helpful today in our own lives. During her three-week stay in Quito and the Ecuadorian Rainforest, Vernice directly experienced the spiritual depth, beauty and living presence of Pachamama, the sacred Earth Mother. She is here to share the experience of her journey through slides and discussion of indigenous wisdom. 2nd Annual Dunsmuir Mariachi Festival Sunday, September 19 Event Location: 2960 Peralta Oaks Court, Oakland (off of Highway 580) Time: 12-6 pm The historic Dunsmuir Estate in Oakland is once again planning to bring a touch of Mexico to Oakland. This year's event will feature Mariachi bands and Ballet Folklorico dancers. There will be cultural exhibits, crafts and entertainment for all ages. Tickets are $20; at the door, $25. For more information, call (510) 615-5555 x7 http://www.dunsmuir.org. Festival Cine Latino (Latino Theater) Friday- Sunday, September 17-19 Event Location: Presentation Theater, 2350 Turk Street, University of California San Francisco See films directed by US and Latin American filmmakers and cutting-edge videographers at the 12th Annual Festival ÁCine Latino! hosted by Cine Accién. Over 40 films will be shown. Tickets are $8, general admission and $5 for Cine Accién members, students and seniors. For more information, call (415) 553-8135, email@example.com, http://www.cineaccion.com. A Speech by Author, Activist Elizabeth Martinez Thursday, September 23 Event Location: Toland Hall, UC Hall, 533 Parnassus Ave., University of California San Francisco Time: 12-1:00 PM Elizabeth "Betita" Martinez will be the keynote speaker for the Hispanic Heritage Month celebration. The topic of her talk is "Multicultural Alliances in Building the Road to Educational Justice." She is best known for her bilingual volume 500 Years of Chicano History in Pictures, which became the basis for a video that she co-directed. Her latest book is De Colores Means All of Us: Latina Views for a Multi-Colored Century. All are invited to the event, which is sponsored by the Latin American Campus Association. Light refreshments will be available. Benefit Fashion Show - Tarde Internacional del Rebozo Friday, September 24 Event Location: The Mexican Heritage Plaza at 1700 Alum Rock Ave, San Jose Time: 5:30-8:30 PM The rebozo from Santa Maria del Rio, San Luis Potosi, Mexico, is a pre-Hispanic accessory that is now considered a piece of art because they are still weaved by hand! This event will feature "rebozo" dances by the Ballet Folklorico Mexicano de Carlos Moreno, highlighting the traditions from various states in Mexico, modeling, arts and crafts, displays, music by Juanita Ulloa and more. Tickets are $20 in advance, $25 at the door. For more information, check out http://www.palaciosproductions.com. San Jose Tamale Festival Saturday, October 23 Event Location: Emma Prusch Park, San Jose (at Story & King Rd.) Time: 10 AM-4:30 PM Hot Tamales, anyone? This year's San Jose Tamale Festival was designed with a few things in mind. To promote, preserve, celebrate and share the Mexican Heritage and Culture with the communities of the Bay Area with a healthy family environment of goodwill, music, dance, and of course, tamales! Free admission. For more information, contact Roger Hooks at (408) 617-2520, firstname.lastname@example.org, www.sanjosetamalefestival.com. Casa de los Esp’ritus: The Paul Sherrill Days of the Dead Collection September 8- November 27 Event Location: The Mexican Museum of San Francisco; Marina Boulevard and Buchanan St. Fort Mason, Building D, San Francisco Time: Wednesday to Saturday 11 AM to 5 PM (Gallery hours) The Paul Sherrill Days of the Dead Collection is drawn from the prized art collection of the late San Francisco architect, Paul Sherrill, who over his lifetime gathered more than 600 pieces of popular Mexican art. For more information, call (415) 202-9700 or visit www.mexicanmuseum.org. Museum admission: Members and children 12 and under free, adults $3.00, students and seniors $2.00, group tours $25.00, first Wednesday of every month free!
<urn:uuid:5471da57-75d5-40c0-8930-662908282af9>
CC-MAIN-2013-20
http://tricityvoice.com/articledisplay.php?a=2887
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.896384
1,807
3.140625
3
MAKE A WORD SEARCH JR Make your own Word Search Puzzle - Junior! Children can practice spelling by making a Word Search Puzzle to search for words! At home or in school children can make their own puzzles and have fun learning to spell. You can print or play the word search online. Recommended for Grades: K,1
<urn:uuid:1cdc1d86-677f-48b0-a719-57d712ef6047>
CC-MAIN-2013-20
http://www.abcya.com/word_search_jr.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.858743
70
2.828125
3
The first issue of the English florins, so called because the letters D.G. (“by God's grace”) were omitted for want of room. It happened that Richard Lalor Sheil, the master of the Mint, was a Catholic, and a scandal was raised that the omission was made on religious grounds. The florins were called in and re-cast. (See Mr. Sheil was appointed by the Whig ministry Master of the Mint in 1846; he issued the florin in 1849; was removed in 1850, and died at Florence in 1851, aged nearly 57. Source: Dictionary of Phrase and Fable, E. Cobham Brewer, 1894 More on Graceless Florin from Infoplease: - Graceless Florin - Graceless Florin The first issue of the English florins, so called because the letters D.G. ... - Godless Florin - Godless Florin (The). Also called “The Graceless Florin.” In 1849 were issued florins ... - Florin - Florin An English coin representing 2s., or the tenth of a sovereign, issued in 1849. Camden ... - Defender of the Faith - Defender of the Faith A title given by Pope Leo X. to Henry VIII. of England, in 1521, for a Latin ... - Dei Gratia - Dei Gratia By God's grace. Introduced into English charters in 1106; as much as to say, ... 24 X 7 ||24 x 7 Tutor Availability ||Unlimited Online Tutoring
<urn:uuid:aeef6cf6-5b32-4b04-89ad-953d90e5f87c>
CC-MAIN-2013-20
http://www.infoplease.com/dictionary/brewers/graceless-florin.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952807
352
2.90625
3
Russian family of architects and urban planners. The brothers Leonid Aleksandrovich Vesnin (b Nizhny Novgorod, 10 Dec 1880; d Moscow, 8 Oct 1933), Viktor Aleksandrovich Vesnin (b Yur’evets, 9 April 1882; d Moscow, 17 Sept 1950) and Aleksandr Aleksandrovich Vesnin (b Yur’evets, 16 May 1883; d Moscow, 7 Nov 1959) worked independently on occasion but are best known for their collaborative projects. After the Revolution of 1917 they had a central role in formulating and developing Constructivism, which became the dominant form of architectural Modernism in the USSR in the 1920s. Aleksandr Vesnin, the most active and innovative of the brothers, also had a significant early career as a painter and theatre designer. © 2009 Oxford University Press
<urn:uuid:90c93163-531a-47e7-a208-ef3512f380a5>
CC-MAIN-2013-20
http://www.moma.org/collection/artist.php?artist_id=6139
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93951
183
2.609375
3
Limehouse was the site of a short-lived porcelain factory founded by George Wilson in 1746. It was one of many attempts to make a British version of the beautiful, white ceramic that was flooding into London from the Far East. Limehouse porcelain looked Chinese but was made in East London. You can see examples of this porcelain at the Museum of London. One hundred years later, a small community of Chinese sailors settled at Limehouse Causeway. This was one of two small, East End Chinese communities. The other was in Pennyfields in Poplar, where Chinese sailors from Shanghai had settled. Virtually all were single men, some of whom married British women. By 1914, there were around 30 businesses and 300 people living in these small East End communities. Limehouse and Pennyfields became known as Chinatown, and many of its inhabitants made a living by running laundries. During the Second World War, the Docklands area, including Chinatown, was badly damaged and many Chinese people moved out. In the 1950s, the market for Chinese food grew and restaurants and stalls began to spring up in Gerrard Street and Lisle Street. This was the start of the Chinatown we know today in Soho.
<urn:uuid:d032009c-0709-4c28-836b-cac0a7ae257d>
CC-MAIN-2013-20
http://blog.visitlondon.com/tag/docklands/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.988914
251
3.25
3
Noryl is a blend of polyphenylene oxide (PPO) and polystyrene (PS) which was developed in 1966 by General Electric Plastics (now owned by SABIC). It is a rare example of a homogeneous mixture of two polymers. Most polymers are incompatible with one another, so tend to produce separate phases when mixed. The compatibility of the two polymers in Noryl is caused by the presence of a benzene ring in the repeat units of both chains. The addition of polystyrene to PPO increases the glass transition temperature above 100 °C, owing to the high Tg of PPO, so Noryl is stable in boiling water. The precise value of the transition depends on the exact composition of the grade being used. There is a smooth linear relation between weight content of polystyrene and the Tg of the blend. Noryl has good electrical resistance, so is widely used for switch boxes. However, product design is important in maximising the strength of the product, especially in eliminating sharp corners and other stress concentrations. Injection molding must ensure that moldings are stress-free. Like most other amorphous thermoplastics, Noryl is sensitive to environmental stress cracking when in contact with many organic liquids. Compounds such as gasoline, kerosine and methylene chloride may initiate brittle cracks which lead to product failure. Noryl has numerous applications in electronics, electrical equipment, coating, machinery, etc. Noryl has possible applications in the production of hydrogen, where it could serve as cost-effective electrodes in an electrolyzer, replacing expensive rare elements. It is highly resistant against the alkaline potassium hydroxide. For conductivity the plastic is sprayed with a nickel-based catalyst.
<urn:uuid:2eb594fe-cf60-48af-aa7f-914a3c671734>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Noryl
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934606
365
2.671875
3
(En español: Conjuntivitis) Is your eye looking pink and not so pretty? Sounds like conjunctivitis, sometimes called pinkeye. This can happen when the conjunctiva, the covering of your eye and inside your eyelids, gets infected. Your eye may feel itchy and like you have a grain of sand caught in it. Your eye may be teary or gunky, especially when you wake up in the morning. Although sometimes pinkeye will get better on its own, some kids will need special eye drops to make their conjunctivitis disappear.
<urn:uuid:a3cc97f2-7e0a-45de-8f34-a34f393832b7>
CC-MAIN-2013-20
http://kidshealth.org/PageManager.jsp?dn=American_Academy_of_Family_Physicians&lic=44&cat_id=20193&article_set=30511&ps=309
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.908245
122
2.640625
3
Alabama 4th District The Appalachian Mountains’ corduroy ridges, dividing the Atlantic coast from the interior, are America’s coal-and-steel industrial spine, from the black coal country of western Pennsylvania to the red hill country of northern Alabama. Here rose America’s two premier steel cities, Pittsburgh and Birmingham. Around both, and for many miles in between them, is countryside settled by feisty Scots-Irish farmers in the years between the Revolution and the Civil War. In valley land accessible to railroads, great steel factories were built in the 80 years after the Civil War, along with smaller factories that produced underwear and tires, glass and chemicals, socks and butchered chickens. Northern Alabama was solidly Democratic through the 1950s. It was populist on economics, conservative on cultural issues. Since then, the region has moved toward the Republicans, even though it has benefited from massive federal public works programs. The movement is most pronounced in counties close to Birmingham and along the interstates. 2008 Presidential Vote |Cook Partisan Voting Index| Alabama’s 4th Congressional District is a collection of small towns—Cullman, Jasper, Russellville, Fort Payne, and Albertville. The last is the home of a military helicopter plant and other aerospace facilities. Gritty Gadsden (pop. 37,000) is the biggest city, with a large Goodyear tire plant built in 1929. Sandwiched between Huntsville to the north and Birmingham to the south, the 4th District crosses the state and the Appalachian ridges, from the Georgia state line to the Mississippi state line. Decades of coal mining scarred 150 square miles of landscape, about one-fourth of which has been reclaimed. This is Alabama’s premier Scots-Irish district, with the lowest African-American percentage of the state’s seven congressional districts. Though family income is low and poverty above national averages, high marriage rates give some social stability. There are few vestiges of its Democratic heritage. George Bush won here with 71% in 2004. John McCain won many of these counties with over 70% of the vote in 2008.
<urn:uuid:7265a855-b2af-4a53-956c-4baa26aa2bca>
CC-MAIN-2013-20
http://www.nationaljournal.com/almanac/2010/area/al/04
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952097
440
2.765625
3
CRUISER DUELS: The Konigsberg vs. the Pegasus at Zanzibar The German ships The Konigsberg was a small light cruiser with ten 105mm guns. These German light cruisers were reliable, beautiful, durable ships, but were often pitted against newer or much larger British adversaries resulting in a high rate of loss. This battle was one of the few occasions where a German raider met an opponent of the same type. The British ships The Pegasus was also a small light cruiser with 8 4-inch guns. It was slightly older than the German ship and the German 105mm (4.1-inch) guns were much superior weapons. Two other British cruisers had been hunting the Konigsberg also, but the Pegasus, stopped at Zanzibar for repairs, faced its enemy alone. Return to main list
<urn:uuid:c29405b8-7254-4b85-9995-f1eccf6006d7>
CC-MAIN-2013-20
http://www.reocities.com/Athens/Agora/8088/ZanzibarII.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.980435
185
2.609375
3
From her earliest days in school, Professor Jana Gevertz was beguiled by the elegance and precision of mathematics. But as her experience widened, she also felt the pull of biology, a powerful, hands-on force in such critical arenas as health care. “I love how beautifully logical math is. You can’t fight me on a proof—there is no bias. What I love about biology is its ability to impact people and how we live our lives. I’m drawn to the human side of it. So for me, the ideal career was one that used math to benefit people in society,” she says. Her eureka moment came during her junior year in college when she took the course Differential Equations in Biology and realized “it was possible to work at the interface of math and biology.” That discovery led her to cancer research, where mathematicians, scientists, and clinicians are teaming up to exploit reams of new data—much of it at the molecular level—to advance their understanding of cancer growth and treatments. As a specialist in mathematical biology, Gevertz, an assistant professor in the Department of Mathematics and Statistics, devises equations that model the progression of tumors. To date, she has focused on glioblastoma, a complex and deadly form of brain cancer that poses a persistent challenge to researchers and clinicians. “When I first started researching in this field, mean survival times for these patients hadn’t budged since the 1970s, while we were seeing progress in treating other types of cancer. While tons of data was being gathered about glioblastoma, it was not translating into improved clinical outcomes for patients. So much remained unknown about the disease,” she recalled. Gevertz was particularly drawn to an aspect of cancer progression that was not well understood: the interactions between different elements of a tumor and the surrounding healthy tissue that lead to its growth. These elements include the genes expressed by cancer cells, the blood vessels that give the tumor oxygen, the stiffness of the surrounding healthy tissue, and immune system responses. While the tumor is initially constrained by its host, it works to reshape its environment so it can thrive. “Imagine a scenario in which a 17th-century scientist finds a modern-day computer,” she says, by way of analogy. “The scientist would try to understand how each component functions, for instance, the hard drive, the keyboard, and the mouse. But even if the function of each computer part were understood, the scientist would still need to figure out how these pieces interact to produce a functioning computer. Increasingly, researchers are focusing on pinpointing the interactions that allow the tumor to overcome normal physiological defenses against cancer and coming up with ways to inhibit them. A primary goal of the field, she adds, is to develop a “virtual patient,” a computer program that would use specific information about a patient’s disease to predict how their particular tumor will grow and what treatments would be most effective. “Patients with the same cancer respond differently. Some people are cured by chemotherapy and some are not. Some people have awful side effects, others none,” she says. “The holy grail of the field is to obtain clinical data on a singular patient, plug that into a computer, and be able to figure out how best to treat them. This is what we call individualized medicine, and it’s something many are striving for in all branches of medicine.” The relationship between math and biology is not new, Gevertz notes. She points to Gregor Mendel, often called the father of modern genetics, who used comparatively simple math in the 1800s to derive his laws of inheritance, based on breeding experiments with pea plants. The partnership between the two fields has become increasingly productive, however, in recent years. “There was a boom in the 1990s when biologists began gathering large amounts of data, necessitating a more quantitative approach to their field,” she says. Advances in molecular biology had given them the ability to identify the genes that are mutated in cancer, as well as the proteins expressed by these genes that abet tumor growth. A leap in computing power gave mathematicians powerful new analytical tools. On occasion, math-based insights can seem counterintuitive, she notes, citing an example in which her equations suggested that using an on/off schedule for a drug that prevents a tumor from growing its own blood supply was more effective than deploying the drug continuously, at full-blast. “Sometimes you learn things you weren’t expecting to be true,” she says.Supported in graduate school by National Science Foundation and Burroughs Wellcome Fund fellowships, Gevertz comes to her career with a research-intensive background. But she discovered earlier in her academic life that teaching was also important to her. “What sold me on a career in math ultimately was tutoring. I love working with people and communicating complicated ideas in a way that others can understand,” said Gevertz, who teaches courses ranging from introductory calculus to upper-level classes in applied mathematics. She is thrilled to see her students embracing interdisciplinary subjects as well. “Students’ interest in fields like mathematical biology is growing rapidly,” she says, noting that her course on the topic next semester is already over-enrolled. This summer, she will be working on research projects with two of her TCNJ students: a physics major who will study the way cancer cells invade normal tissue and a math major looking at the interactions between a tumor and the body’s immune system. Both students will be using mathematical and computational techniques to tackle challenging biological problems. “I’m excited that these students are engaging in real scientific research, and also pleased that they will come away from it appreciating that there are multiple ways to approach these important problems,” she says.
<urn:uuid:799717ba-4c43-440e-ab6e-b6869c7e8c8b>
CC-MAIN-2013-20
http://www.tcnjmagazine.com/?p=5366&archive=Spring%202007
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.971329
1,238
2.984375
3
If you are what you eat, you might be having an identity crisis. A new study on food fraud was released Wednesday morning by U.S. Pharmacopeial Convention (USP), a scientific nonprofit organization that helps set standards for the "quality, safety and benefit" of foods and medicines. The group runs a searchable online database of food fraud reports at foodfraud.org and nearly 800 new records were added as part of the study - a 60% increase from last year. Food fraud, as defined by the U.S. Food and Drug Administration (FDA), is the adulteration, dilution or mislabeling of goods. USP further defines food fraud in the study as "the fraudulent addition of nonauthentic substances or removal or replacement of authentic substances without the purchaser's knowledge for economic gain to the seller." The new records show that the most commonly fraudulent products are olive oil, milk, saffron, honey and coffee. Tea, fish, clouding agents (used in fruit juices, like lemon, to make products look freshly squeezed), maple syrup and spices (turmeric, black pepper and chili pepper) were also top imposters. Most of the reported food fraud was committed by producers adding fillers (i.e. other plant leaves to tea leaves), mixing in less expensive spices with high value spices or watering down liquids. Olive oils were often replaced and/or diluted with cheaper vegetable oils. Clouding agents were found in 877 food products from 315 different companies. Another popular target: Pomegranate juice, often made with grape skins and grape and pear juices. Tips to combat food fraud Editor’s note: For more on fake ingredients in food, don’t miss “Sanjay Gupta, MD” on Saturday at 4:30 p.m. ET and Sunday at 7:30 a.m. ET. From around the web
<urn:uuid:90d6031b-c1d7-46b0-9a68-3865ac292050>
CC-MAIN-2013-20
http://eatocracy.cnn.com/2013/01/23/faux-pas-food-fraud-on-the-rise/?hpt=ea_r4
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.9574
396
2.609375
3
The name ἀντίχριστος signifies an opposer of Christ. It is used only by John in his first and second epistles, though those opposed to Christ are referred to by others under different names. It is important to distinguish between an antichrist and the antichrist. John says, "as ye have heard that antichrist shall come, even now are there many antichrists;" whereas "he is the antichrist that denieth the Father and the Son." 1 John 2:18, 22. He is the consummation of the many antichrists. To deny Jesus Christ come in the flesh is the spirit or power of the antichrist, but it eventuates in a departure from the special revelation of Christianity: 'they went out from us.' 1 John 2:19; 1 John 4:3; 2 John 7. Now this clears the ground at once of much that has obscured the subject. For instance, many have concluded that Popery is the antichrist, and have searched no farther into the question, whereas the above passage refutes this conclusion, for Popery does not deny the Father and the Son; and, in Revelation 17, 18, Popery is pointed out as quite distinct from 'the false prophet,' which is another name for the antichrist. It is fully granted that Popery is anti-christian, and a Christ-dishonouring and soul-deceiving system; but where God has made a distinction we must also do so. Besides Popery there were and there are many antichrists, which, whatever their pretensions, are the enemies of Christ, opposers of the truth, and deceivers of man. As to the Antichrist, it should be noticed that John makes another distinction between this one and the many. He speaks of the many as being already there, whereas the one was to come; and if we turn to 2 Thess. 2:3-12 we read of something or some one that hinders that wicked or lawless one being revealed, although the mystery of iniquity was already at work. Now there has been no change of dispensation since this epistle was written, and John wrote much later, from which we learn that the revelation of the antichrist is still future, though doubtless the mystery of iniquity is getting ripe for his appearing; that which hindered and still hinders the manifestation of the antichrist is doubtless the presence of the Holy Spirit on earth. He will leave the earth at the rapture of the saints. This passage in Thessalonians gives us further particulars as to this MAN OF SIN. His coming is after the working of Satan, that is, he will be a confederate of Satan, and be able to work signs and lying wonders with all deceit of unrighteousness in them that perish. Those that have refused the truth will then receive the lie of this wicked one. We get further particulars in Rev. 13:11-18, where the anti-christian power or kingdom is described as a beast rising out of the earth, having two horns as a lamb, but speaking as a dragon. Here again we read that he will do great wonders, making fire come down from heaven, with other signs or miracles. In the description in Thessalonians he opposeth himself against all that is called God or that is worshipped, and sits down in the temple of God, and sets forth himself as God. The Jews will receive him as their Messiah, as we read in John 5:43. In the above passage in the Revelation this counterfeit of Christ's kingdom is openly idolatrous. He directs the dwellers on the earth to make an image of the beast (named in ver. 1, the future head of the resuscitated Roman empire) to which image he gives breath, that it should speak, and persecutes those who will not worship the image. He also causes all to receive a mark on their hand or their forehead that they may be known to be his followers; and that none else should be able to buy or sell. We thus see that in the Revelation the anti-christian power called also 'the false prophet' will work with the political head, and with Satan — a trinity of evil — not only in deceiving mankind, but also, in Rev. 16:13-16, gathering together by their influence the kings of the earth to the battle of that great day of God Almighty. The three are cast into the lake of fire Rev. 19:20; Rev. 20:10. In the O.T. we get still another character of this wicked one. In Dan. 11:36-39 he is called 'king.' Here he exalts himself and speaks marvellous things against the God of gods. He will not regard the God of his fathers (pointing out that he will be a descendant of Israel, probably from the tribe of Dan, cf. Gen. 49:17), nor "the desire of women" (i.e. the Messiah, of whom every Jewess hoped to be the mother): he exalts himself above all. Here again he is an idolater, honouring a god that his fathers knew not. In Zech. 11:15-17 he is referred to as the foolish and idol shepherd, who cares not for the flock, in opposition to the Lord Jesus the good Shepherd. This man of sin will 'do according to his own will' — just what the natural man ever seeks to do. In contrast to this the blessed Lord was obedient, and came not to do His own will. May His saints be ever on the watch against the many false prophets in the world, 1 John 4:1, and be loyal to their absent Lord, behold His beauty in the sanctuary, and reproduce Him more down here in their earthen vessels. Strictly, those opposed to the inculcation of good works from a perverted view of the doctrines of grace; but the term is also falsely applied to those who know themselves free through the death of Christ from the law as given by Moses. Rom. 7:4; Gal. 2:19. One has but to read carefully the epistle to the Galatians to see that for Gentile believers to place themselves under the law is to fall from grace; and Paul exhorted them to be as he was, for he was (though a Jew by birth) as free from the law by the death of Christ as they were as Gentiles. They had not injured him at all by saying he was not a strict Jew, Gal. 4:12: in other words, they may have called him an antinomian, as others have been called, whose walk has been the most consistent. To go back to the law supposes that man has power to keep it. For a godly walk the Christian must walk in the Spirit, and grace teaches that, "denying ungodliness and worldly lusts, we should live soberly, righteously, and godly in this present world." Titus 2:12. On the other hand, there have been, and doubtless are, some who deny good works as a necessary fruit of grace in the heart: grace, as well as everything else, has been abused by man. See LAW. Antioch in Pisidia. [An'tioch in Pisid'ia] A Roman colony of Phrygia in Asia Minor, founded by Seleueus Nicator. Its ruins are now called Yalobatch or Yalowaj. Paul's labour here was so successful that it roused the opposition of the Jews and he was driven to Iconium and Lystra; but he returned with Silas. Acts 13:14; Acts 14:19-21; 2 Tim. 3:11. Antioch in Syria. [An'tioch in Syria] This is memorable in the annals of the church as the city where the disciples were first called Christians, Where an assembly of Gentiles was gathered, and from which Paul and his companions went forth on their missionary journeys, and to which they twice returned. It formed a centre for their labours among the Gentiles, outside the Jewish influence which prevailed at Jerusalem; yet the church in this city maintained its fellowship with the assembly at Jerusalem and elsewhere. Acts 6:5; Acts 11:19-30; Acts 13:1; Acts 14:26; Acts 15:22-35; Acts 18:22; Gal. 2:11. Antioch was once a flourishing and populous city, the capital of Northern Syria, founded by Seleueus Nicator, B.C. 300, in honour of his father Antiochus. It was afterwards adorned by Roman emperors, and was esteemed the third city was eventually the seat of the Roman proconsul of Syria. It stood on a beautiful spot on the river Orontes, where it breaks through between the mountains Taurus and Lebanon. It is now called Antakia 36 12', 36 10' E. It has suffered from wars and earthquakes, and is now a miserable place. Comparatively few antiquities of the ancient city are to be found, but parts of its wall appear on the crags of Mount Silpius. There were several kings bearing this name who ruled over Syria, and though they are not mentioned by name in scripture, some of their actions are specified. These are so clear and definite that sceptics have foolishly said that at least this part of the prophecy of Daniel must have been written after the events! The Greek kingdom, the third of the four great empires, was, on the death of Alexander the Great, divided among his four generals, and this resulted principally in a series of kings who ruled in Egypt bearing the general name of PTOLEMY, and are called in scripture 'Kings of the South;' and another series, called 'Kings of the North,' who bore the general name of either SELEUCUS or ANTIOCHUS. Both the Ptolemies and the Seleucidae began eras of their own, and some of the kings of each era had to do with Palestine and the Jews. The following is a list of the kings, with the dates when they began to reign, noticing the principal events that were prophesied of them in Daniel 11. 320 Ptolemy I, Soter. He takes Jerusalem. Era of the Ptolemies begins. 312 SELEUCUS I, Nicator. He re-takes Palestine. Era of the Seleucidae begins. 283 Ptolemy II, Philadelphus. The O.T. translated into Greek. 280 ANTIOCHUS I, Soter. 261 ANTIOCHUS II, Theos. He was at war with Ptolemy, but peace was restored on condition that Antiochus should put away his wife Laodice and marry Berenice the daughter of Ptolemy. This was done, but on the death of Philadelphus he restored Laodice; but she, fearing another divorce, poisoned her husband, and then caused the death of Berenice and her son. See Dan. 11:6. 247 Ptolemy III, Euergetes. He revenged his sister's death, being 'a branch of her roots;' and carried off 40,000 talents of silver, etc. 'Shall enter into the fortress of the king of the north,' and carry away their precious vessels of silver and gold. Dan. 11:7-9. 246 SELEUCUS II, Callinicus. 226 SELEUCUS III, Ceraunus. 223 ANTIOCHUS III, the Great. 222 Ptolemy IV, Philopater. War between Ptolemy and Antiochus. Ptolemy recovers Palestine. Dan. 11:10-12. 205 Ptolemy V, Epiphanes (5 years old). Antiochus seized the opportunity of the minority of the king to regain the country. Dan. 11:16. He also joined with Philip of Macedonia to capture other portions of the dominions of Ptolemy. But Rome was now growing in power, and on being appealed to by Egypt for protection, Antiochus was told he must let Egypt alone. In the meantime an army from Egypt had re-taken Palestine; but Antiochus, on his return, again obtained the mastery there. Wishing to extend his dominions in the west he proposed that Ptolemy should marry his daughter Cleopatra, that she might serve her father's ends; but she was faithful to her husband. Daniel thus speaks of it: "He shall give him the daughter of women, corrupting her, but she shall not stand on his side, neither be for him." Dan. 11:17. Antiochus took many maritime towns, but after many encounters he was compelled by Rome to quit all Asia on that side of Mount Taurus, give up his elephants and ships of war and pay a heavy fine. Antiochus had great difficulty in raising the money, and on attempting to rob a temple at Elymais he was killed. Dan. 11:18, 19. 187 SELEUCUS IV, Philopator, succeeded. His principal work was the raising of money to pay the war-tax to Rome. He ordered Heliodorus to plunder the temple; but Heliodorus poisoned him. He was thus 'a raiser of taxes,' and was 'destroyed neither in anger, nor in battle.' Dan. 11:20. Heliodorus seized the crown but was destroyed by Antiochus IV. 181 Ptolemy VI, Philometor. He was a minor, under his mother and tutors. 175 ANTIOCHUS IV, Epiphanes. He was not the rightful heir. He 'obtained the kingdom by flatteries.' He called himself Epiphanes, which is 'illustrious;' but he was such 'a vile person' that people called him Epimanes, 'madman.' Dan. 11:21-24. He invaded Egypt and was at first successful: cf. Dan. 11:25, 26. The two kings entered into negotiations, though neither of them was sincere in what they agreed to: their hearts were to do mischief, and they 'tell lies at one table.' Dan. 11:27. Then Antiochus returned to his land with great riches: his heart was 'against the holy covenant,' and he entered Jerusalem and even into the sanctuary and took away the golden altar, the candlestick, the table of showbread, the censers of gold, and the other holy vessels and departed. 'At the appointed time he shall return and come toward the South,' Dan. 11:29; but he was stopped by Rome; 'ships of Chittim,' ships from Macedonia, came against him; and in great anger he returned and vented his wrath on Jerusalem. He sent an army there with orders to slay all the men and sell the women and children for slaves. This was to a certain extent carried out. The walls were also thrown down and the city pillaged and then set on fire. He then decreed that the Jews should forsake their religion, and all should worship the heathen gods. To ensure this at Jerusalem with the few that still clung to the place, an image of Jupiter Olympius was erected in the temple and on an altar sacrifices were offered to this god. This was in B.C. 168 on the 25th of the month Chisleu. Daniel relates "They shall pollute the sanctuary of strength, and shall take away the daily sacrifice, and they shall place the abomination that maketh desolate." Dan. 11:31: cf. also Dan. 8:9-12 where the 'little horn' refers to Antiochus Epiphanes. Bleek, Delitzsch, and others consider that in Dan. 8:14, the 2,300 'evening, morning,' margin, refer to the daily sacrifice, which is spoken of in Dan. 8:11, 12, 13; and that by 2,300 is meant 1,150 days: cf. also Dan. 8:26. The dedication of the temple was on the 25th of Chisleu, B.C. 165, and the desecration began some time in the year 168. Dan. 11:32b, 33-35 refer to the change that soon took place under Judas Maccabeus and his brothers, commencing B.C. 166, and in 165 the temple was re-dedicated. In B.C. 164 ANTIOCHUS V. Eupator succeeded to the throne; and in 162 DEMETRIUS SOTER; but they were not powerful against Judaea, and in B.C. 161 an alliance was made by Judaea with Rome. The historical notices in Daniel end at Dan. 11:35. It will be seen by the above that the records of history agree perfectly with the prophecy, as faith would expect them to do. It is only unbelief that has any difficulty in God foretelling future events. Without doubt some of the acts of Antiochus Epiphanes are types of the deeds of the future king of the North — referred to in other prophecies as 'the Assyrian' — in respect to the Jews and Jerusalem. 1. A Christian of Pergamos, who was martyred. Rev. 2:13. 2. Son of Herod the Great, but not called Antipas in the N.T. See HEROD. The town to which Paul was taken in the night from Jerusalem on his way to Caesarea. Acts 23:31. It was built by Herod the Great in a well-watered spot surrounded by a wood, and named after his father. At Ras el-Ain, 32 6' N, 34 56' E, are ruins which are held to mark the spot. This is 5 or 6 miles nearer Jerusalem than Kefr Saba, which some associate with Antipatris, because Josephus says it was called Kapharsaba before its name was altered by Herod. The former place being nearer to Jerusalem removes the difficulty that some have felt as to the distance of Antipatris being too far to reach in a night ; this reduces it to about 36 miles, and it would be even less by cross roads. The word antitype does not occur in the A.V., but the Greek word ἀντίτυπον occurs in Heb. 9:24, translated 'figures,' and in 1 Peter 3:21, translated 'like figure.' It is that which answers to a type, as a wax impression answers to a seal: if the device is sunk, the impression will be raised, or vice versa. To take a simple but beautiful example, a lamb was offered up for a burnt offering both morning and evening under the law; and in the N.T. we read, "Behold the Lamb of God, which taketh away the sin of the world." It is plain that the morning and evening lamb in Israel were types and the death of the Lord Jesus was the antitype. In Heb. 9:23, the 'heavenly things' are the type, and 'holy places,' Heb. 9:24, the antitype, or what corresponded to the pattern. In 1 Peter 3:21, eight souls were saved through water, of which baptism is the figure, or what answers to it. Doubtless there are many other antitypes in the N.T., but every antitype must have a type to which it corresponds, though the correspondence may not lie on its surface. Where scripture is silent as to types and antitypes the teaching of the Holy Spirit is needed, or grievous error may result in associating two things together which have no spiritual connection, though names and words may seem to correspond. A tower or fortress built by Herod the Great near the temple at Jerusalem in which he placed a guard to watch over the approaches to the sacred edifice. Josephus (Wars v. 5, 8) says it was situated "at the corner of two cloisters of the court of the temple; of that on the west, and that on the north; it was erected upon a rock fifty cubits in height and was on a great precipice." Where this precipice was is not known, for it is a much disputed question upon what part of the temple area the temple was built. There is a tower, now called Antonia, on the N.W. angle, and there are indications of a similar one having stood on the S.E. angle. A descendant of Benjamin. 1 Chr. 8:24. Son of Coz, of the posterity of Judah. 1 Chr. 4:8. The ape is not indigenous to Palestine; they were brought in the days of Solomon, with gold, silver, ivory and peacocks by the ships of Tarshish. The word goph may signify any of the monkey tribe. 1 Kings 10:22; 2 Chr. 9:21. A Christian of Rome saluted by Paul as 'approved in Christ.' Rom. 16:10. Apharsachites, Apharsathchites. [Aphar'sachites, Aphar'sathchites] Some unknown Assyrian tribe sent as colonists to Samaria under Asnapper. Ezra 4:9; Ezra 5:6; Ezra 6:6. An unknown Assyrian tribe as the preceding. Ezra 4:9. 1. Royal city of the Canaanites, the king of which was killed by Joshua, Joshua 12:18: probably the same as APHEKAH in Joshua 15:53. Not identified. 2. City in the north border of Asher, from which in the time of Joshua the inhabitants were not expelled. Joshua 13:4; Joshua 19:30: called APHIK in Judges 1:31. Identified with Afka at the foot of the Lebanon between Baalbek and Byblus. 3. Place where the Philistines encamped when Israel was defeated. 1 Sam. 4:1. 4. Where the Philistines encamped when Saul and Jonathan were killed. 1 Sam. 29:1. Perhaps the same as No. 3. 5. City, the wall of which falling killed 27,000 of the Syrians, 1 Kings 20:26, 30; 2 Kings 13:17. It is identified with Fik, 32 47' N, 35 41' E, on the great road between Damascus and Jerusalem. A 'mighty man of power,' an ancestor of Saul. 1 Sam. 9:1. The margin of Micah 1:10 explains the name as 'house of dust,' so that there is a play upon the word 'dust:' 'in the house of dust roll thyself in the dust.' The LXX read 'the house in derision.' It may refer to OPHRAH in Joshua 18:23; 1 Sam. 13:17, a city in the tribe of Benjamin. Head of the eighteenth course of priests for service in the temple. 1 Chr. 24:15. Another name for the REVELATION, q.v., being its Greek title ἀποκάλυψις. The name given to those Books which were attached to the MSS copies of the LXX, but which do not form a part of the canon of scripture. The term itself signifies, 'hidden,' 'secret,' 'occult;' and, as to any pretence of being a part of scripture, they must be described as 'spurious.' There are such writings connected with both the Old and the New Testament, but generally speaking the term 'Apocrypha' refers to the O.T. (for those connected with the N. Test. see APOSTOLIC FATHERS. The O.T. books are: 1 I. Esdras. 2 II. Esdras. 5 Chapters of Esther, not found in the Hebrew nor Chaldee. 6 Wisdom of Solomon. 7 Jesus, son of Sirach; or Ecclesiasticus; quoted Ecclus. 8 Baruch, including the Epistle of Jeremiah. 9 Song of the Three Holy Children 10 The History of Susanna. 11 Bel and the Dragon. 12 Prayer of Manasseh. 13 I. Maccabees. 14 II. Maccabees. The Council of Trent in A.D. 1546, professing to be guided by the Holy Spirit, declared the Apocrypha to be a part of the Holy Scripture. The above fourteen books formed part of the English Authorised Version of 1611, but are now seldom attached to the canonical books. Besides the above there are a few others, as the III., IV., and V. Maccabees, book of Enoch, etc., not regarded by any one as a part of scripture. It may be noticed 1. That the canonical books of the O.T. were written in Hebrew (except parts of Ezra and Daniel which were in Chaldee); whereas the Apocrypha has reached us only in Greek or Latin, though Jerome says some of it had been seen in Hebrew. 2. Though the Apocrypha is supposed to have been written not later than B.C. 30, the Lord never in any way alludes to any part of it; nor do any of the writers of the N.T., though both the Lord and the apostles constantly quote the canonical books. 3. The Jews did not receive the Apocrypha as any part of scripture, and to 'them were committed the oracles of God.' 4. As some of the spurious books were added to the LXX Version (the O.T. in the Greek) and to the Latin translation of the LXX, some of the early Christian writers were in doubt as to whether they should be received or not, and this uncertainty existed more or less until the before mentioned Council of Trent decided that the greater part of the Apocrypha was to be regarded as canonical. Happily at that time the Reformation had opened the eyes of many Christians to the extreme corruption of the church of Rome, and in rejecting the claims of that church they were also freed from its judgement as to the Apocryphal books. 5. The internal evidences of the human authorship of the Apocrypha ought to convince any Christian that it can form no part of holy scripture. Expressions of the writers themselves show that they had no thought of their books being taken for scripture. There are also contradictions in them such as are common to human productions. Evil doctrines also are found therein: let one suffice: "Alms doth deliver from death, and shall purge away all sin." Tobit 12:9. The value of holy scripture as the fountain of truth is such that anything that might in any way contaminate that spring should be refused with decision and scorn. Some parts of the Apocryphal books may be true as history, but in every other respect they should be refused as spurious. Nor can it be granted that we need the judgement of the church, could a universal judgement be arrived at, as to what is to be regarded as the canon of scripture. The Bible carries its own credentials to the hearts and consciences of the saints who are willing to let its power be felt. City of Macedonia, in the district of Mygdonia, some 28 miles from Amphipolis and 35 from Thessalonica, through which Paul and Silas passed. Acts 17:1. A convert from Alexandria, an eloquent man and mighty in the scriptures, who, when only knowing the baptism of John, taught diligently the things of Jesus. At Ephesus he was taught more perfectly by Priscilla and Aquila. He laboured at Corinth, following the apostle Paul, who could hence say 'I have planted, Apollos watered,' and subsequently he greatly desired Apollos to revisit Corinth. His name is associated with that of Paul in connection with the party spirit at Corinth, which the apostle strongly rebuked; but from his saying he had 'transferred these things to himself and to Apollos,' it would appear that the Corinthians had local leaders, under whom they ranged themselves, whom he does not name; and that he taught them the needed lesson, and established the general principle by the use of his own name and that of Apollos rather than the names of their leaders. Acts 18:24; Acts 19:1; 1 Cor. 1:12; 1 Cor. 3:4-22; 1 Cor. 4:6; 1 Cor. 16:12; Titus 3:13. The Greek translation of the Hebrew name ABADDON, which signifies 'destroyer.' He is king of the locusts of the bottomless pit, and ruler over the destroying agents that proceed from thence: it is one of the characters of Satan. Rev. 9:11. Though the word 'apostasy' does not occur in the A.V., the Greek word occurs from which the English word is derived. In Acts 21:21 Paul was told that he was accused of teaching the Jews who were among the Gentiles to apostatise from Moses. Paul taught freedom from the law by the death of the Christ and this would appear to a strict Jew as apostasy. The same word is used in 2 Thess. 2:3, where it is taught that the day of the Lord could not come until there came 'the apostasy,' or the falling from Christianity in connection with the manifestation of the man of sin. See ANTICHRIST. Though the general apostasy there spoken of cannot come till after the saints are taken to heaven, yet there may be, as there has been, individual falling away. See, for instance, Heb. 3:12; Heb. 10:26, 28, and the epistle of Jude. There are solemn warnings also that show that such apostasy will be more and more general as the close of the present dispensation approaches. 1 Tim. 4:1-3. Now a falling away necessarily implies a position which can be fallen from, a profession has been made which has been deliberately given up. This is, as scripture says, like the dog returning to his vomit, and the sow to her wallowing in the mire. It is not a Christian falling into some sin, from which grace can recover him; but a definite relinquishing of Christianity. Scripture holds out no hope in a case of deliberate apostasy, though nothing is too hard for the Lord. The Greek word ἀπόστολος signifies 'a messenger,' 'one sent,' and is used in this sense for any messenger in 2 Cor. 8:23; Phil. 2:25; and as 'one sent' in John 13:16. It is also used in a much higher and more emphatic sense, implying a divine commission in the one sent, first of the Lord Himself and then of the twelve disciples whom He chose to be with Him during the time of His ministry here. The Lord in His prayer in John 17:18 said, "As thou hast sent me into the world, even so have I also sent them into the world." He was the Sent One, and in Heb. 3:1 it is written "Wherefore, holy brethren, partakers of the heavenly calling, consider the Apostle and High Priest of our profession, Jesus."* They were to consider this One who had been faithful, and who was superior to Moses, to the Aaronic priests, and to angels, and was in the glory. The ordering of a dispensation depended on the apostolic office as divinely appointed. * The word 'Christ' is omitted by the Editors. APOSTLES, THE TWELVE. The Lord appointed these "that they should be with him, and that he might send them forth to preach, and to have power to heal sicknesses, and to cast out demons," and also to carry out the various commissions given by Christ on earth. It will be seen by the lists that follow that Lebbaeus, Thaddaeus and Judas are the same person; and that Simon the Canaanite (Cananaean) and Simon Zelotes are the same; Peter is also called Simon; and Matthew is called Levi. 1 Peter and 3 James and 5 Philip and 7 Thomas and 10 and Lebbaeus. 11 Simon the Cananean and 12 Judas Iscariot. 12 Judas I. 11 Simon Zelotes. 12 Judas I. 11 Simon Z. Peter is always named first; he with James and John was with the Lord on the mount of transfiguration and also with the Lord at other times, though no one apostle had authority over the others: they were all brethren and the Lord was their Master. Judas Iscariot is always named last. In Matthew the word 'and' divides the twelve into pairs, perhaps corresponding to their being sent out two and two to preach. Bartholomew and Simon Zelotes are not mentioned after their appointment except in Acts 1. When the Lord sent the twelve out to preach He bade them take nothing with them, for the workman was worthy of his food: and on their return they confessed that they had lacked nothing. Their mission was with authority as the sent ones of the Lord; sicknesses were healed and demons cast out; and if any city refused to receive them it should be more tolerable for Sodom and Gomorrha in the day of judgement than for that city. Matt. 10:5-15. They received a new mission from the Lord as risen: see Luke 24; John 20. And before the ascension the apostles were bidden to tarry at Jerusalem until they were endued with power from on high. This was bestowed at the descent of the Holy Spirit on the day of Pentecost. They are also viewed first among the gifts with which the church was endowed by the Head of the body when He ascended up on high. Eph. 4:8-11. These gifts were for "the perfecting of the saints, for the work of the ministry, for the edifying of the body of Christ." The mystery hitherto hid in God was now revealed to His holy apostles and prophets by the Spirit, namely, that the Gentiles should be joint heirs, and a joint body, and partakers of His promise in Christ Jesus. Eph. 3. Paul was the special vessel to make known this grace. His apostleship occupies a peculiar place, he having been called by the Lord from heaven, and being charged with the gospel of the glory. See PAUL. On the death of Judas Iscariot, Matthias, an early disciple, was chosen in his place, for there must be (irrespective of Paul, who, as we have seen, held a unique place) twelve apostles as witnesses of His resurrection, Acts 1:22; Rev. 21:14 as there must still be twelve tribes of Israel. James 1:1 ; Rev. 21:12. At the conference of the church in Jerusalem respecting the Gentiles 'the apostles' took a prominent part, with the elders. Acts 15. How many apostles remained at Jerusalem is not recorded: we do not read of 'the twelve' after Acts 6. Tradition gives the various places where they laboured, which may be found under each of their names. Scripture is silent on the subject, in order that the new order of things committed to Paul might become prominent, as the older things connected with Judaism vanished away: cf. 2 Peter 3:15, 16. There were no successors to the apostles: to be apostles they must have 'seen the Lord.' Acts 1:21, 22; 1 Cor. 9:1; Rev. 2:2. The foundation of the church was laid, and apostolic work being complete the apostles passed away, there remain however, in the goodness of God, such gifts as are needed "till we all come in the unity of the faith, and of the knowledge of the Son of God, unto a perfect man, unto the measure of the stature of the fulness of Christ." Eph. 4:12, 13. This designation is applied to the early Christian writers, who had known the apostles, or had known those who had been acquainted with them. 1. BARNABAS; 2. CLEMENT; 3. HERMAS; are supposed to be the persons so named in the N.T.: see under their respective names. 4. POLYCARP, Bishop of Smyrna. He wrote an epistle to the Philippians about A.D. 125, Irenaeus says Polycarp was "instructed by the apostles, and was brought into contact with many who had seen Christ." He died a martyr's death. An ancient letter gives a particular account of his martyrdom. 5. IGNATIUS, Bishop of Antioch. Seven epistles are supposed to have been written by him, but they have been grossly interpolated; eight or nine others are wholly spurious. He was a martyr. 6. PAPIAS, Bishop of Hierapolis in Phrygia. He is said to have heard the apostle John. Various writings are attributed to him, but of which only fragments remain. He also died a martyr. 7. An unknown author of an eloquent and interesting epistle to Diognetus. Nearly all the above writings are very different from the scripture except where that is quoted. There is a deep dark line of demarcation between them and the writings which are inspired. Some of them however are found at the end of some of the Greek Testaments and were formerly read in the churches. Happily all these are now eliminated from any association with the N.T. Besides the above there are six apocryphal 'Gospels,' a dozen 'Acts,' four 'Revelations,' the 'Passing away of Mary,' etc. This term is not used in scripture in the modern sense of a compounder of drugs for medicine; but in that of a compounder of ointments, etc., such as would now be called a 'perfumer,' as it is rendered in the margin of Ex. 30:25, where the holy anointing oil is an ointment compounded "after the art of the apothecary." The same was said of the holy incense. Ex. 30:35; Ex. 37:29. Asa was buried in a tomb filled with sweet odours and spices prepared by the apothecaries' art. 2 Chr. 16:14: cf. also Neh. 3:8. Spices were also carried to the tomb of the Lord to embalm His body. Son of Nadab, of the tribe of Judah. 1 Chr. 2:30, 31. It would appear from the arrangements made by Moses that some of the judges were accounted as judges of appeal, but that Moses himself, as having the mind of God, was the ultimate judge. Ex. 18:13-26. It is not probable, when the kingdom was established, that all causes were tried at Jerusalem; but only cases of appeal from the tribal judges; and it was such that Absalom alludes to in 2 Sam. 15:2, 3: see also Deut. 16:18. It is evident from Deut. 17:8-12 that the mind of God was to be sought where He put His name, if the matter was too hard for the judges. The Jewish writers say that before and after the time of Christ on earth, appeals could be carried through the various courts to the Grand Sanhedrim at Jerusalem. In the case of Paul appealing to Caesar, it was not an appeal from a judgement already given, as is the case in what is now called an appeal; but Paul, knowing the deadly enmity of the Jews, and the corruption of the governors, elected to be judged at the court of Caesar, which, as a Roman, he had the right to do. Acts 25:11. There is One who "cometh to judge the earth: with righteousness shall he judge the world, and the people with equity." Ps. 98:9. Appearing of Christ. This is to be distinguished from Christ coming for His saints, though intimately connected with it, for He will bring them with Him. "When Christ, who is our life, shall appear, then shall ye also appear with him in glory." Col. 3:4. Here it is the manifestation of Christ with His own, to be followed by the setting up of His kingdom and the apportionment of rewards to His saints. 2 Cor. 5:10. The Lord's servant is exhorted by His appearing and His kingdom to preach the word, etc. 2 Tim. 4:1, 2. The saints will be associated with Christ in His judgements at His appearing. Jude 14, 15. Christ will execute judgement on the Beast and the False Prophet and the western powers. Also on the Assyrian and the eastern powers that will oppress the Jews. The Jews and the ten tribes will be restored to their land in blessing, ushering in the Millennium. See ADVENT, SECOND. Probably the wife of Philemon, whom Paul addresses in that epistle, ver. 2. Appii Forum. [Ap'pii For'um] Station on the Appian Way, the main road from Rome to the Bay of Naples, where brethren went to meet Paul though 43 miles from Rome. Acts 28:15. The road was 18 to 22 feet wide, and parts of the ancient paving stones may still be seen. It was constructed by Appius Claudius, hence its name. Apple, Apple Tree. This is generally supposed to refer to the citron but apples grow in Palestine, and the Arabic name for the apple (tuffuh) differs little from the Hebrew word, tappuach. Others believe the quince is alluded to, which is fragrant and of a golden colour. Cant. 2:3, 5; Cant. 7:8; Cant. 8:5; Joel 1:12. In Prov. 25:11 "a word fitly spoken" is like some elegant device, as "apples of gold in pictures [or baskets] of silver." Apple of the Eye. 1. ishon. Gesenius says this word signifies 'little man' and then 'the little man of the eye; 'that is, "the pupil of the eye in which, as in a mirror, a person sees his own image reflected in miniature." He says "this pleasing image is found in several languages." It is the part of the eye specially to be guarded: God preserved His own as the apple of His eye. Deut. 32:10; Ps. 17:8. His law should be kept as a precious thing. Prov. 7:2. 2. babah, the black or pupil of the eye, or, as others, 'the gate of the eye.' To touch God's people is touching the apple of His eye. Zech. 2:8. 3. bath, daughter. The sense is, Let not the apple (the daughter) of thine eye cease to shed tears. Lam. 2:18. In all places 'the apple of the eye' is a beautifully figurative expression for that which must be tenderly cherished as a most choice treasure. The word chagorah signifies 'anything girded on.' When Adam and Eve had sinned they discovered that they were naked, and sewed fig-leaves together and made aprons, Gen. 3:7; but were soon conscious that this did not cover their nakedness, for when God called to them they owned that they were naked, and hid behind the trees. This teaches that nothing that man can devise can cover him from the eye of God. God clothed Adam and Eve with coats of skins; it was through death, typical of Christ Himself. In Acts 19:12 the word is σιμικίνθιον, and occurs but that once; it signifies a narrow apron or linen covering. A converted Jew of Pontus, husband of Priscilla, whom Paul first met at Corinth. Acts 18:2. He and Paul worked together as tent-makers. Aquila and Priscilla had been driven from Rome as Jews by an edict of the emperor Claudius. They travelled with Paul to Ephesus, where they were able to help Apollos spiritually. Acts 18:18-26. They were still at Ephesus when Paul wrote 1 Corinthians (1 Cor. 16:19); and were at Rome when the epistle to the saints there was written, in which Paul said they had laid down their necks for his life, and that to them all the churches, with Paul, gave thanks. Rom. 16:3, 4. In Paul's last epistle he still sends his greeting to them. 2 Tim. 4:19. A chief city in the Moabite territory. In Jerome's time it was called Areopolis. It is identified with Rabba,, 31 19' N, 35 38' E, about 10 miles from the Dead Sea. Num. 21:15, 28; Isa. 15:1. In other passages the name Ar appears to include the land of the Moabites. Deut. 2:9, 18, 29. Son of Jether, of the tribe of Asher. 1 Chr. 7:38. City in the hill country of Judah. Joshua 15:52. Identified with er-Rabiyeh, 31 26' N, 35 2' E. This occurs as a proper name only once in the A.V. where it should read 'the Arabah,' Joshua 18:18; but it occurs in many other passages where it is translated 'a plain' or 'the plain,' and is also translated 'desert,' 'wilderness,' etc. It refers to the plain situated between two series of hills that run from the slopes of Hermon in the north to the Gulf of Akaba in the far south. It is in this plain that the Jordan runs, and in which is the Sea of Galilee and the Dead Sea, also called 'the Sea of the Plain.' About 7 miles south of the Dead Sea the plain is crossed by some hills: all north of this is now called el-Ghor, but the plain south of it retains the name of the Wady-el-Arabah. This latter part is about 100 miles in length, and the northern part about 150, so that for nearly 250 miles this wonderful plain or valley extends. It might naturally be thought that the Jordan had at some time, after running into the Dead Sea, continued to run south until it poured itself into the Gulf of Akaba. But this is not probable, for the Dead Sea is nearly 1,300 feet below the sea, and the southern part is from end to end higher than the Ghor, The width of the Arabah is in some parts about 15 miles, but further south not more than 3 or 4. The southern end is also called the Wilderness of Zin, and it was in this part of the Arabah that a good deal of the wanderings of the people of Israel took place, before they turned to the east and left the plain on their left. There can be no doubt that scripture uses the name 'Arabah' for the whole of the plain, both north and south. The northern part is referred to in Deut. 3:17; Deut. 4:49; Joshua 3:16; Joshua 12:3; Joshua 18:18: and the southern part in Deut. 1:1; Deut. 2:8. In other passages, especially in the prophetic books, the plain in general may be alluded to. It extends nearly due north and south, but bears toward the west before it reaches the Gulf. A very large country is embraced by this name, lying south, south-east, and east of Palestine. It was of old, as it is now by the natives, divided into three districts. 1. Arabia Proper, being the same as the ancient Arabia Felix, embraces the peninsula which extends southward to the Arabian Sea and northward to the desert. 2. Western Arabia, the same as the ancient Arabia Petraea, embraces Sinai and the desert of Petra, extending from Egypt and the Red Sea to about Petra. 3. Northern Arabia, which joins Western Arabia and extends northward to the Euphrates. 1 Kings 10:15; 2 Chr. 9:14; Isa. 21:13; Jer. 25:24; Ezek. 27:21; Gal. 1:17; Gal. 4:25. See ARABIANS. We read that Abraham sent the sons of Keturah and of his concubines "eastward, to the east country." Gen. 25:6. There were also the descendants of Ishmael and those of Esau. Many of these became 'princes,' and there can be no doubt that their descendants still hold the land. There are some who call themselves Ishmaelite Arabs, and in the south there are still Joktanite Arabs. We read of Solomon receiving gifts or tribute from the kings of Arabia. 1 Kings 10:15. So did Jehoshaphat, 2 Chr. 17:11 ; but in the days of Jehoram they attacked him, plundered his house, and carried away his wives and some of his sons, 2 Chr. 21:17; 2 Chr. 22:1. They were defeated by Uzziah. 2 Chr. 26:7. During the captivity some Arabians became settlers in Palestine and were enemies to Nehemiah. Cf. Neh. 2:19; Neh. 4:7; Neh. 6:1. Among the nations that had relations with Israel, and against whom judgement is pronounced are the Arabians. Isa. 21:13-17; Jer. 25:24. And doubtless they will be included in the confederacies that will be raised against God's ancient people when Israel is again restored to their land. Cf. Ps. 83. In the N.T. 'Arabians' were present on the day of Pentecost, but whether they were Jews or proselytes is not stated. Acts 2:11. 1. A royal city of the Canaanites, in the south, near Mount Hor, whose king fought against Israel, but who was by the help of God destroyed, both he and his people. Num. 21:1-3; Num. 33:40; Joshua 12:14; Judges 1:16. (In the two passages in Numbers read 'the Canaanite king of Arad.') It is identified with Tell Arad, 31 17' N, 35 7' E. 2. Son of Beriah, a descendant of Benjamin. 1 Chr. 8:15. 1. Son of Ulla, a descendant of Asher. 1 Chr. 7:39. 2. Father of a family who returned from exile. Ezra 2:5; Neh. 7:10. 3. A Jew whose grand-daughter married Tobiah the Ammonite, who greatly hindered the building of the city Neh. 6:18. 1. Son of Shem. Gen. 10:22, 23; 1 Chr. 1:17. 2. Son of Kemuel, Abraham's nephew. Gen. 22:21. 3. Son of Shamer, of the tribe of Asher. 1 Chr. 7:34. 4. Son of Esrom, and father of Aminadab. Matt. 1:3, 4; Luke 3:33: called RAM, Ruth 4:19; 1 Chr. 2:9, 10. 5. Place in the land of Gilead, east of the Jordan, which Jair captured. 1 Chr. 2:23. This is the name of a large district lying north of Arabia, north-east of Palestine, east of Phoenicia, south of the Taurus range, and west of the Tigris. It is generally supposed that the name points to the district as the 'Highlands,' though it may be from Aram the son of Shem, as above. The word occurs once untranslated in Num. 23:7, as 'Aram' simply, from whence Balaam was brought, 'out of the mountains of the east;' but it is mostly translated Syria or Syrian. Thus we have - 1. ARAM-DAMMESEK, 2 Sam. 8:5, translated 'Syrians of Damascus,' embracing the highlands of Damascus including the city. 2. ARAM-MAACHAH, 1 Chr. 19:6, translated 'Syria-maachah,' a district on the east of Argob and Bashan. 3. ARAM-BETH-REHOB, 2 Sam. 10:6, translated 'Syrians of Beth-rehob: cf. Judges 18:28, a district in the north, near Dan. 4. ARAM-ZOBAH, 2 Sam. 10:6, 8, translated 'Syrians of Zoba,' a district between and Damascus, but not definitely recognised. 5. ARAM-NAHARAIM signifying 'Aram of two rivers,' Gen. 24:10; Deut. 23:4; Judges 3:8; 1 Chr. 19:6, translated 'Mesopotamia.' The two rivers are the Euphrates and the Tigris. The district would be the highlands from whence the rivers issue to the plain, and the district between the two rivers without extending to the far south. This word occurs 2 Kings 18:26; Ezra 4:7; and Isa. 36:11, where it is translated 'the Syrian language' or 'tongue;' also in Dan. 2:4, where it is 'Syriack.' Aramaic is the language of Aram, and embraces the language of Chaldee and that of Syria. Mesopotamia, Babylonia and Syria were its proper home. The first time we meet with it in scripture is in Gen. 31:47, where Laban called the heap of witness 'Jegar-sahadutha,' which is Chaldee; whereas Jacob gave it a Hebrew name, 'Galeed.' In 2 Kings 18:26; Isa. 36:11 the heads of the people asked Rab-shakeh to speak to them in Aramaic that the uneducated might not understand what was said. In Ezra 4:7 the letter sent to Artaxerxes was written in Aramaic, and interpreted in Aramaic, that is, the copy of the letter and what follows as far as Ezra 6:18 is in that language and not in Hebrew. So also is Ezra 7:12-26. In Daniel 2:4 the Chaldeans spoke to the king in Aramaic, the popular language of Babylon, and what follows to the end of chap. 7: is in that language, though commonly called Chaldee. This must not be confounded with the 'learning and the tongue of the Chaldeans' in Dan. 1:4, which is the Aryan dialect and literature of the Chaldeans, and probably the ordinary language which Daniel spoke in the court of Babylon. Jer. 10:11 is a verse in Aramaic. This language differs from the Hebrew in that it avoids the sibilants. Where the Hebrew has ז z, שׁ sh, צ tz, the Aramaic has ד d, ת th, and ט t. Letters of the same organ are also interchanged, the Aramaic choosing the rough harder sounds. The latter has fewer vowels, with many variations in the conjugation of verbs, etc. When the ten tribes were carried away, the colonists, who took their place, brought the Aramaic language with them. The Jews also who returned from Babylon brought many words of the same language. And, though it doubtless underwent various changes, this was the language commonly spoken in Palestine when our Lord was on earth, and is the language called HEBREW in the N.T., and is the same as the Chaldee of the Targums. In the ninth century the language in Palestine gave way to the Arabic, and now Aramaic is a living tongue only among the Syrian Christians in the district around Mosul. A female belonging to Aram. 1 Chr. 7:14. Descendant of Seir the Horite. Gen. 36:28; 1 Chr. 1:42. A kingdom which was called upon by God, in conjunction with Medes, Persians, and others, under one captain, Cyrus, to punish Babylon in revenge of Israel. Jer. 51:27. It is identified with Urartu or Urardhu of the Assyrian inscriptions, a district in Armenia, in which is Mount Ararat, on some part of which the ark of Noah rested. Gen. 8:4. The mount is situate 39 45' N, 44 28' E, and its extreme height is about 17,000 feet above the sea, covered with perpetual snow. Objection has been taken to its great height, but it may not have been on its highest part that the ark rested. The Jebusite from whom David purchased the place on which to build the altar of the Lord. 2 Sam. 24:16-24. Called ORNAN in 1 Chr. 21:15-28. In Samuel it is stated that David bought the threshing floor and the oxen for fifty shekels of silver. He there built an altar, and offered burnt offerings and peace offerings, without anything being said of his building a house for the Lord on the spot: whereas in Chronicles David gave to Ornan 600 shekels of gold by weight for the place. In 2 Chr. 3:1, 2 we learn that the threshing floor was on Mount Moriah, and that the site was prepared by David for the temple, which was built by Solomon. Doubtless therefore 'the place' included a much larger area than was needed for David's altar, and perhaps included the homestead of Araunah. This no doubt formed a part of what is now called the Temple area, or Mosque enclosure, in the S.E. of Jerusalem, but on what part of that area the temple was built is not known. Arba, Arbah. [Ar'ba, Ar'bah] Father of Anak, head of the Anakim, who were also giants. Num. 13:33. Their city was Hebron. Gen. 35:27; Joshua 14:15; Joshua 15:13; Joshua 21:11. The 'city of Arba' is elsewhere called KIRJATH-ARBA, which was afterwards called HEBRON. Native of the northern Arabah, or el-Ghor. 2 Sam. 23:31; Chr. 11:32. Designation of Paarai, one of David's mighty men. 2 Sam. 23:35. The word elam occurs only in Ezek. 40:21-36, and in the A.V. is translated 'arch;' but this is judged not to be its meaning, though it is not at all certain as to what it really refers. In the margin it reads, 'galleries' or 'porches,' elsewhere 'vestibule,' and again 'projection.' Son of Herod the Great by Malthace, a Samaritan. He succeeded his father as Ethnarch of Idumea, Judaea, Samaria, and the maritime cities of Palestine. From his known oppressive character Joseph feared to bring back the infant Jesus into his territory, and turned aside to Galilee, which was under the jurisdiction of his brother Antipas. Matt. 2:22. He reigned 10 years. Josephus relates that soon after his accession he put to death 3,000 Jews: eventually, for his tyranny to the Jews and the Samaritans he was deposed and banished to Vienne in Gaul. People removed from Assyria to Samaria. They joined in the petition to Artaxerxes against the Jews. Ezra 4:9. The origin of the name is unknown. City on the border of Ephraim. Joshua 16:2. Identified with Ain Arik, 31 54' N, 35 8' E. A Christian teacher at Colosse, whom Paul calls his fellow soldier, and exhorts to fulfil his ministry. Col. 4:17; Philemon 2. The designation of Hushai, David's friend. 2 Sam. 15:32; 2 Sam. 16:16; 2 Sam. 17:5, 14; 1 Chr. 27:33. The word ash or aish has always been a difficult one to translate, the versions differing much; but it is now pretty well agreed that the allusion is not to the star known as Arcturus, but to the constellation known as the Great Bear; 'his sons' are supposed to be the stars in the tail of the bear. In the northern hemisphere this constellation is seen all the year round, with its apparent ceaseless motion around the north star, which none but the mighty God can guide. Job 9:9; Job 38:32. It is translated 'the Bear' in the R.V. 1. Son of Benjamin. Gen. 46:21. 2. Son of Bela, son of Benjamin (called ADDAR in 1 Chr. 8:3), whose descendants are ARDITES. Num. 26:40. Son of Caleb, son of Hezron. 1 Chr. 2:18. Areli, Arelites. [Are'li, Are'lites] Son of Gad, and his descendants. Gen. 46:16; Num. 26:17. One connected with the court of Areopagus at Athens, where Dionysius heard Paul and "clave to him and believed." Acts 17:34. Areopagus, or Mars Hill. [Areop'agus, or Mars' Hill] The hill of Ares, or Mars. Here was held the highest and most ancient and venerable court of justice in Athens for moral and political matters. It was composed of those who had held the office of Archon unless expelled for misconduct. Paul, who had been disputing daily in the market place, was conducted by some of the Epicurean and Stoic philosophers to Mars' Hill, not for any judicial purpose, but doubtless that they might hear him more quietly. Here he delivered his address respecting God, so suited to the heathen philosophers who heard him, and which was not without its fruit. Acts 17:19. The Greek words are Areios-pagos, but are translated Mars' Hill in Acts 17:22. The court was situate on a rocky hill opposite the west end of the Acropolis. Sixteen stone steps still lead up to the spot. The common appellation (like Pharaoh for Egyptian kings) of the Arabian kings of the northern part of Arabia. The deputy of Aretas in Damascus sought to arrest Paul. 2 Cor. 11:32. This king, who was father-in-law to Herod Antipas, made war against him for divorcing his daughter, and defeated him. Vitellius, governor of Syria was ordered to take Aretas dead or alive; but Tiberius died before this was accomplished. Caligula, who succeeded to the empire, banished Antipas. He made certain changes in the East, and it is supposed that Damascus was detached from the province of Syria and given to Aretas. 1. A district lying to the south of Damascus and which formed a part of Bashan, where the giants resided. It had at one time 60 cities, which were ruled over by Og. Its name signifies 'stony' and it forms a remarkable plateau of basalt, which rises some 30 feet above the surrounding fertile plain, and extends 22 miles N. and S. and 14 miles E. and W., the boundary line being marked by the Bible word chebel, which signifies 'as by a rope.' Og was conquered by Moses, and Jair of Manasseh took the fortified cities, and it became a part of Manasseh's lot. Later it was called Trachonitis, and is now known as el-Lejah. There are many houses still in the district which, because of their massive proportions, are supposed to have been built by the giants. Deut. 3:3, 4, 13, 14; 1 Kings 4:13. 2. One, apparently in the service of Pekahiah, killed by Pekah. 2 Kings 15:25. Son of Haman, slain and hanged. Esther 9:9. Son of Haman, slain and hanged. Esther 9:8. One, apparently in the service of Pekahiah, killed by Pekah. 2 Kings 15:25. 1. Symbolical name of Jerusalem, signifying 'Lion of God,' probably in reference to the lion being the emblem of Judah. Isa. 29:1, 2, 7. In the margin of Ezek. 43:15, the altar is called the 'lion of God;' but the word is slightly different and is translated by some the 'hearth of God,' the place for offering all sacrifices to God. 2. One whom Ezra sent to Iddo at Casiphia. Ezra 8:16. 3. In 2 Sam. 23:20; 1 Chr. 11:22, we read that Benaiah slow two 'lion-like men,' which some prefer to translate 'two [sons] of Ariel.' The Hebrew is literally 'two lions of God.' The city of Joseph, the 'honourable counsellor,' who was permitted by Pilate to take down the body of the Lord and bury it in his own new to tomb. Matt. 27:57; Mark 15:43; Luke 23:51; John 19:38. It has not been identified, but has been supposed to be the same as Ramah, the birth-place of Samuel. 1. King of Ellasar in the East. Gen. 14:1, 9. 2. Captain of Nebuchadnezzar's guard. Dan. 2:14, 15, 24, 25. Son of Haman the Agagite, slain and hanged. Esther 9:9. A Macedonian of Thessalonica, companion of Paul on several journeys and on his way to Rome. Paul once calls him 'my fellow prisoner.' Acts 19:29; Acts 20:4; Acts 27:2; Col. 4:10; Philemon 24. A resident at Rome whose household Paul saluted Rom. 16:10. Ark of God. This is also called 'ARK OF THE COVENANT,' 'ARK OF THE TESTIMONY,' 'ARK OF JEHOVAH.' The sacred chest belonging to the Tabernacle and the Temple. It was made of shittim wood, overlaid within and without with pure gold. It was 2-1/2 cubits long, 1-1/2 cubits in breadth, and the same in height, with a crown or cornice of gold. On each side were rings of gold in which were inserted the staves by which it was carried. Its lid, on which were the two cherubim made wholly of gold, was called the MERCY-SEAT, q.v. The ark was typical of Christ, in that it figured the manifestation of divine righteousness (gold) in man; the mercy-seat was Jehovah's throne, the place of His dwelling on earth. In the ark were placed the two tables of stone (the righteousness demanded by God from man), and afterwards the golden pot that had manna, and Aaron's rod that budded. For the place of the ark and the manner of its being moved see the TABERNACLE. In the first journey of the children of Israel from Mount Sinai the ark of the covenant went before them to "search out a resting place for them," type of God's tender care for them. When the ark set forward Moses said, "Rise up, Lord, and let thine enemies be scattered;" and when it rested he said, "Return, O Lord, unto the many thousands of Israel." Num. 10:33-36. When they arrived at Jordan, the ark was carried by the priests 2000 cubits in front of the host that they might know the way they must go, Joshua 3:3, 4, and the ark remained on the shoulders of the priests in the bed of the river, until all had passed over. Joshua 3:17. This typifies association with Christ's death and resurrection. The ark accompanied them in their first victory: it was carried by the priests around Jericho. It is only in the power of Christ in resurrection that the saint can be victorious. The tabernacle was set up at Shiloh, and doubtless the ark was placed therein, Joshua 18:1, though it may have been carried elsewhere. In Eli's days when Israel was defeated they fetched the ark from Shiloh that it might save them, but they were again defeated, and the ark, in which they had placed their confidence instead of in Jehovah, was seized by the Philistines. 1 Sam. 5:1. When put into the house of their god Dagon the idol fell down before it on two occasions, and on the second was broken to pieces. Subsequently it was taken from Ashdod to Gath, and from Gath to Ekron, and the people were smitten by the hand of God in each city. After seven months a new cart was made, to which two milch kine were yoked, and the ark sent back to the Israelites with a trespass offering to the God of Israel. The kine, contrary to nature, went away from their calves, and went direct to Beth-shemesh, for it was God who restored the ark. There God smote the men of the place for looking into the ark. It was then taken to Kirjath-jearim and placed in the house of Abinadab. 1 Sam. 6; 1 Sam. 7:1, 2. See ABINADAB. In after years David fetched the ark from thence on a new cart, but the ark being shaken, Uzzah put forth his hand to steady it, and was smitten of God. This frightened David and the ark was carried aside to the house of Obed-edom. The law had directed how the ark was to be carried, and the new cart was following the example of the Philistines: Uzzah disregarded God's plain direction and heeded not the sacredness of that which represented the presence of God. David however, hearing that God had blessed the house of Obed-edom, again went for the ark, and now it was carried by the Levites according to divine order, and with sacrifices and rejoicing it was placed in the tabernacle or tent that David had pitched for it. 2 Sam. 6. When Solomon had built the temple, the ark was removed thither, and the staves by which it had been carried were taken out: the ark had now found its resting place in the kingdom of Solomon, whose reign is typical of the millennium. It is significant too that now there were only the two tables of stone in the ark, 1 Kings 8:1-11: the manna had ceased when they ate of the old corn of the land, which is typical of a heavenly Christ; and the witness of Aaron's rod was no longer needed now they were in the kingdom. The wilderness circumstances, in which the manna and the priesthood of Christ were so necessary, were now passed. These are both mentioned in Heb. 9:4, for there the tabernacle, and not the temple is in contemplation. No further mention is made of the ark: it is supposed to have been carried away with the sacred vessels to Babylon, and to have never been returned: if so there was no ark in the second temple nor in the temple built by Herod, nor do we read of the ark in connection with the temple described by Ezekiel. In Rev. 11:19 the ark of God's covenant is seen in the temple of God in heaven: symbol here of the resumption of God's dealings with His earthly people Israel. Ark of Noah. The vessel constructed by the command of God, by which Noah and his household and some of every living creature of the earth were saved when the world was destroyed by the flood. Precise instructions were given by God as to the construction of the ark. It was to be made of 'gopher' wood, a kind known at the time, but which cannot now be identified with certainty; and it was to be pitched within and without with pitch, or bitumen, to make it water-tight. Its proportions were to be 300 cubits long, 50 cubits broad, and 30 cubits high. If the cubit be taken at 18 inches, its length would have been 450 feet, its breadth 75 feet and its height 45 feet. If the cubit used had been 21 inches, the dimensions would be one-sixth larger. A window was to be made to the ark. Gen. 6:16. The word tsohar signifies 'a place of light' and was probably placed in the roof, and may have served in some way for ventilation as well as for giving light. Another word for window is used in Gen. 8:6 (challon) which could be opened from the inside. This word is used for the windows or casements of houses, and would give ventilation. In Gen. 6:16, after speaking of the window, it says, "and in a cubit shalt thou finish it above;" it is a question whether this refers to the size of the window or whether the word 'it' refers to the ark. It has been said that the feminine suffix, which is rendered 'it' cannot refer to the word window, which is masculine: so that it is possible the cubit refers to the roof; that the middle of the roof should be raised, giving a cubit for the pitch of the roof. A door was to be made in the side of the ark; and the ark was to be divided into three stories. 'Rooms,' or 'nests' (margin) are also mentioned. Gen. 6:14. Such is the description given us of the form of the ark. It was by faith Noah prepared the ark, by which he condemned the world, and became heir of the righteousness which is by faith. Heb. 11:7. It is thus referred to in 1 Peter 3:20, 21, "into which few, that is, eight souls, were saved through water: which figure also now saves you, [even] baptism, not a putting away of [the] filth of flesh, but [the] demand as before God of a good conscience, by [the] resurrection of Jesus Christ." It may just be added that the form of the ark was not intended for navigation amid storms and billows, but it was exactly suited for the purpose for which it was constructed. A ship for freight was once made in like proportions, to be used in quiet waters, and was declared to be a great success. Various questions have been raised as to the veracity of the Bible account of the Deluge, for which see FLOOD. Ark of Bulrushes. The little boat or cradle in which Moses was placed by his mother. It was made of bulrushes, or rather paper-reeds or papyrus which grew in the river Nile. It was daubed with slime and with pitch, that is, most probably first covered with wet earth or clay, and then with bitumen. Ex. 2:3, 5. Some of the heathen writers speak of the papyrus-woven craft of the Nile. God answered the faith of the parents, and Moses was drawn out of the water to be the saviour of His people. Tribe descended from Canaan, son of Ham; it probably resided in Arca, in the north of Phoenicia, about 15 miles north of Tripoli, now called Tell Arka. Gen. 10:17; 1 Chr. 1:15. The member of the body which is capable of lifting burdens and defending the person: it is used symbolically for the power and strength of God on behalf of His saints. Ex. 15:16; Ps. 77:15; Isa. 51:9; Isa. 53:1. The arm of Jehovah is often spoken of in the O.T. It redeemed, Ex. 6:6; etc.; gathers His own, Isa. 40:11; and rules for Him, Isa. 40:10, as in the kingdom. It is a holy arm, Isa. 52:10; Ps. 98:1; and it is a glorious arm, Isa. 63:12. The arm of the Lord is revealed to souls where there is repentance and faith in the report which God sends. Isa. 53:1; Rom. 10:16. It is to be trusted in even by the isles of the Gentiles, that is, by sinners everywhere in creation. Isa. 51:5. The Hebrew name of the place where the kings of the earth and of the whole world will be gathered together to make war against the Lord Jesus in the great day of Almighty God. Rev. 16:16. There seems to be an allusion to the great battle field of Palestine in the Esdraelon, and to the Megiddo mentioned in Judges 5:19; 1 Kings 4:12; 2 Kings 23:29, 30. The word itself is translated 'the mountain of slaughter,' and may be used symbolically for the destruction that will surely fall upon the enemies of the Lord Jesus. This name occurs in the A.V. in 2 Kings 19:37; Isa. 37:38, as the place to which two sons of Sennacherib fled after killing their father; but in both these passages the Hebrew word is Ararat. Armenia occurs in the LXX in the passage in Isaiah. Armenia lies west of the Caspian Sea, and extends northward of 38 N. lat. It is now partly in the Russian and partly in the Turkish empires. Son of Saul and Rizpah, hanged by the Gibeonites. 2 Sam. 21:8. None of the Hebrew words translated 'armour' refer definitely to what is understood now by armour worn on the person. Saul armed David with his 'armour,' 1 Sam. 17:38, but the word used is also translated 'clothes,' etc., and it may refer to Saul's warrior-dress. The articles named are somewhat more definite. 1. Saul put on David a 'HELMET of brass.' These were raised a little above the head, as may be seen by some of the sculptures from Nineveh. 1 Sam. 17:38; Ezek. 23:24: the word is qoba. Another word, koba, meaning the same, is found in 1 Sam. 17:5; 2 Chr. 26:14; Isa. 59:17; Jer. 46:4; Ezek. 27:10; Ezek. 38:5. 2. COAT OF MAIL. Saul put on David a 'Coat of Mail,' shiryon. 1 Sam. 17:5, 38. This word is translated 'HABERGEON ' in 2 Chr. 26:14 ; Neh 4:16, which also signifies 'coat of mail,' and there is a similar word in Job 41:26. It was made of brass scales fastened together. The weight of Goliath's coat of mail was 5,000 shekels. 3. GREAVES. The giant wore Greaves of brass upon his legs. 1 Sam. 17:6. The word is mitschah, and occurs nowhere else. 4. TARGET. He had a Target of brass between his shoulders, 1 Sam. 17:6: the word is kidon, and is elsewhere translated both 'shield' and 'spear.' In this case it was probably a small spear carried between the shoulders. 5. SHIELD. A Shield was carried before him. This was a tsinnah, a shield of large size to protect the whole body, with a large boss in the centre rising to a point which could be used as a weapon. It is employed figuratively for God's protecting care of His people. Ps. 5:12; Ps. 91:4. The same word is translated BUCKLER. Ps. 35:2; Ezek. 23:24; Ezek. 26:8, etc. Another word is used for a smaller shield, magen, and this is the word which occurs most commonly in the O.T., especially in the Psalms, referring to God's protection, as Ps. 28:7; Ps. 33:20; Ps. 84:11; Ps. 119:114, etc. The same word is translated BUCKLER. 2 Sam. 22:31; 1 Chr. 5:18; Cant. 4:4; Jer. 46:3, etc. The word shelet is translated Shield, but is also applied to Shields of gold, 2 Sam. 8:7, and those suspended for ornament. Ezek. 27:11. It occurs also in 2 Kings 11:10; 1 Chr. 18:7; 2 Chr. 23:9; Cant. 4:4; Jer. 51:11. In the N.T. 'armour' is used symbolically. 1. ὅπλα in contrast to 'the works of darkness' we are exhorted to put on 'the armour of light.' Rom. 13:12. Paul and his fellow-labourers commended themselves as God's ministers by the "armour, or arms, of righteousness on the right hand and on the left." 2 Cor. 6:7. 2. πανοπλία, 'whole armour.' One stronger than Satan takes away all his 'armour.' Luke 11:22. The Christian is exhorted to put on the 'whole armour of God,' the panoply, that he may stand in the evil day in his conflict with the spiritual powers of wickedness in the heavenlies. Eph. 6:11, 13. See BREASTPLATE, HELMET, etc. An attendant on a warrior, filling a place of trust and honour. When Saul loved David he made him his armourbearer. 1 Sam. 16:21. On Saul being wounded, his armourbearer refused to kill him; but when Saul was dead the armourbearer fell upon his sword and died also. 1 Sam. 31:5. In Neh. 3:19 the word is nesheq also translated 'armour.' In Cant. 4:4 it is talpiyyoth, 'armoury' or heap of swords. In Jer. 50:25 it is otsar, signifying 'treasury.' The offensive arms found in the O.T. are: 1. The SWORD, for which several Hebrew words are used: a. baraq, often translated 'lightning;' it is 'glittering sword' in Job 20:25. b. chereb, a sword, as laying waste. It is the word commonly used in the O.T. for sword (everywhere indeed except in the references given here under the other words): it was a straight tapering weapon, with two edges and a sharp point. Ps. 149:6; Isa. 14:19. It is used metaphorically for keen and piercing words, as in Ps. 57:4; Ps. 64:3. c. retsach, an undefined slaying weapon, translated 'sword' only in Ps. 42:10. d. shelach, a missile of death, as a dart. Job 33:18; Job 36:12; Joel 2:8. e. pethichoth, from 'to open,' is translated 'drawn sword' in Ps. 55:21. 2. SPEARS. a. chanith, thus named as being flexible: it is the word mostly used for the spear. 1 Sam. 13:19; Ps. 57:4. It is this weapon that will be beaten into pruning hooks. Isa. 2:4; Micah 4:3. b. kidon, a smaller kind of lance, or javelin. Joshua 8:18, 26; Job 41:29; Jer. 6:23. c. tselatsal, harpoon. Job 41:7. d. qayin, lance, 2 Sam. 21:16. e. romach, spear used by heavy-armed troops, the iron head of a spear. Judges 5:8, etc. The pruning hooks are to be beaten into spears in the time of God's judgements. Joel 3:10. 3. BOW, from which arrows are discharged, qesheth, generally made of wood, but sometimes of steel or brass. Job 20:24. It is constantly found in the O.T. from Genesis to Zechariah. It is used to express punishment from God, Lam. 2:4; Lam. 3:12; and of men to show their power to injure. Ps. 37:14, 15. 'A deceitful bow' expresses a man who fails just when his aid is most needed, as when a bow breaks suddenly. Ps. 78:57; Hosea 7:16. 4. The SLING, by which stones are discharged, qela. It was by means of this that David smote Goliath. 1 Sam. 17:40, 49, 50. Of the Benjamites there were 700 men lefthanded; "every one could sling stones at an hair breadth, and not miss." Judges 20:16. (In Prov. 26:8 occurs another word for sling margemah, but the passage is considered better translated "as he that putteth a precious stone in a heap of stones," as in the margin.) 5. 'ENGINES,' with which Uzziah shot arrows and great stones. 2 Chr. 26:15. It must be remembered that Israel were the hosts of Jehovah, keeping His charge and fighting His battles. Ex. 12:41; Joshua 5:14. It appears that all who reached the age of twenty years were contemplated as able to bear arms, Num. 1:3; and they marched and encamped in 4 divisions of 3 tribes each, with a captain over every tribe. The subdivisions were into thousands and hundreds, Num. 31:14, and into families. Joshua 7:17. There were also trumpet calls, Num. 10:9 (cf. 1 Cor. 14:8), and all the appearance of careful organisation. Until the time of the kings this natural or tribal organisation seems to have been usual, but in the time of Saul there was a body guard, 1 Sam. 13:2, and a captain of the host, 1 Sam. 17:55. In David's days those heroes who were with him in the cave of Adullam formed the nucleus of his 'mighty men.' 2 Sam. 23:8-39. They were devoted to the service of God's king. David afterwards organised a monthly militia of 24,000 man under 12 captains. 1 Chr. 27:1-15. The general gradation of ranks was into privates; 'men of war;' officers; Solomon's 'servants;' captains or 'princes;' and others variously described as head captains, or knights or staff officers; with rulers of his chariots and his horsemen. 1 Kings 9:22. It may be noticed that horses having been forbidden, Deut. 17:16, it was not until Solomon's time that this was organised, though David had reserved horses for a hundred chariots from the spoil of the Syrians. 2 Sam. 8:4. Solomon, trading with Egypt, 1 Kings 10:28, 29, enlarged their number until the force amounted to 1,400 chariots, and 12,000 horsemen, 1 Kings 10:26; 2 Chr. 1:14. Every able man being a soldier gave David the immense army of 1,570,000 men that 'drew sword.' 1 Chr. 21:5. After the division, Judah under Abijah had an army of 400,000 'valiant men,' and Israel at the same time of 800,000 'chosen men.' Afterwards Asa had 580,000 'mighty men of valour;' and Jehoshaphat, who had waxed great exceedingly, had as many as 1,160,000 men, besides those left in the fenced cities. 2 Chr. 17:14-19. In the N.T. a few references are made to the Roman army. A 'Legion' was a body that contained within itself all the gradations of the army. It might be called under the empire, in round numbers, a force of not more than 6,000 men. Every legion at times contained 10 cohorts of 600 each; every cohort 3 maniples of 200; and every maniple 2 centuries of 100: hence the name of centurion or commander of 100 men, as found in Acts 10:1, 22, etc. Each legion was presided over by 6 chiefs, χιλίαρθος, each commanding 1,000 men, mostly translated 'chief captain,' as in Acts 21:31-37, etc.: it is 'high captain' in Mark 6:21; and 'captain' in John 18:12; Rev. 19:18. A cohort, σπεῖρα, is translated 'band' in Acts 10:1; Acts 21:31, etc. A 'quaternion' embraced 4 soldiers. Acts 12:4. The head quarters of the Roman troops was at Caesarea, with a cohort at Jerusalem; but at the time of the feast, when, alas, the mutinous disposition of the Jews was sure to appear, additional troops were present in the city but without their standards of the eagle, etc., which were especially obnoxious to the Jews. Though the Romans were God's rod to punish them, their stiff necks could not bow, nor receive the punishment as from Jehovah. Descendant of David. 1 Chr. 3:21. Ravine or wady with its mountain torrent, which formed the border between Moab and Ammon, now known as Wady Mojib. It has sources both north and south which unite, and its stream running nearly east and west, rushes through a deep ravine and falls into the Dead Sea at about its centre north and south. Num. 21:13-28; Num. 22:36; Deut. 2:24, 36; Judges 11:13-26; Isa. 16:2; Jer. 48:20; etc. Arod, Arodi, Arodites. [A'rod, Aro'di, Aro'dites] Son of Gad, and his descendants. Gen. 46:16; Num. 26:17. 1. City 'before Rabbah,' that is, near Rabbath Ammon, in the valley of the Jabbok, built or rebuilt by the tribe of Gad. Num. 32:34; Joshua 13:25; 2 Sam. 24:5. 2. Moabite city on the north bank of the Arnon. Deut. 2:36; Joshua 13:9, 16; Judges 11:26; 2 Kings 10:33. Identified with Arair, 31 27' N, 35 43' E. 3. District near Damascus. Isa. 17:2. 4. City in Judah, S.E. of Beersheba. 1 Sam. 30:28. Identified with Ararah, 31 11' N, 34 56' E. Designation of Hothan, father of two of David's captains. 1 Chr. 11:44. Arpad, Arphad. [Ar'pad, Ar'phad] Fortified city near Hamath. 2 Kings 18:34; 2 Kings 19:13; Isa. 10:9; Isa. 36:19; Isa. 37:13; Jer. 49:23. Son of Shem, born two years after the flood, from whom Abraham descended. Gen. 10:22, 24; Gen. 11:10-13; 1 Chr. 1:17,18, 24. Stated as the father of Cainan in Luke 3:36. See CAINAN. With the bow, a common weapon of the ancients. We know not of what wood the arrows of the Israelites were made. Apparently the arrows were sometimes poisoned. Job 6:4; Ps. 120:4; Num. 24:8; Deut. 32:23, etc. Arrows are used metaphorically for the judgements of God, Ps. 38:2; Ps. 45:5: also for anything sharp and painful, as smiting by the tongue. Jer. 9:8. 1. Persian king, identified as the magian impostor who pretended to be Smerdis the brother of Cambyses. When appealed to by the adversaries of the Jews, he stopped the building of the temple. He was slain after a reign of eight months. Ezra 4:7, 8, 11, 23. 2. Another Persian king identified as Artaxerxes Longimanus B.C. 474-434, son of Xerxes, the Ahasuerus of Esther. He greatly favoured both Ezra and Nehemiah; he beautified the temple or bore the expense of its being done, Ezra 7:27, and under his protection the wall of the city was finished. Ezra 6:14; Ezra 7:1-21; Ezra 8:1; Neh. 2:1; Neh. 5:14; Neh. 13:6. It was in the 20th year of this king that the command to build the city was given, from which began the dates of the prophecy of the Seventy weeks of Daniel, which is fixed by Usher and Hengstenburg at B.C. 454-5. For the succession of the Persian kings see PERSIA. Companion of Paul at Nicopolis. Titus 3:12. Name of the heathen goddess Diana, as given in the Greek of Acts 19:24-35: she was regarded as presiding over the productive and nutritive powers of nature. A general name for skilled artisans, whether in metal, stone, or wood. Tubal-cain was the first named as an artificer in brass and iron. Jubal was the father of all such as handled, or invented and made, the harp and the organ. Cain also built a city. Gen. 4:17, 21, 22. In the above we see the application of the arts by man at a distance from God to promote their own welfare in independence of God. In after times the spirit of wisdom was given to Bezaleel for the work of the tabernacle in "all manner of workmanship." Ex. 35:31: cf. also 1 Chr. 29:5; 2 Chr. 34:11. It would seem that the Jews never afterwards lost this skill, as the remains of the walls of Jerusalem indicate. Nebuchadnezzar carried off all the craftsmen (same word as artificers) and smiths from Jerusalem, 2 Kings 24:14, and he may have made use of their skill to adorn Babylon. A general term for tools, armour, etc. In 1 Sam. 20:40 it refers to the bow and arrows Jonathan had used. The third commissariat district of Solomon, probably the rich corn-growing country in the Shephelah or low hills of Judah. 1 Kings 4:10. City or district apparently near Shechem, the abode of Abimelech. Judges 9:41. Identified with el-Ormeh, 32 9' N, 35 19' E. Island on the Phoenician coast: now called Ruad, about 34 51' N, 35 52' E. Ezek. 27:8, 11. Family name of one of the sons of Canaan. Gen. 10:18; 1 Chr. 1:16: doubtless connected with the island of Arvad. Steward of Elah, king of Israel. 1 Kings 16:9. 1. Great grandson of Solomon and king of Judah, B.C. 955-914. "Asa did that which was right in the sight of the Lord, as did David his father." He removed the idols his fathers had made, 1 Kings 15:11, and he deposed Maachah, his mother, or perhaps grandmother, from being queen because she favoured idolatry. On the country being invaded by the Ethiopians with a million troops and 300 chariots, he cried to the Lord, who fought for him, and the enemy was smitten. He was counselled by Azariah not to forsake the Lord, which led to the spoil being offered to God, and to the king and his people entering into a covenant to seek the Lord. Subsequently Asa was threatened by Baasha king of Israel who began to build Ramah, a fortified city only a few miles from Jerusalem. To stop this Asa paid a large sum of money to Benhadad king of Syria to invade Israel. This was for the time successful: the building of Ramah was stopped, and Asa carried away the stones thereof and built Geba and Mizpah. This recourse for aid to the king of Syria, who was an idolater, was very displeasing to God, and the king was rebuked by Hanani the seer. While Asa trusted in the Lord he had deliverance, but having relied on the king of Syria, he should have war all his days. Asa, alas, did not humble himself, but put Hanani in prison, and oppressed some of the people. He was disciplined in his person, for he was diseased in his feet, and the disease increased exceedingly; yet he sought not the Lord, but to the physicians (perhaps these were healers by magic arts in connection with idolatry, on which God's blessing could not be asked) and he died after a reign of 41 years. 1 Kings 15.; 2 Chr. 14, 15, 16.; Matt. 1:7, 8. 2. A Levite, the father of Berechiah. 1 Chr. 9:16. 1. Nephew of David, being son of his sister Zeruiah; he was a valiant man and one of David's captains; was slain by Abner while pursuing him. 2 Sam. 2:18-32; 2 Sam. 3:27, 30; 1 Chr. 11:26; 1 Chr. 27:7. 2. Levite sent by Jehoshaphat to teach the law in the cities of Judah. 2 Chr. 17:8. 3. Levite in Hezekiah's time, an overseer of tithes, etc. 2 Chr. 31:13. 4. Father of Jonathan who returned from exile. Ezra 10:15. Asahiah, Asaiah. [Asahi'ah, Asai'ah] 1. An officer sent by Josiah to Huldah the prophetess after the book of the law had been found. 2 Kings 22:12, 14; 2 Chr. 34:20. 2. Descendant of Simeon. 1 Chr. 4:36. 3. Descendant of Merari. 1 Chr. 6:30. 4. A Shilonite who became a dweller in Jerusalem. 1 Chr. 9:5. 5. Descendant of Merari who assisted in bringing up the ark from Obed-edom's house, 1 Chr. 15:6, 11 (possibly the same as No. 3). 1. A leader of the choir in David's time, and once called a 'seer.' 2 Chr. 29:30. He was descended from Gershom the Levite. 1 Chr. 6:39; 1 Chr. 15:17, 19; 1 Chr. 16:5, 7, 37, etc. Twelve psalms are attributed to him, namely, 50, 73 to 83. His office seems to have been hereditary. Ezra 2:41; Ezra 3:10; Neh. 7:44, etc. 2. Father of Joah recorder to Hezekiah. 2 Kings 18:18, 37; Isa. 36:3, 22. 3. A Levite, whose descendants dwelt in Jerusalem after the exile. 1 Chr. 9:15. 4. A Korhite, whose posterity were porters in the tabernacle in the time of David. 1 Chr. 26:1. 5. An officer, probably a Jew, controller of the forests of king Artaxerxes in Judaea. Neh. 2:8. Son of Jehaleleel, a descendant of Judah. 1 Chr. 4:16. Son of Asaph appointed by David to the service of song. 1 Chr. 25:2. Supposed by some to be the same as JESHARELAH in 1 Chr. 25:14, as noted in the margin; and by others to be the same as AZAREEL in 1 Chr. 25:18. This term is constantly applied to the return of the Lord Jesus Christ to heaven from whence He came. John 3:13. Leading His eleven apostles out as far as Bethany, on the eastern slope of the Mount of Olives, in the act of blessing them He ascended up to heaven, and a cloud hid Him from their sight. Mark 16:19; Luke 24:50, 51; Acts 1:9. The ascension of the Lord Jesus is a momentous fact for His saints: the One who bore their sins on the cross has been received up in glory, and sits on the right hand of God. As forerunner He has entered into heaven for the saints, and has been made a high priest for ever after the order of Melchisedec. Heb. 6:20. His ascension assured, according to His promise, the descent of the Holy Spirit, which was accomplished at Pentecost. John 16:7; Acts 1:4, 8; Acts 2:1-47. As ascended He became Head of His body the church, Eph. 1:22, and gave gifts to men, among which gifts are evangelists who preach to the world, and pastors and teachers to care for and instruct the saints. Ps. 68:18; Eph. 4:8-13. His ascension is a demonstration through the presence of the Holy Spirit that sin is in the world and righteousness in heaven, for the very One they rejected has been received by the Father into heaven. John 16:10. The ascension is also a tremendous fact for Satan: the prince of this world has been judged who led the world to put the Lord to death; and in His ascension He led captivity captive, having broken the power of death in which men were held, Eph. 4:8, for He had in the cross spoiled principalities and powers and made a show of them openly, triumphing over them in it. Col. 2:15. Above all, the ascension is a glorious fact for the blessed Lord Himself. Jehovah said unto Him, "Sit thou at my right hand, until I make thine enemies thy footstool." Ps. 110:1. He has taken His place as man where man never was before, and He is also glorified with the glory which He had before the world was, besides the glory which He graciously shares with His saints. John 17:5, 22. Daughter of Poti-pherah, priest of On, wife of Joseph, and mother of Manasseh and Ephraim. Gen. 41:45, 50; Gen. 46:20. The particular tree pointed out by the Hebrew word oren is not known. Isa. 44:14. The LXX and the Vulgate call it 'pine.' 1. Levitical city in Judah. Joshua 15:42; 1 Chr. 6:59: not identified. 2. City in Simeon. Joshua 19:7; 1 Chr. 4:32. See AIN. A family apparently descended from Shelah who 'wrought fine linen.' 1 Chr. 4:21. Ashbel, Ashbelites. [Ash'bel, Ash'belites] Son of Benjamin and family descended from him. Gen. 46:21; Num. 26:38; 1 Chr. 8:1. One of the five chief cities of the Philistines. It was assigned to Judah, but was not subdued by them, and thus became a thorn in their sides. Num. 33:55. It was to this city that the ark was taken by the Philistines, and where Dagon their fish-god fell before it. 1 Sam. 5:1-7. Uzziah broke down its wall, and built cities near it. 2 Chr. 26:6. It was on the high road from Palestine to Egypt which doubtless led Sargon king of Assyria to take it by his general, about B.C. 714. Isa. 20:1. Herodotus records that Psammetichus, king of Egypt, besieged it for 29 years. Jeremiah speaks of Ashdod as one of the places which was made to drink of the fury of God. Jer. 25:15-20. The Maccabees destroyed the city, but Gabinius rebuilt it at the time of the conquest of Judaea by the Romans, B.C. 55, and it was afterwards assigned on the death of Herod the Great to his sister Salome. It was situated about 3 miles from the Mediterranean, and midway between Gaza and Joppa. It is now called Esdud, or Esdood, 31 46' N, 34 40' E, and is wretched in the extreme, though lying in a fertile plain. It is called in the N.T. AZOTUS, where Philip was found after baptising the eunuch. Acts 8:40. Its inhabitants are referred to as ASHDODITES, ASHDOTHITES. Joshua 13:3; Neh. 4:7. This is once translated 'springs of Pisgah,' pointing it out as a place from whence water issued, being the sides of the mountain called Pisgah, or it may apply to the range of mountains on the east of the Dead Sea, of which Pisgah was a part. Deut. 3:17; Deut. 4:49; Joshua 12:3; Joshua 13:20. It lies due east of the north end of the Dead Sea, and is now called Ayun Musa. Asher, Aser. [Ash'er, A'ser] Eighth son of Jacob by Zilpah, Leah's handmaid. Gen. 13. The signification of the name as in the margin is 'happy.' His posterity formed one of the twelve tribes. Its portion in the land was in the extreme north, extending northward from Mount Carmel. It was bounded on the east by Naphtali, and on the south east by Zebulon. It was doubtless intended that their west border should have been the Great Sea, but we read that they did not drive out the inhabitants of Accho, Zidon, Ahlab, Achzib, Helbah, Aphik and Rehob; but the Asherites dwelt among the Canaanites. Judges 1:31, 32. This left a tract of land on the sea coast unoccupied by Asher. When Jacob called his sons about him to tell them what should befall them in the last days, he said of Asher, "Out of Asher his bread shall be fat, and he shall yield royal dainties." Gen. 49:20. When Moses ordained that certain of the tribes should stand on Mount Gerizim to bless the people, and certain others on Mount Ebal to curse, Asher was one of those chosen to stand on the latter. Deut. 27:13. And when Moses blessed the tribes before he died, he said of Asher, "Let Asher be blessed with children; let him be acceptable to his brethren, and let him dip his foot in oil. Thy shoes shall be iron and brass; and as thy days, so shall thy strength be." Deut. 33:24, 25. In Jacob's prophecy as to this tribe there is depicted the future blessing of all Israel after the salvation of the Lord has come in, announced at the close of Dan's apostasy. In Deuteronomy, what is future also as to Israel, is probably presented, but connected rather with the government of God in His hands who is King in Jeshurun. When Deborah and Barak went to the war they had to lament in their song that Asher abode by the sea coast, and came not to their aid, Judges 5:17; but when subsequently the Midianites and the Amalekites invaded the land Asher responded to the call of Gideon. Judges 6:35; Judges 7:23. At the secession of the ten tribes Asher became a part of Israel, and very little more is heard of this tribe. When Hezekiah proclaimed a solemn passover and sent invitations to the cities of Israel as well as to Judah, though many laughed the messengers to scorn, divers of Asher humbled themselves and came to Jerusalem. 2 Chr. 30:11. When numbered at Sinai there were 41,500 able to go forth to war, and when near the promised land they were 53,400; but when the rulers of the tribes are mentioned in the time of David, Asher is omitted. Num. 1:41; Num. 26:47; 1 Chr. 27:16-22. The tribe is twice referred to in the N.T. as ASER. In Rev. 7:6, twelve thousand of Asher will be sealed, and in Luke 2:36, Anna a prophetess, of the tribe of Asher, gave thanks in the temple at the birth of the Saviour. Asher is one of the tribes still to come into blessing, and have a portion in the land. Ezek. 48:2, 3. See THE TWELVE TRIBES One of the tribe of Asher. Judges 1:32. Ashes, mostly from burnt wood, were used as a sign of sorrow or mourning, either put on the head, 2 Sam. 13:19, or on the body with sackcloth, Esther 4:1; Jer. 6:26; Lam. 3:16; Matt. 11:21; Luke 10:13; or strewn on a couch on which to lie, Esther 4:3; Isa. 58:5; Jonah 3:6. To eat ashes expresses great sorrow, Ps. 102:9; and to be reduced to them is a figure of complete destruction, Ezek. 28:18; Malachi 4:3; to feed on them tells of the vanities with which the soul may be occupied. Isa. 44:20. 'Dust and ashes' was the figure Abraham used of himself before Jehovah, Gen. 18:27; and Job said he had become like them by the hand of God. Job 30:19. For the ashes of the Red Heifer see HEIFER. An idol introduced into Samaria by the colonists sent from Hamath by the king of Assyria. 2 Kings 17:30. Ashkelon, Askelon. [Ash'kelon, As'kelon] One of the five principal cities of the Philistines. It fell to the lot of Judah, who took Askelon and the coasts thereof, Judges 1:18, but they did not really subdue it, for it was in the hands of the Philistines when Samson, with the Spirit of the Lord upon him, slew thirty men in the city and took their spoil, Judges 14:19, and that it remained so we see from 1 Sam. 6:17, and 2 Sam. 1:20. The judgements of God were denounced against this city, Jer. 25:20; Jer. 47:5, 7; Amos 1:8; Zech. 9:5; and the remnant of Judah should dwell there. Zeph. 2:4, 7. The city was situated on the sea coast, midway between Gaza and Ashdod: it is now called Askulan or Askalan, 31 40' N, 34 33' E. In modern times the city was held by the Crusaders, and within its walls Richard of England held his court: the walls which this king aided with his own hands to repair may, it is thought, still be traced, and masses of masonry and broken columns of granite still lie about. By the Mahometan geographers it was called the Bride of Syria. Ashkenaz, Ashchenaz. [Ash'kenaz, Ash'chenaz] Son of Gomer, the son of Japheth, and his descendants, who settled in the vicinity of Armenia. Gen. 10:3; 1 Chr. 1:6; Jer. 51:27. 1. Town in the west of Judah near Dan. Joshua 15:33. Identified with Hasan, 31 47' N, 34 59' E. 2. Town in the low hills of Judah, probably to the S.W. of Jerusalem. Joshua 15:43. Prince of the eunuchs under Nebuchadnezzar. Dan. 1:3. Descendant of Manasseh. 1 Chr. 7:14. See ASRIEL.
<urn:uuid:5f07ac27-9b02-4f3c-b5df-b99eb1c6253b>
CC-MAIN-2013-20
http://stempublishing.com/dictionary/054_080.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96921
24,091
2.65625
3
CureVac and the German Federal Research Institute for Animal Health, Friedrich-Loeffler-Institute (FLI), Germany, reported that mRNA vaccines could induce balanced, long-lived, and protective immunity to influenza A virus infections in various animal models. In the paper (published in Nature Biotechnology on November 25), the authors described studies showing that an mRNA vaccine encoding full-length influenza A/PuertoRico/8/1934 (PR8HA) hemagglutinin (HA) was immunogenic and induced anti-influenza B- and T-cell responses in mice. They also reported that they targeted additional influenza A virus strains by sequence-matched, HA-specific vaccines, and that all vaccines induced full protection against lethal infections, including H1N1pdm09 swine flu and H5N1 bird flu virus. The mRNA vaccine was immunogenic and provided long-term protection in newborn as well as in aged mice. Further, the immunized mice were protected through an antibody-dependent mechanism against death and disease upon challenge with the influenza virus. In ferrets and pigs, mRNA vaccines induced immunological correlates of protection, and protective effects similar to those of a licensed influenza vaccine in pigs. This synthetic vaccine was developed based on CureVac’s RNActive® technology. Each vaccine comprises several different mRNAs encoding for virus-specific antigens, or tumor antigens for cancer treatment. According to the company, modified mRNA, when administered intradermally, becomes incorporated by various cells and translated into protein. Protein fragments are presented as antigens to the immune system, eliciting a balanced T-cell and B-cell immune response as observed in preclinical settings as well as in clinical trials. CureVac is currently evaluating its vaccines in clinical trials for prostate and non-small-cell lung cancers. Commenting on the potential advantages of mRNA-based vaccines, Ingmar Hoerr, Ph.D., CureVac’s CEO, said, “The synthetic nature of our RNActive vaccines reduces production time dramatically and allows for sequence-matched vaccines that can be produced quickly and reliably in a scalable process. Additionally, our vaccines can be stored at room temperature, thereby avoiding the cold-chain in contrast to all other vaccines on the market and making worldwide distribution of our vaccines logistically and financially attractive.”
<urn:uuid:c8663fd4-7e7c-405c-9c68-0be99abe8c69>
CC-MAIN-2013-20
http://genengnews.com/gen-news-highlights/mrna-vaccines-protect-animals-against-influenza/81247671/?kwrd=Influenza
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950992
493
2.859375
3
Characteristics | Technology | Wonder Qualities| Natural - Located in northwestern Arizona, - The canon was cut by the Colorado - The widths of the gorges range from one-tenth of a mile to 18 mi. - The Canon wall have a wide range of colors, including red, gray, green, pink, brown, and violet. - A wide assortments of plant and animal life can be found in the canyon. - The Colorado River flowing through the high plateaus forms the mile-deep gorge we now call the Grand - This remarkable depth was caused by the fast speed, large amount of water, and large quantities of mud, sand, and gravel of the Colorado River. - The Colorado River carries rocks at a speed of about a half a million tons a day. - The Grand Canyon has the most extensive geological records. - The rocks at the bottom of the canyon date back to 4 billion years ago. - Fossils inside the rocky slopes date back to primitive algae that predate the dinosaurs.
<urn:uuid:8c6c3365-1293-4a15-9948-74dd02749fce>
CC-MAIN-2013-20
http://library.thinkquest.org/TQ0311400/Natural_Wonders/natural_3.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.828438
230
3.53125
4
This study utilized the electronic medical records of six veterinary hospitals (operated by Banfield, The Pet Hospital®) in the vicinity of Fairburn, Georgia, to assess the health of dogs and cats following the unintentional release of propyl mercaptan from a waste-processing facility. Standardized electronic medical records were used to define clinical syndromes (eye inflammation, gastrointestinal, respiratory, fever, general weakness/change in mental state) in dogs and cats. The frequency and geographic distribution of each syndrome was evaluated before, during, and after the chemical release, using control charts, density maps, change in average mean distance from a suspected point source of chemical release, space-time statistics, and autoregressive integrated moving averages. No consistent pattern of change in syndromic events was observed following the suspected release of propyl mercaptan. Some syndromes, including respiratory syndrome in cats, gastrointestinal syndrome in dogs, and eye inflammation syndrome in both cats and dogs, showed a change in time and spatial patterns following the release of propyl mercaptan into the community. These changes were consistent with clinical signs observed in people during a previous propyl mercaptan release in California as well as the release in Fairburn. A systematic review of electronic medical records of dogs and cats exposed to release of propyl mercaptan showed no conclusive and consistent evidence of adverse health effects. Methods for the use of medical records of pets for detecting environmental hazards require further development and evaluation.
<urn:uuid:cf4395d6-c0e3-4e7e-b39a-af612993fab7>
CC-MAIN-2013-20
http://pubmedcentralcanada.ca/pmcc/solr/reg?pageSize=25&term=jtitle_s%3A(%22Ann+Bot%22)&sortby=score+desc&filterAuthor=author%3A(%22Zheng%2C+Cheng%22)
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936669
299
2.640625
3
People with migraines may feel like time passes a bit more slowly than it actually does, if a small study is correct. The difference in time perception seems subtle - it's seen in people's perception of milliseconds. But the findings help validate the common complaint of migraine sufferers that they feel a bit "off" at times, according to a neurologist not involved in the study. The new research, reported online in Headache, involved 27 adults with migraines and 27 age-matched controls. All of them took a test of time perception in which they estimated the amount of time a series of rectangles appeared on their computer screen. Sometimes the image appeared for 600 msec (six tenths of a second), sometimes for three seconds and other times five seconds. Migraine affects cognitive function In general, people with migraines overestimated the 600-msec time window. They thought it lasted twice as long - about 1.2 seconds, on average - while the non-migraine group gave an estimate of about 0.9 seconds. That's a small gap. But the findings support the idea that "migraine does indeed affect cognitive function," write Dr Kai Wang and colleagues at Anhui Medical Center in Hefei, China. Dr Jennifer Kriegler, a neurologist at the Cleveland Clinic Lerner College of Medicine, agreed. "A lot of people who have migraines report that when they are in a bad headache period, they just feel like they are in a fog," said Dr Kriegler, who was not involved in the current study. "They don't feel like they're processing information as clearly." An extreme and very rare version of this effect, dubbed Alice in Wonderland Syndrome, has been seen in migraine and epilepsy sufferers. It involves distorted time perception and a sense of disconnection from reality and even self. The current time perception study was small, Dr Kriegler said, but it was well done. And it suggests that the foggy feeling migraine sufferers report is not just due to the pain. "It may be because of differences in brain processing," Dr Kriegler said. Study too small In this study, migraine sufferers were different in their perception of the 600-msec time frame, but not the longer, three- to five-second windows. It's not clear what to make of that. Dr Kriegler said that since the study was so small more research is needed to see whether it really is only the millisecond arena where people with migraines differ. She also said it was significant that the migraine patients were not actively having headaches during the testing. So even in between migraines, Dr Kriegler said, there's a difference in brain functioning. "Is this something that's going to affect people's daily functioning? Probably not," Dr Kriegler said. Or at least not in the short term, she added. One unanswered question is what kind of treatment people with migraines in the study were taking. The researchers note that 21 patients "received medicine," and the majority took painkillers for headaches. But there is no indication they were on drugs that prevent migraines - which people with recurrent migraines often take. (Dr Wang, the senior researcher, did not respond to an email seeking comment.) So it's not clear whether preventive medications might have any effect on time perception, according to Dr Kriegler. Also unclear is whether any cognitive differences might have implications for migraine sufferers' long-term brain health. Right now, there is no evidence that people with migraines have, for example, a faster mental decline as they age or a higher risk of Alzheimer's, Dr Kriegler said. The current findings, she said, offer some "validation" to people who have felt their migraines put them "off their game." Doctors, she noted, may brush off such complaints. "But," she said, "that patient knows there's something wrong." (Reuters Health, August 2012) Migraines tied to coeliac disease Hormonal changes trigger migraines
<urn:uuid:0e139e99-4b19-4014-88cb-259fb479f857>
CC-MAIN-2013-20
http://www.health24.com/Medical/Headache-and-migraine/News/Time-perception-may-be-off-in-migraine-sufferers-20130210
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.972092
860
2.546875
3
Considerations for giving feedback on skill performance When giving athletes extrinsic feedback about their technical skills, you can either tell them what you saw (descriptive feedback) or tell them what you think they need to do based on what you saw (prescriptive feedback). Guidelines for helping athletes develop tactical skills It has been said that “experience is knowledge acquired too late.” As a coach you want to do all you can to speed up your athletes’ learning of tactical skills rather than wait for them to learn by experience. Sport Skill Instruction for Coaches is designed to help current and aspiring coaches teach the skills athletes need in order to perform at their best. Written from a real-world perspective primarily for high school coaches, this practical, user-friendly text addresses the who, what, and how questions facing every coach: Who are the athletes I’m coaching? What are the skills I need to teach? How do I teach the skills effectively? Coaches will address these questions by thoroughly examining such concepts as individual differences exhibited by athletes; technical, tactical, and mental skills athletes need to learn; content and structure of skill practice; the art of providing feedback; and the preparation of athletes for competition. This exploration prepares coaches to work with athletes competently and confidently. The easy-to-follow format of the text includes learning objectives that introduce each chapter, sidebars illustrating sport-specific applications of key concepts and principles, chapter summaries organized by content and sequence, key terms, chapter review questions, activities that challenge readers to apply concepts to real-world situations, and a comprehensive glossary. ASEP Silver Level Series Preface Part I Foundations of Skill Instruction Chapter 1 Basics of Good Teaching Differences Between Learning and Performing Three Basic Ingredients of Skill Instruction Process-Focused Approach to Providing Sport Skill Instruction Chapter 2 It All Starts With the Athlete Difficulties in Predicting Future Performance Success Part II Skills Your Athletes Need Chapter 3 Technical Skills What Are Technical Skills? Classifications of Technical Skills Chapter 4 Tactical Skills Understanding Tactical Skills Identifying Important Tactical Skills Helping Your Athletes Develop Their Tactical Skills Creating a Blueprint of Tactical Options Chapter 5 Mental Skills Emotional Arousal in Athletic Performance Attention During Sport Competition Connection Between Arousal and Attention Memory in Performance Preparation Using Mental Skills to Maximize Performance Combining Mental and Physical Rehearsal Part III Designing Practice Sessions Chapter 6 Skill Analysis: Deciding What to Teach Identifying the Skills Your Athletes Need to Learn Analyzing Technical Skills Identifying Target Behaviors Chapter 7 Deciding on the Content and Structure of Practice Games Approach to Skill Practice Establishing Two-Way Communication Instructions, Demonstrations, and Guidance Modifications of Technical Skill Rehearsal Developing Athletes’ Anticipation Games Approach to Practicing for Competition Chapter 8 Providing Feedback Intrinsic and Extrinsic Feedback Verbal and Visual Feedback Outcome and Performance Feedback Program and Parameter Feedback Descriptive and Prescriptive Feedback Practical Considerations for Giving Feedback Chapter 9 Combining the Practice of Technical, Tactical, and Mental Skills Planning Effective Practices Creating Practice Activities Evaluating the Effectiveness of Practice Activities A Final Comment Appendix A: Answers to Review Questions Appendix B: Answers to Practical Activities About the Author High school coaches; also for college undergraduates pursuing professions as coaches, physical education teachers, and sport fitness practitioners. Craig A. Wrisberg, PhD, is a professor of sport psychology in the department of exercise, sport, and leisure studies at the University of Tennessee at Knoxville, where he has taught since 1977. During the past 30 years he has published numerous research articles on the topics of anticipation and timing in performance, knowledge of results and motor learning, and the role of cognitive strategies in sport performance. He is also the coauthor (with Richard Schmidt) of the popular text Motor Learning and Performance, published by Human Kinetics. In 1982 he received the Brady Award for Excellence in Teaching and in 1994 the Chancellor's Award for Research and Creative Achievement. A former president of the Association for Applied Sport Psychology (AASP) and the North American Society for the Psychology of Sport and Physical Activity, Dr. Wrisberg is a fellow of both AASP and the American Academy of Kinesiology and Physical Education. In addition to teaching and conducting research, Dr. Wrisberg provides mental training services for student-athletes in the men's and women's athletics departments at Tennessee. In his work with athletes, he applies many of the important concepts and principles covered in Sport Skill Instruction for Coaches. Dr. Wrisberg enjoys several outdoor activities, including tennis, canoeing, and hiking in the Great Smoky Mountains.
<urn:uuid:0a39fa8a-2c1c-4f85-b917-34b20c144281>
CC-MAIN-2013-20
http://www.humankinetics.com/products/all-products/sport-skill-instruction-for-coaches?beenCurRedir=1&ActionType=2_SetCurrency&CurrencyCode=1
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.900081
1,027
2.609375
3
Name: Mara N. I have a spider in my house that is in a cocoon. Can you please tell what it is doing? Is it laying eggs that in turn will create more spiders? Please help. Thank you. Spiders do not pupate in cocoons like insects, as far as I know the only stage of a spider life cycle that includes a cocoon-like structure is the egg case, which the adult forms around the eggs. Yes, they will hatch into more spiders! The spider in question may be the sac spider. "These spiders hide in silken tubes or sacs that resemble cocoons during the day and hunt at night. They will bite humans even when unprovoked. These bites are rather painful, but generally not life-threatening." Thaks Pamela! Click here to return to the Zoology Archives Update: June 2012
<urn:uuid:a4436544-8630-40be-aae3-d21af48b5a8c>
CC-MAIN-2013-20
http://www.newton.dep.anl.gov/askasci/zoo00/zoo00485.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.921236
188
3.109375
3
Alternative treatments for Asthma are many…from herbs to homeopathy, from breathing exercises to meditation, etc….But most of the ‘accepted’ Western medical doctors are hesitant to accept the possibility that these methods can help in conjunction with their prescriptions, let alone do the job alone! There is insufficient research done on any of these possible treatments for most doctors to endorse their use; and as long as there is insufficient research being funded and conducted it will remain the same! Also the research needs more than funding and conducting …it also needs to be accomplished by people who are not prejudiced before hand by Western medicine to keep the results pure! Dietary Changes: There have been limited numbers of studies that show that diet influences Asthma significantly. Therefore fixing specific issues could contribute just as significantly to its treatment! - Increase Water consumption: Drinking water may seem a little too simple, but…..water will help thin out the mucous secretions that increase during an asthma attack, making it easier to clear the lungs and bronchial tubes. - Milk elimination: Milk has been shown to be one of the leading dietary causes of allergies, which are one of the leading triggers of an asthma attack! When I was growing up it was considered general knowledge that consumption of milk (or other milk products) could increase mucous production! - Coffee may help in an attack: Because coffee (black coffee no sugar or milk added) is a stimulant it can often encourage the lungs to function more normally. It will help to break up the mucous and alleviate the tightness in the chest and throat. But this is not necessarily recommended for daily use! - Ensure daily consumption of magnesium in the diet: Magnesium has bronchio-dilating effect in the body. Studies have shown that magnesium levels are often low in people experiencing Asthma. Food sources of Magnesium are: almonds, spinach, cashews, soybeans, peanuts (including peanut butter), potato, blackeyed peas, pinto beans, brown rice, lentils, kidney beans, bananas, etc. - Supplement with Omega-3: Omega-3’s are believed to decrease inflammation. You can increase the Omega-3’s in your diet by eating it in Salmon, scallops, cauliflower, broccoli, spinach, walnuts, almonds, kale, tofu, shrimp, tuna, mussels, sardines, etc.’ - Increase antioxidants in the diet: Antioxidants tend to stimulate free radical activity; they in turn help to decrease inflammation! These can be found in the Vitamins A, C, and E among other sources! Some dietary sources include…blueberries, blackberries, strawberries, cranberries, Kidney beans, pinto beans, avocados, pineapple, cherries, kiwi, plums, artichokes, spinach, red cabbage, sweet potatoes, broccoli, green tea, walnuts, pecans, hazelnuts, and oat products (like oatmeal) Increase Exercise: Often exercise can cause an asthma attack, but in good asthma management the goal is to return the patient back to normal or near normal activity, and that includes exercise. Just try to tell a child they cannot run, and jump, and ride their bike! I dare you! Regular exercise helps the Asthmatic in multiple ways including, but not limited to the following benefits: stress reduction, increased energy, and sleep improvements. Exercise also lowers your risk for obesity and heart disease, common problems with adult asthmatics!! - Recommended types of exercise include: volleyball, gymnastics, baseball and wrestling, and any other exercise that allows occasional breaks in the activity! - Use of precautions: Use common sense when exercising….never exercise when you feel ‘off,’ and use some of the following suggestions to prepare for each session: Stretching exercises to help warm up, avoid allergen areas (…use an exercise pad when working out on carpet, avoid areas where there is high pollen or car exhaust). Breath through your nose, rather than your mouth. Move: This may seem extreme, but if you live in a very moist, humid location moving to a dry, hot atmosphere may just seem to cure your asthma! Try vacationing in different locals….desert southwest, mountainous areas, by the sea, in a city, out in the country. Where are you the most comfortable? Consider if this dramatic a change may just be worth the expense! In lieu of an actual move try some of these ideas: - Find a good, dry exercise area, such as a local gym - Avoid high allergen areas - Make sure you sleep with your chest and head elevated! Acupuncture: Acupuncture has been used to treat Asthma because it balances the positive and negative flows of energy in the body, and that is precisely how Acupuncture is believed to work. Asthma is believed to be an imbalance of these opposing energies, with acupuncture restoring the stability to normal levels. It is believed by many to restore lung function and to reduce the severity of an asthma attack should it occur. Research however has not been able to prove the effectiveness of acupuncture in asthma, but rather the results are inconclusive. The most recent Cochrane Collaborative Review feels that the information thus far does not allow recommendation of this treatment. Aromatherapy: Essential oil blends can reduce the severity and occurrence of asthma attacks, and help relieve them once they commence! The best time for aromatherapy treatments is between attacks since you do not want the scent to make the problem a worse issue. E.O.’s to use: To reduce Bronchial Spasms: chamomile, lavender, rose, geranium and marjoram Decongestant and Antihistamine: peppermint and ginger To Encourage deep breathing and to allow lung expansion: Frankincense and marjoram These Essential oils can be incorporated into chest rubs, steams, humidifiers, the bath or in massage oils. Biofeedback: Biofeedback is the use of electronic devices to help the patient control or influence normally automatic body functions such as heartbeat and breathing. In controlled studies it has shown to have benefit for most patients, where they maintain control of their asthma while reducing their inhaled steroids. Herbal Medicine: Many herbs will decrease the inflammation and relieve bronchio-spasms. Homeopathy: Homeopathy uses very small amounts of natural substance to stimulate the body’s immune function and natural defenses. Each person, and each asthma attack is different so the choices here are prescribed as a tailored approach to each patients needs. Massage: Massage is believed effective due to the relaxation in the whole being that can be achieved. It can be done using aromatherapy massage oils to enhance its effectiveness. Massage can retrain the muscles to a state of relaxation and reduced stress. Although not well studied, massage can be beneficial by relieving a known trigger for asthma attacks, stress. Additional research is needed in this area. Pulmonary Therapy (Breathing Exercises): It has been demonstrated that people with asthma that use Breathing Exercises can reduce their use of medications by up to 86%. At Sydney’s Woolcock Institute of Medical Research and Melbourne’s Alfred Hospital conducted this research and released their findings in the August, 2006 edition of Thorax. Relaxation Techniques – Meditation: It has been found that people who practice meditation and the resulting relaxation have a marked decrease in asthma symptoms and attacks. Meditation can eve help to bring an active attack under control. Often the fear of the attack will trigger panic which will make everything worse. The meditation can counteract this phenomenon. Yoga: Through gentle stretching and exercise Yoga helps to relax the body and the mind, allowing improvement in circulation and respiration. It will relieve tension and assist the body in its own healing. Folklore: These are for entertainment only! Own a Chihuahua: in Mexican Folk Medicine it is believe to cure asthma! To cure asthma: Drill a hole into a black-oak tree at the height of the patient’s head. Place a lock of hair in the hole, and drive a wooden peg into the hole to hold the hair in place. Now cut the peg and hair flush with the tree bark. When the bark has grown over the peg and hair, hiding it from sight and the hair has re-grown, the asthma will be cured! Asthma could be cured by tying a live frog to the patient’s throat. When the frog died the disease was “completely absorbed” by the frog. Kill a steer, cut open its gut, and place the patient’s feet into the abdomen. When the entrails have cooled the asthma will be cured. Boil the comb of a hornet’s nest and sweeten it with honey, take to cure asthma! Chihuahua for Asthma
<urn:uuid:cd9a1e58-4d26-4068-845e-2de15e56f4c4>
CC-MAIN-2013-20
http://herberowe.wordpress.com/category/article/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934272
1,859
2.5625
3
Matching Utility Loads with Solar and Wind Power in North Carolina: Dealing with Intermittent Electricity Sources by John Blackburn, Ph.D. Professor of Economics Emeritus, Duke University with a Foreword by Arjun Makhijani GROUNDBREAKING STUDY FINDS SOLAR, WIND, OTHER RENEWABLE POWER SOURCES COULD MEET NEARLY ALL NORTH CAROLINA ELECTRICITY NEEDS Takoma Park, Maryland, and Durham, North Carolina, March 4, 2010: Solar and wind power can supply the vast majority of North Carolina’s electricity needs, according to a major report released today. Combined with generation from hydroelectric and other renewable sources, such as landfill gas, only six percent of electricity would have to be purchased from outside the system or produced at conventional plants. “Even though the wind does not blow nor the sun shine all the time, careful management, readily available storage and other renewable sources, can produce nearly all the electricity North Carolinians consume,” explained Dr. John Blackburn, the study’s author. Dr. Blackburn is Professor Emeritus of Economics and former Chancellor at Duke University. “Critics of renewable power point out that solar and wind sources are intermittent,” Dr. Blackburn continued. “The truth is that solar and wind are complementary in North Carolina. Wind speeds are usually higher at night than in the daytime. They also blow faster in winter than summer. Solar generation, on the other hand, takes place in the daytime. Sunlight is only half as strong in winter as in summertime. Drawing wind power from different areas — the coast, mountains, the sounds or the ocean — reduces variations in generation. Using wind and solar in tandem is even more reliable. Together, they can generate three-fourths of the state’s electricity. When hydroelectric and other renewable sources are added, the gap to be filled is surprisingly small. Only six percent of North Carolina’s electricity would have to come from conventional power plants or from other systems.” Jim Warren, Executive Director of the North Carolina Waste Awareness and Reduction Network (NC WARN), added, “Utilities and their allies are pressing policy-makers to allow construction of expensive and problem-ridden nuclear reactors – with ratepayers and taxpayers absorbing enormous financial risks. Prof. Blackburn’s groundbreaking study demonstrates that such risks are not necessary. Solar, wind and other renewable sources can meet nearly all of North Carolina’s energy needs.” Dr. Arjun Makhijani, President of the Institute for Energy and Environmental Research (IEER), explained why his center published Dr. Blackburn’s report. “This is a landmark case study of how solar and wind generation can be combined to provide round-the-clock electric power throughout the year. North Carolina utilities and regulators and those in other states should take this template, refine it, and make a renewable electricity future a reality.” Dr. Makhijani is the author of Carbon-Free and Nuclear-Free: A Roadmap for U.S. Energy Policy. You can download Dr. Blackburn’s report, Matching Utility Loads with Solar and Wind Power in North Carolina: Dealing with Intermittent Electricity Sources, by entering your contact information below, if you haven’t already provided it elsewhere on this site. You will receive an email with the download link. Download is free, donations are welcome.
<urn:uuid:ab97662c-0771-4f95-b27a-01d7c83819a9>
CC-MAIN-2013-20
http://ieer.org/resource/climate-change/matching-utility-loads-solar-wind/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.915221
721
2.609375
3
International Perspectives on Food and Fuel Agricultural Marketing Resource Center Co-Director – Ag Marketing Resource Center Iowa State University (Second in a series) Last month we provided perspectives on the food and fuel debate from the viewpoint of U.S. consumers. This article provides world perspectives on the debate. When discussing food issues, we must preface the discussion with the understanding that food is the most personal of all consumer purchases. We can do without or find ways to circumvent the need for most consumer products. But the specter of not having enough food is a basic need and elicits an emotional response. Although this specter is foreign to U.S. consumers, it is very real for millions of people around the world. Dwindling Grain Reserves Although grains used for biofuels have impacted world grain usage in recent years, the conditions for limited supplies and higher prices were already in motion. Over the last ten years, world grain reserves have been dwindling. As shown in Figure 1, world stocks-to-use ratios for wheat and coarse grains have fallen by half. In 1998/99 we had reserves equal to 30% of a year’s usage (about 3.5 months). While not large, these reserves could cushion the impact of a sudden disruption in grain supplies (e.g.widespread drought). Also, these reserves were large enough to substantially dampen the price of grains. For example, the Iowa average corn price in 1998/99 was $1.87 per bushel. These low prices provided ample incentive for farmers to search for new uses for grains like biofuels to strengthen grain prices. They also discouraged expanding grain production. Source: Foreign Ag Service, USDA However, grain prices were already on the way up, regardless of biofuels. We had entered a period when grain usage outstripped production. Non-biofuel grain usage was growing faster than production. The deficit was covered by drawing down reserves. Today’s world reserves are only 15 percent of a year’s usage, of which a significant portion is needed to smoothly transition from one year to the next. So we cannot continue to cover the deficit by drawing down reserves. The result is high prices that will ration existing supplies and stimulate future production. While dwindling grain reserves and higher prices have an adverse impact on consumers, it has a positive impact on producers. Around the world, 2.5 billion people depend on agriculture for their livelihood (FAO). This is close to 40% of the world’s population. So higher prices have both negative and positive implications. Adverse Crop Events We experienced adverse crop events at precisely the time when the world’s grain situation was most vulnerable to supply shocks. Periods of low production can easily upset the delicate balance between surplus and shortage, especially when reserves are low. Adverse crop events in 2007 were a driving factor in grain price increases. It was the second consecutive year of a drop in average yields around the world. An overview of 2007 crop problems is shown below. - Australia – multi-year drought - U.S. winter wheat – late freeze - Northern Europe – dry spring & wet harvest - Southeast Europe – drought - Ukraine and Russia – drought - Canada – summer hot and dry - Northwest Africa – drought - Turkey – drought - Argentina – late freeze & drought followed by flooding in parts of the corn-soybean belt. Most of the world’s grains are consumed in the country in which they are produced. A relatively small amount is traded on the international market, as shown in Table 1. However, if trade is disrupted, this small amount can have a significant impact on food distribution and prices, especially during periods of low reserves. Table 1. Percent of World Consumption from International Trade Source: What’s Driving Food Prices, Farm Foundation, 2008 Although unknown in this country in recent years, it is not uncommon for countries to impose restrictions such as “export taxes” or “export embargoes” on agricultural commodities sold to other countries. These restrictions become increasingly common when world shortages and high prices appear. These policies are meant to discourage exports and keep food within the country for domestic consumers. Essentially, the restriction means that “our citizens eat first”, if there is anything left over, your citizens can have it. A prominent example is Argentina, a large producer of agricultural commodities such as soybeans. Argentina already had a 35 percent export tax and in March its president, Christina Fernandez de Kirchner, increased the tax. The decision led to riots and demonstrations by Argentina’s farmers. In July the measure was narrowly rescinded by Argentina’s Senate. These trade distortions can take many forms in addition to export taxes. Below is a listing of policies that have recently been implemented by both exporting and importing countries due to high food prices and food shortages. - Export Bans - Ukraine, Serbia, India, Egypt, Cambodia, Vietnam, Indonesia, Kazakhstan - Export Restriction (quantitative) – Argentina, Ukraine, India, Vietnam - Export Taxes – China, Argentina, Russian, Kazakhstan, Malaysia - Eliminate Export Subsidies – China - Reduced Import Tariffs – India, Indonesia, Serbia, Thailand, EU, Korean, Mongolia - Subsidize Consumers – Morocco, Venezuela The long-term implications of export restrictions are negative to the world’s consumers and world agriculture. It distorts trade in agriculture commodities at the precise time when there should be no distortion. It greatly increases the vulnerability of poor countries that are net food importers. It penalizes long-term agricultural development and growth in exporting countries. Commodity Price Impact on Food Budgets Although notable exceptions exist, most hunger situations are not caused by an actual shortage of food. Rather hunger is caused by the financial inability to buy food. So how do high food prices this impact the food consumers in low-income, food deficit countries? As we discussed last month, the average U.S. consumer spends only 10 percent of his/her disposable income on food (although food expenditures for low-income consumers are substantially higher). And the food the consumer buys is highly processed, packaged and often ready to eat. So, of the money spent on food, only 20 percent goes to farmers for producing basic commodities like wheat, milk, meat, etc. The situation is much different for consumers living in low-income, food-deficit countries. An illustrative example is shown in Table 2. Half of a consumer’s disposable income may be spent on food. And this is primarily for staples (basic commodities). People in developing countries tend to buy basic staples and prepare them rather than buying processed/prepared food. In our example, 70 percent of their food expenditures are for staples compared to 20 percent in high income countries. If the prices of staples increase by 50 percent, the amount of disposable income spent by consumers in high income countries will only increase by one percentage point, going from 10 percent to 11 percent, or a 10 percent increase ((11 – 10)/10). However, the amount spent by consumers in low income countries increases by 18 percentage points, going from 50 percent to 67.5 percent, or a 35% increase ((67.5 – 50)/50). So, people in low income countries, who already spend a disproportionately large amount on food, are the hardest hit by increased commodity prices. Table 2. Impact of Higher Commodity Prices on Food Budgets * |High-income Countries||Low-Income Food-Deficit Countries| |Food Cost as % of Income||10%||50%| |Staples as % of total food spending||20%||70%| |Expenditures on staples||$800||$280| |50% price increase in staples| |Increase in cost of staples||$400||$140| |New cost of staples||$1,200||$420| |New total food costs||$4,400||$540| |Food Cost as % of Income||11%||67.5%| |Percent increase in food cost||10%||35%| * These are illustrative food budgets that characterize the situations for consumers in high- and low-income countries. Source: Based on information from Global Agricultural Supply and Demand: Factors Contributing to the Recent Increase in Food Commodity Prices/WFS-0801, July, 2008. Economic Research Service, USDA. Future Demand Growth The demand for grains will continue to grown in the future. One of the driving factors is expanding world population. From 2000 to 2005, world population grew by more than the entire population of the United States. As shown in Table 3, 85 percent of the growth occurred in Asia and Africa. The population of Europe actually declined. So, most of the growth is occurring in developing countries. Table 3. World Population Growth (2000 to 2005) In combination with population growth, the expanding world “middle class” is demanding high value food products that put additional demands on world agricultural production. A future article titled “China on a Western Diet” will address this topic. The Impact of Biofuels A reason commonly given for the current high world food prices is the diversion of cropland acreage away from food production to energy production. In the U.S. ethanol is made from corn. As shown in Table 4, U.S. corn acreage increased substantially in 2007 from its 2003-06 average. The majority of those acres were taken from soybean production. In 2008, corn acreage retreated and soybean acreage rebounded to its 2003-06 level. Corn and soybeans are used primarily for livestock feed. For example, only about 10 percent of U.S. corn production is processed directly into food products. Most is fed to livestock that are mostly consumed by high income consumers. However, U.S. feed corn prices have driven up food corn prices in Africa and Mexico, where corn goes directly into the human food chain. Moreover, soybean oil is an important staple in the Far East. In China, almost everything is cooked in soybean oil or other vegetable oil. Table 4. U.S. Planted Acreage of Major Crops 1/ Represents 59% of world corn acreage 2/ Represents less than 10% of world wheat acreage 3/ Represents less than 1% of world rice acreage The basic staples of many poor countries are grains that can be consumed directly like wheat and rice. Although world wheat price increased substantially in 2007 and early 2008, it was not due to the encroachment of corn for U.S. biofuels production. As shown in Table 1, U.S. wheat acreage actually increased in 2007 from the 2003-06 average, and increased again in 2008. However, in the absence of corn for biofuels production, wheat acres may have expanded more or the actual expansion may have been achieved at lower wheat prices. Moreover, the need for expanded corn acreage in the future will place continued pressure on wheat acreage and the wheat price need to main existing acreage. The small U.S. rice acreage is on land that is not designed for the production of traditional grains. The threat of substitution is limited. On a worldwide basis, about 35 million acres of land were used for biofuels in 2004. This is about one percent of total arable land. The Energy Information Administration predicts this will increase to 2 to 3.5 percent by 2030. While the biofuels industry (U.S. and worldwide) is a contributing factor, it is not the only cause of rising world commodity food prices. The precise impact is difficult to assess. In next month’s article we will attempt to provide some comparison scenarios of food supplies and prices if the biofuels industry did not exist. Most of the popular press reports about the food and energy situation focus on actions we can take to solve the situation immediately. However, if we are to be successful, we must also take a holistic and long-term view of the situation and implement policies and programs that will impact the long-term causes of the problem. Below we address the situation from a short-, intermediate- and long-term perspective. There is no good short-term solution to the situation we face today. Many of the programs and policies we implement today will not produce results until the intermediate and long term time periods. However, if you are one of the millions without enough to eat, solutions that provide help next year or in the next decade provide little comfort. Programs that provide immediate food aid and other assistance are required to meet the current needs. However, these programs need to provide assistance without competing with the local agricultural sector. If the problem is the high price of food rather than an actual shortage of food, buying food locally at world prices and providing it to residents at a discounted price provides food for local residents while supporting local agriculture. Conversely, bringing in food staples from outside competes directly with local farmers and impedes the country’s ability to be self-sustaining in food production in future years. Higher agricultural prices stimulate farmers around the world to increase production. This is a powerful force that is often neglected in discussions about the current food situation. The payoff from using better seed varieties, more fertilizer and other production inputs is magnified when grain prices are high. However, this will not impact today’s situation. At least one production cycle is required to increase production. And this assumes that farmers have access to production inputs and the money to purchase them. So, programs that provide access and funding are important for increased production in the coming years. Moreover, policies that limit commodity price increases within countries must be avoided. Developing countries are under pressure to limit price increases in an effort to ease the domestic short-term situation. However, high prices are necessary to stimulate increased domestic production. So, short term programs and policies need to be designed that provide immediate food assistance without depressing or limiting prices. Increased funding for agricultural research and education is the long-term solution to the situation. New seed varieties, new and improved production inputs, better cultural practices, increased knowledge of how to apply these practices, and an array of other research efforts have the ability to significantly increase world agricultural production. Although research programs are of little value in the short-term, their cumulative impact over five, ten or more years can be enormous. However, the challenge will be great. The combined forces of: - continued world population growth, - increased demand for higher quality food by the world’s expanding “middle class”, and - the need to provide both food and fuel, require careful monitoring by the international community and substantial worldwide investment in agricultural research and application. And this agricultural expansion needs to be done in a sustainable manner while adapting to the impacts of climate change on the world’s agricultural production capacity in the near term and mitigating climate change in the long term. References and Further Reading Issue Report: What's Driving Food Prices? - Farm Foundation, July 2008. Global Agricultural Supply and Demand: Factors Contributing to the Recent Increase in Food Commodity Prices/WFS-0801, July, 2008. Economic Research Service, USDA. Rising Food Prices and Global Food Needs: The U.S.Response, Congressional Research Service, May 2008. Rising Food Prices: Policy Options and World Bank response , World Bank, 2008. Implications of Higher Global Food Prices for Poverty in Low-income Countries, World Bank, April, 2008. Impact of High Food and Fuel Prices on Developing Countries—Frequently Asked Questions. International Monetary Fund, April, 2008. The High-Level Conference on World Food Security: the Challenges of Climate Change and Bioenergy, Food and Agriculture Organization, June 2008. Biofuels and Sustainable Development, Executive Session on Grand Challenges of the Sustainability Transition, May 2008.
<urn:uuid:3dd7701d-d6d0-4918-9b86-0f09c43562a4>
CC-MAIN-2013-20
http://www.agmrc.org/renewable_energy/agmrc_renewable_energy_newsletter.cfm/international_perspectives_on_food_and_fuel?show=article&articleID=13&issueID=5
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926424
3,325
2.734375
3
Additional Investments in Family Planning Would Save Developing Countries More Than $11 Billion a Year — Access to family planning is an essential human right that unlocks unprecedented rewards for economic development, says new UNFPA report. – UNFPA REPORT TAKE A BOW Secretary Clinton, who began this push in 1995 as first lady when she declared human rights are women’s rights, which begins with planning your family. The UNFPA report is non-binding and won’t impact law internationally, but the statement is a start in a new direction, however long it takes to manifest. The U.N. Population Report declares in it’s title “BY CHOICE, NOT BY CHANCE,” a call for self-determination, empowerment and reproductive freedom for women across the globe. The caterwauling from the right will begin in 3,2.. “Family planning has a positive multiplier effect on development,” Dr. Babatunde Osotimehin, executive director of the fund, said in a written statement. “Not only does the ability for a couple to choose when and how many children to have help lift nations out of poverty, but it is also one of the most effective means of empowering women. Women who use contraception are generally healthier, better educated, more empowered in their households and communities and more economically productive. Women’s increased labor-force participation boosts nations’ economies.” The report effectively declares that legal, cultural and financial barriers to accessing contraception and other family planning measures are an infringement of women’s rights. This declaration, the statement itself, is the most important move in women’s self-determination globally since Hillary Rodham Clinton’s speech in China so many years ago. It’s why her statement of human rights are women’s rights begins the book I wrote about her 20-year rise and the difficulties she had making her mark. It is not by accident that this report came during Secretary Clinton’s tenure at the State Department, which was empowered by President Obama, who, even with his faults on putting politics above science in the importance of Plan B to young women especially, as well as the damage of codifying the Hyde Amendment into law, remains on the whole a great champion of women’s self-determination. It’s the Hillary Effect.
<urn:uuid:41404e58-1b1e-4bc0-8844-ee55fc8183c5>
CC-MAIN-2013-20
http://www.taylormarsh.com/blog/2012/11/u-n-report-declares-contraception-an-international-human-rights-issue/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957175
492
2.53125
3
16th-century Peruvian convent and its historic art eyed for restoration LIMA, Peru (CNS) — Half-hidden behind palm trees at the end of a once elegant avenue in a now rundown neighborhood, the Convento de los Descalzos — the Convent of the Barefoot Friars — has witnessed half a millennium of Peruvian history. Age, economic woes and benign neglect have taken their toll, and the convent has fallen on hard times. But Alberta Alvarez, the director of a foundation established less than a year ago to revitalize the convent, is trying to change that. With about 500 artworks hanging throughout its seven cloisters and tucked away in storerooms, the building that once housed Franciscan missionaries offers “a journey through three centuries of religious art, all in one place,” Alvarez said. During colonial times, Spanish clergy used paintings and statues of religious figures and scenes for evangelization. The convent’s 16th-, 17th- and 18th-century art represent styles known as the Cuzco, Lima and Quito schools, which reflect the melding of European and indigenous artistic styles. Built at the foot of St. Christopher’s Hill, a Lima landmark that provides a panoramic view of the city, the convent itself is a work of art. Founded in the late 1500s by Franciscan friars who sought to live more simply — and whose practice of going without shoes or wearing only sandals gave the Convent of Saint Mary of the Angels its popular name — the place became a museum in the 1980s. Graceful arched porticos around the convent’s seven cloisters provide a refuge from the loud rush-hour traffic outside. But some pillars cracked in recent earthquakes, electrical wiring is exposed to the elements, and saints, angels and early Franciscan missionaries stare down from the walls through layers of mold and grime. “The museum has many needs,” said Alvarez, a native of Spain who first traveled to Lima to volunteer in Franciscan mission work and now heads the fledgling museum foundation. Protecting and restoring the artwork is a priority, but restoration experts are in short supply in Peru, she said. The few who do graduate from National University of San Marcos or Lima’s Fine Arts School are snapped up by art museums or better-funded organizations. As a first step, Alvarez organized a seminar on colonial art restoration in May, inviting Pilar Sedano, director general of cultural heritage for the city of Madrid and former head of restoration and conservation at the Museo Nacional del Prado, Spain’s national art museum. The seminar drew more than 100 participants, including many students from towns in the interior of the country. “Everyone asked if they could sign up for the next one,” said Alvarez, who hopes future events can include more hands-on training. She estimates that it would cost about $200,000 to establish a workshop in the convent where half a dozen professional art restorers could work on a rotating basis with six art school graduates on fellowships. That amount is currently beyond the museum’s reach, but the seminar gave Alvarez and her colleagues a chance to test restoration techniques on a painting from the convent’s collection. The surface was so dark that details were unclear, and Alvarez thought it was probably a painting of St. Francis of Assisi. But an X-ray image — made with a machine borrowed from the parish clinic next door — revealed the head of a violin beneath the saint’s left arm, a symbol associated with the Franciscan missionary St. Francis Solano. As they began to clean and restore the painting, the violin scroll and other details came into view. “It’s exciting when you start to work and people appear on the canvas,” Alvarez said. Although it is now in the middle of Peru’s sprawling capital, the convent originally was outside the city, surrounded by fields and vineyards. Behind the refectory, where the walls are lined with images of saints and early Franciscan missionaries, a wine cellar still holds huge wooden barrels for aging wine. Up a flight of stairs, apothecary jars and equipment from the 19th century are on display in a room in an adjoining courtyard, and Alvarez said she hopes to plant medicinal plants the friars used in the garden. Another room holds old printing equipment. Alvarez would like to create displays recounting the missionary history of the Franciscans, who were among the first to venture into the Peruvian Amazon, using the museum’s old maps. They also need preservation and restoration work, however, as does a set of choral books. “Every day, you see more opportunities,” Alvarez said. “We have the documentation, but someone has to sit down and review it, pull out the information, and think about how the museum can be organized.” For now, her biggest challenge is finding a way to launch an ambitious project on a shoestring budget. “We need everything,” she said. “We’re just beginning.”
<urn:uuid:845f09b4-8e5e-47f7-98c7-07fa3c65ce4f>
CC-MAIN-2013-20
http://catholicphilly.com/2012/07/us-world-news/world-catholic-news/16th-century-peruvian-convent-and-its-historic-art-eyed-for-restoration/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954842
1,084
2.84375
3
Culture of poverty |Part of a series on| |Economic, applied and development |Social and cultural anthropology| ||This article includes a list of references, related reading or external links, but its sources remain unclear because it lacks inline citations. (July 2010)| The culture of poverty is a social theory that expands on the cycle of poverty. It attracted academic and policy attention in the 1960s, but has largely been discredited by academics around the turn of the century (Goode and Eames, 1996; Bourgois, 2001; Small M.L., Harding D.J., Lamont M., 2010). Although the idea is experiencing a "comeback" current scholars recognize racism and isolation, rather than the "values" of the poor as the reason for potentially mal-adaptive behaviors of the poor. It offers one way to explain why poverty exists despite anti-poverty programs; critics of the culture of poverty argument insist that structural factors rather than individual characteristics better explain the persistence of poverty (Goode and Eames, 1996; Bourgois, 2001; Small M.L., Harding D.J., Lamont M., 2010). Early proponents of this theory argued that the poor are not simply lacking resources, but also acquire a poverty-perpetuating value system. According to Oscar Lewis, "The subculture [of the poor] develops mechanisms that tend to perpetuate it, especially because of what happens to the world view, aspirations, and character of the children who grow up in it.” (Moynihan 1969, p. 199). Later scholars have noticed that the poor do not have different values. The term "subculture of poverty" (later shortened to "culture of poverty") made its first appearance in the ethnography Five Families: Mexican Case Studies in the Culture of Poverty (1959) by anthropologist Oscar Lewis. Lewis struggled to render "the poor" as legitimate subjects whose lives were transformed by poverty. He argued that although the burdens of poverty were systemic and therefore imposed upon these members of society, they led to the formation of an autonomous subculture as children were socialized into behaviors and attitudes that perpetuated their inability to escape the underclass. Lewis gave some seventy characteristics (1996 , 1998) that indicated the presence of the culture of poverty, which he argued was not shared among all of the lower classes. The people in the culture of poverty have a strong feeling of marginality, of helplessness, of dependency, of not belonging. They are like aliens in their own country, convinced that the existing institutions do not serve their interests and needs. Along with this feeling of powerlessness is a widespread feeling of inferiority, of personal unworthiness. This is true of the slum dwellers of Mexico City, who do not constitute a distinct ethnic or racial group and do not suffer from racial discrimination. In the United States the culture of poverty that exists in the Negroes has the additional disadvantage of racial discrimination. People with a culture of poverty have very little sense of history. They are a marginal people who know only their own troubles, their own local conditions, their own neighborhood, their own way of life. Usually, they have neither the knowledge, the vision nor the ideology to see the similarities between their problems and those of others like themselves elsewhere in the world. In other words, they are not class conscious, although they are very sensitive indeed to status distinctions. When the poor become class conscious or members of trade union organizations, or when they adopt an internationalist outlook on the world they are, in my view, no longer part of the culture of poverty although they may still be desperately poor. (Lewis 1998) Although Lewis was concerned with poverty in the developing world, the culture of poverty concept proved attractive to U.S. public policy makers and politicians. It strongly informed documents such as the Moynihan Report (1965) and the War on Poverty more generally. The culture of poverty also emerges as a key concept in Michael Harrington's discussion of American poverty in The Other America (1962). For Harrington, the culture of poverty is a structural concept defined by social institutions of exclusion which create and perpetuate the cycle of poverty in America. Since the 1960s critics of culture of poverty explanations for the persistence of the underclasses have attempted to show that real world data do not fit Lewis' model (Goode and Eames, 1996). In 1974, anthropologist Carol Stack issued a critique of it, calling it "fatalistic" and noticing the way that believing in the idea of a culture of poverty does not describe the poor so much as it serves the interests of the rich. She writes, "The culture of poverty, as Hylan Lewis points out, has a fundamental political nature. The ideas matters most to political and scientific groups attempting to rationalize why some Americans have failed to make it in American society. It is, Lewis (1971) argues, 'an idea that people believe, want to believe, and perhaps need to believe.' They want to believe that raising the income of the poor would not change their life styles or values, but merely funnel greater sums of money into bottomless, self-destructing pits." Thus, she demonstrates the way that political interests to keep the wages of the poor low create a climate in which it is politically convenient to buy into the idea of culture of poverty (Stack 1974). In sociology and anthropology, the concept created a backlash, pushing scholars to look to structures rather than "blaming-the-victim" (Bourgois, 2001). Since the late '90s, the culture of poverty has witnessed a resurgence in the social sciences, although most scholars now reject the notion of a monolithic and unchanging culture of poverty and attribute destructive attitudes and behavior not to inherent moral character but to sustained racism and isolation (Small M.L., Harding D.J., Lamont M., 2010). - Cohen, Patricia, ‘Culture of Poverty’ Makes a Comeback, http://www.nytimes.com/2010/10/18/us/18poverty.html - Stack, Carol. 1974. All Our Kin. Harper & Row. - Goode, Judith and Edwin Eames (1996). "An Anthropological Critique of the Culture of Poverty". In G. Gmelch and W. Zenner. Urban Life. Waveland Press. - Harrington, Michael (1962). The Other America: Poverty in the United States. Simon & Schuster. - Lewis, Oscar (1996 (1966)). "The Culture of Poverty". In G. Gmelch and W. Zenner, eds. Urban Life. Waveland Press. - Lewis, Oscar (1969). "Culture of Poverty". In Moynihan, Daniel P. On Understanding Poverty: Perspectives from the Social Sciences. New York: Basic Books. pp. 187–220. - Lewis, Oscar (January 1998). "The culture of poverty". Society 35 (2): 7. doi:10.1007/BF02838122. - Mayer, Susan E. (1997). What money can’t buy : family income and children’s life chances. Cambridge, Massachusetts: Harvard University Press. ISBN 0-674-58733-2. Retrieved 2009-11-09. - Duvoux, Nicolas, "The culture of poverty reconsidered", La vie des idées : http://www.laviedesidees.fr/The-Culture-of-Poverty.html - Patricia Cohen (2010-10-17). "‘Culture of Poverty’ Makes a Comeback". The New York Times. Retrieved 2010-10-20. - Bourgois, Phillipe (2001). "Culture of Poverty". International Encyclopedia of the Social & Behavioral Sciences. Waveland Press. - Small M.L., Harding D.J., Lamont M. (2010). "Reconsidering culture and poverty". Annals of the American Academy of Political and Social Science 629 (1): 6–27. doi:10.1177/0002716210362077.
<urn:uuid:492dd025-d63b-4117-a1af-0fefee0cce7d>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Culture_of_poverty
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.911487
1,674
3.4375
3
The four digit PIN is a common part of our everyday life. From door codes to ATMs to cell phones, some of us use a four digit PIN daily. Although it sounds a bit counter intuitive, using only three unique numbers in your PIN instead of four different numbers will your make your PIN more secure. In other words, a PIN like 3963 is more secure than one like 1872. Most PINs tend to be four digits and have options from ten characters (0-9). The brute force strength of this is 10,000 different combinations (104=10,000). This is probably sufficient to ward off a manual brute force attack. The fact that most ATM card skimmers include PIN loggers seems to bear this out. It’s pretty common knowledge, however, that straight brute force is often unnecessary. Residue and oils on your fingers will leave traces. Just a tilt to the light will often tell the attacker exactly what digits are used in your PIN and sometimes the PIN is used so often that light is not even necessary. When an attacker knows that there are only four digits and four places their brute force attack drops from 10,000 to 24 (From 104 to 4! or 4x3x2x1). This drops even further with the likelihood that a specific pattern was probably used for easier recall. Take a look at the following two pictures and remember to keep your cell phones clean. The PIN to unlock this cell phone is probably a pattern like 2546 or the opposite 6452. It is likely that the PIN above is an easy to remember date like 1968 or 1986 (photo from Schneier on Security) So what happens when you use only three numbers in a four digit PIN? The straight brute force strength remains the same at 10,000, but what about in the case of information leakage when there are smudges or worn keys? Only three smudges this time. So what’s the PIN? Since the attacker only knows three digits they are not sure which of the three is repeated or in which place it is repeated. While number patterns might help here (dates like 1990 & 2001 for instance) place patterns will always be an unhelpful triangle. In addition they can no longer calculate a straight factorial function of four (4!=24) to figure out how many different patterns are possible. The attacker has to calculate each string that could contain a single duplicate of the original three numbers. If all possible permutations of four known numbers in a four digit PIN is 24, then the number of permutations where one number is repeated reduces that by half (4x3x1x1) or twelve. The attacker is not sure which of the three digits is repeated so they will have to try those 12 permutations three time, one for each known number. This means they now have 36 permutations to go through; 50% stronger than a PIN using four numbers. The same does not work for a PIN using only two numbers. A quick calculation shows that even exponentially 24 is only 16 permutations. If you subtract 1111 and 2222 you are left with 14 permutations that have at least one of each character. This makes three the sweet spot for slightly more secure four place PIN. It stands up as well as a four character PIN to straight brute force and is more secure under information leakage such as smudges and wear and tear. It is a little bittersweet to discover that this ground has already been covered, but kudos to Presh Talwalkar. His article from Jan 2011 has much better math than mine. This post seems to be getting a little traction, so let me be clear that whether we’re talking about 12, 24, or 36 permutations the difference between them is pretty trivial for anyone with focus. Moving to a 6 or 8 digit PIN would be an improvement across the board.
<urn:uuid:0aec86e8-a434-4bb6-bc37-1cf7b89fe4ed>
CC-MAIN-2013-20
http://skeletonkeysecurity.com/post/15012548814/pins-3-is-the-magic-number
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947318
801
3.140625
3
What’s it like to walk in space? In the first part of an examination of the challenges of living in space, our space correspondent steps outside the International Space Station. “Here we go…” “Whoah!” is all I can manage as I find myself transported from a cluttered office in Houston to Earth orbit. Below, 350km (220 miles) away, the blue and white crescent of the Earth. Above me is the glistening white hull of the International Space Station (ISS), its vast solar arrays glinting in the sunshine. Nasa hasn’t invented teleport. This is the agency’s Virtual Reality (VR) Laboratory at the Johnson Space Centre where dreams of space travel really can come true. The lab complements underwater training and prepares astronauts for EVAs [Extra Vehicular Activity] – space walks – and work with robotic arms. Most of the room looks like a regular office with desks, computers and monitors but the rear section resembles an eccentric gym. Ropes hang from the ceiling, metal boxes are suspended on bungee cords and the room is criss-crossed by lines and pulleys. “That’s where most of the virtual reality takes place,” says James Tinch, chief engineer for the Robotics Astronaut Office and manager of the lab. “The crewmembers put on the helmets and they get the sensation that they’re at the space station. The metal boxes with the ropes and pulleys tied to them are a mass handling device, so the crews can get a feel for what it takes to handle a large mass in space and how much trouble they might have just to move it around.” I sit on the chair at the centre of the test area and Tinch gently lowers a harness over my shoulders. This holds the electronics for the virtual reality helmet, which he tightens around my head, allowing me to see an image of the VR environment. Next come the gloves. They look like cycling gloves and are fitted with sensors for grip and movement. I pull them onto my fingers. A box mounted to the ceiling above me will track their position as I move my hands around. Then Tinch clicks the start button and I’m in orbit reaching out to the handrail just outside the airlock. I truly feel that I’m in space and yet to look at me, I’m still sitting on a chair at the centre of a room. “Right now you’re next to the ISS airlock, where the crew members come out,” says Tinch, calmly. But I’m feeling anything but calm as I struggle to comprehend my new surroundings. Try before you buy If I look down I can see the rest of my spacesuit; straight ahead and there are my hands, now encased in astronaut gloves, tightly clasping the handrail. After giving me a few minutes to get acquainted with the view, Tinch instructs me to try to pull myself across one of the space station modules by releasing my left hand from the rail and gripping again further along. The idea is to pull myself around, hand over hand. But I let go too quickly and end up pushing myself away. I try to ‘swim’ back towards the structure, waving my hands wildly back and forth, but realise there’s nothing to push against. One of the major challenges of space walking – and a fuller understanding of Newton’s laws of motion – starts to become apparent. “In space your hands pretty well do all your work for you,” says Tinch. “So your legs can kick and do anything but they’re not helping you.” “One of the things you’ll find in space is that your wrist is one of the primary sources of how you move your body around, so astronauts do a lot of exercises with their hands and wrists to make sure they’re strong enough,” Tinch explains. “It’s a long day in those suits as you’re working against the suit and working against yourself, trying to get the work done.” Quite how difficult space walking could prove to be was first brought home to Nasa in 1966, when astronaut Gene Cernan left the confines of the Gemini 9 spacecraft for the world’s third EVA. Later described by Cernan as “the spacewalk from hell,” he fought to control his tether and tumbled in a “slow motion ballet.” By the end, his heart rate had tripled, his visor had fogged up and he struggled to get back into the capsule. Although I wasn’t in any danger (except perhaps from falling off my chair), forty-six years later, I experienced similar problems. If I moved my arm one way, my virtual body spun the other. The normal rules of movement that we are accustomed to on Earth do not apply in space. Imagine the simple act of tightening a bolt – without something to push against, as you turn the bolt you end up spinning in the opposite direction, achieving nothing. To overcome this problem, the ISS is fitted with handrails, footholds and often the crew will also use a robotic arm to assist them. And they cannot just pop outside when the mood takes them. Lessons learnt over the years mean that every EVA is meticulously planned and choreographed. In fact, “space dance” might be a better way of describing what’s involved. Tinch explains that the VR lab enables astronauts to solve problems before they try it for real. “If I have these four bolts I have to undo, what’s the best way for my body position to be? So you’re trying to do the choreography of the EVA and trying to figure out what works best for that workspace.” For those already on the ISS, the lab is developing VR helmets that can hook up to the station’s own laptops so astronauts can refresh their training on the job. After 20 minutes in orbit, I’m exhausted. My back aches, my face is dripping with sweat and my wrists are sore. As he lifts off the helmet, Tinch assures me that astronauts trying this for the first time have similar problems. If this were real, I would be wearing a bulky spacesuit, looking through a visor and would not have the luxury of stopping when I got a bit tired. The experience has given me a new appreciation of the training, skill and effort it takes to operate in the uncompromising space environment. A reality check for those of us who advocate manned missions to the Moon and Mars that we should never take this stuff for granted. Space walking may look like fun but, once you get over the amazing view, it is hard. Really hard. In future columns Richard will be reporting from inside the space station control room and the full-sized mock-up of the ISS at Houston to discover what it takes to keep the astronauts alive.
<urn:uuid:3a1f0c28-90f8-48a7-b3f1-7391070c9967>
CC-MAIN-2013-20
http://www.bbc.com/future/story/20121123-taking-a-walk-in-space/print
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946659
1,469
2.75
3
Powers of the Federal Government the Constitution set the basis for the government we have today. Powers are divided between the federal (or national) government and the 50 states. The Founding Fathers knew they had to leave enough powers with the states when they were writing the Constitution. If they didn't, they knew the state legislatures would never ratify the Constitution. All states were granted the right to control certain things within their borders. They could do so as long as they did not interfere with the rights of other states or the nation. Show What You Know
<urn:uuid:f4c8bba8-6700-4330-88e8-06b5472a20d8>
CC-MAIN-2013-20
http://www.congressforkids.net/Constitution_powersoffedgov.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.982185
120
3.890625
4
Comparing Common Windows Terms with Mac Terms Computers seem to run on jargon. Many of the buzzwords and common terms used to work with Macs are exactly the same as those used for Windows: files, users, log on, log out, open, close, shut down, help, and most networking and Internet terms. Terms used to describe the graphical interface are mostly the same, too: menu, check box, dialog box, radio button, dragging, clicking, and double-clicking. But some terms are different between the Mac and Windows PC. The following table provides the equivalent terms or types of programs for each platform. |Windows||Mac OS X| |Control Panel||System Preferences| |Exit (Alt+FX)||Quit (Command+Q)| |My Documents||Documents folder| |My Music||Music folder| |My Pictures||Pictures folder| |Recycle Bin||Trash Can| |Hourglass cursor (busy signal)||Spinning beach ball (busy signal)| |Windows Explorer||Finder window| |Windows Update||Software Update|
<urn:uuid:78714a8d-7207-41a2-812c-3d0707cc9869>
CC-MAIN-2013-20
http://www.dummies.com/how-to/content/comparing-common-windows-terms-with-mac-terms.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.714622
234
2.65625
3
Serum iron is a test that measures how much iron is in your blood. Fe+2; Ferric ion; Fe++; Ferrous ion; Iron - serum How the test is performed: A blood sample is needed. For information on how this is done, see: Venipuncture Iron levels are highest in the morning. It's best to do this test in the morning. How to prepare for the test: Make sure your doctor knows about all the medications your are taking. Drugs that can increase iron include estrogens, birth control pills, and methyldopa. Drugs that can lower iron include cholestyramine, colchicine, deferoxamine, methicillin, allopurinol, and testosterone. How the test will feel: When the needle is inserted to draw blood, some people feel moderate pain. Others will feel only a prick or stinging sensation. Afterward, there may be some throbbing. Why the test is performed: Your doctor may order this test if you have signs of low iron (iron deficiency). - Iron: 60-170 mcg/dL TIBC : 240-450 mcg/dL - Transferrin saturation: 20-50% Note: mcg/dl = micrograms per deciliter Note: Normal value ranges may vary slightly among different laboratories. Talk to your doctor about the meaning of your specific test results. The examples above show the common measurements for results for these tests. Some laboratories use different measurements or may test different specimens. What abnormal results mean: Higher-than-normal levels may mean: Lower-than-normal levels may mean: Other conditions under which the test may be performed: - Anemia of chronic disease What the risks are: There is very little risk involved with having your blood taken. Veins and arteries vary in size from one patient to another and from one side of the body to the other. Taking blood from some people may be more difficult than from others. Other risks associated with having blood drawn are slight but may include: - Excessive bleeding - Fainting or feeling light-headed - Hematoma (blood accumulating under the skin) - Infection (a slight risk any time the skin is broken) Ginder G. Microcytic and hypochromic anemias. In: Goldman L, Schafer AI, eds. Cecil Medicine . 24th ed. Philadelphia, Pa: Saunders Elsevier; 2011:chap 162. Yee DL, Bollard CM, Geaghan SM. Appendix: Normal Blood Values: Selected Reference Values for Neonatal, Pediatric, And Adult Populations. In: Hoffman R, Benz EJ, Shattil SS, et al, eds. Hematology: Basic Principles and Practice. 5th ed. Philadelphia, Pa: Elsevier Churchill Livingstone; 2008:chap 164. |Review Date: 2/8/2012| Reviewed By: Todd Gersten, MD, Hematology/Oncology, Palm Beach Cancer Institute, West Palm Beach, FL. Review provided by VeriMed Healthcare Network. Also reviewed by Linda J. Vorvick, MD, Medical Director and Director of Didactic Curriculum, MEDEX Northwest Division of Physician Assistant Studies, Department of Family Medicine, UW Medicine, School of Medicine, University of Washington; David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc. The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. Call 911 for all medical emergencies. Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. © 1997- A.D.A.M., Inc. Any duplication or distribution of the information contained herein is strictly prohibited.
<urn:uuid:c72aaad7-0506-4b8a-b102-4e785b68cef9>
CC-MAIN-2013-20
http://www.mercydurango.org/body.cfm?id=186&action=detail&AEArticleID=003488&AEProductID=Adam2004_1&AEProjectTypeIDURL=APT_1
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.848395
832
3.0625
3
Practice of engaging in sexual activity, usually with individuals other than a spouse or friend, in exchange for immediate payment in money or other valuables. Prostitutes may be of either sex and may engage in either heterosexual or homosexual activity, but historically most prostitution has been by females with males as clients. Prostitution is a very old and universal phenomenon; also universal is condemnation of the prostitute but relative indifference toward the client. Prostitutes are often set apart in some way. In ancient Rome they were required to wear distinctive dress; under Hebrew law only foreign women could be prostitutes; in prewar Japan they were required to live in special sections of the city. In medieval Europe prostitution was licensed and regulated by law, but by the 16th century an epidemic of venereal disease and post-Reformation morality led to the closure of brothels. International cooperation to end the traffic in women for the purpose of prostitution began in 1899. In 1921 the League of Nations established the Committee on the Traffic in Women and Children, and in 1949 the UN General Assembly adopted a convention for the suppression of prostitution. In the U.S. prostitution was first curtailed by the Mann Act (1910), and by 1915 most states had banned brothels (Nevada being a notable exception). Prostitution is nevertheless tolerated in most U.S. and European cities. In The Netherlands many prostitutes have become members of a professional service union, and in Scandinavia government regulations emphasize hygienic aspects, requiring frequent medical examination and providing free mandatory hospitalization for anyone found to be infected with venereal disease. Prostitutes are very often poor and lack skills to support themselves; in many traditional societies there are few other available money-earning occupations for women without family support. In developing African and Asian countries, prostitution has been largely responsible for the spread of AIDS and the orphaning of hundreds of thousands of children.
<urn:uuid:06a09860-8cc1-4b80-a210-d38acd0ebd3b>
CC-MAIN-2013-20
http://www.merriam-webster.com/dictionary/prostitution
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967392
389
2.75
3
|Search Results (6 videos found)| |NASASciFiles - Meteors NASA Sci Files segment explaining what meteors, meteoroids, and meteorites are and the differences in these. Keywords: NASA Sci Files; Rock; Comet; Outer Space; Meteor; Meteoroid; Meteorite; Sonic Boom; Speed of Sound; Seismic Activity; Fire Ball; Shooting Star; Popularity (downloads): 2170 |NASAWhy?Files - Equilibrium NASA Why? Files segment explaining the concept of equlibrium and how the Treehouse Detectives could maintain equlibrium in a Martian environment. Keywords: NASA Why? Files; Adaptation; Environment; Oxygen; Atmosphere; Astronauts; Trash Management; Module; Sunlight; Gravity; Equlilibrium; Balanced System; Mars; Habitat; Weather; Meteors; Plants; Algae; Algal Bloom; Fish; Popularity (downloads): 1401 |NASASciFiles - Moon Phases NASA Sci Files segment explaining the phases of the moon and how they are created. Keywords: NASA Sci Files; Gravity; Craters; Meteors; Astroids; Water; Earth; Moon; Moon Phases; Illuminations; Revolve; New Moon; Full Moon; First Quarter; Third Quarter; Sun; Lunar Phases; Axis; Apollo; Tides; Beach; Gravitational Pull; Oceans; Century; Popularity (downloads): 3386 |NASASciFiles - The Case of the Shaky Quake NASA Sci Files video containing the following eleven segments. NASA Sci Files segment exploring the different types of waves that earthquakes create. NASA Sci Files segment exploring faults and... Keywords: NASA Sci Files; Earthquake; Waves; Primary; Compressional Waves; Secondary; Sheer Waves; Earth; Vibrate; Epicenter; Surface Wave; Crust; Rock Layers; Faults; Normal Fault; Hanging Wall; Foot Wall; Reverse Fault; Strike-Slip Fault; Lithosphere; Plates; Earth; Fault Line; Plate Boundaries; Divergent Boundaries; Rift Valleys; Volcanoes; Convergent Boundary; Mountains; Transform Boundary; San Andreas Fault; Interplate Earthquakes; Fossils; Plate Tectonics; Dinosaur; Bones; Excavation; Climate; Riverbed; Arid; Equator; Continental Drift; Alfred Wagner; Pangaea; Rock Structures; Sandstone; Chimney Formation; Grand Canyon; Global Positioning System; Stations; Satellites; Crustal Movement; Earth; Blind Fault; Computer Simulation; Slip Rate; Prediction; Displacement; Layers; Core; Diameter; Iron; Nickle; Solid; Liquid; Dense; Mantle; Basalt; Granite; Density; Graduated Cylinder; Plates; Measurement; Richter Scale; Moment Magnitude Scale; Scientific Journals; Observations; Data; Epicenter; Comet; Outer Space; Meteor; Meteoroid; Meteorite; Sonic Boom; Speed of Sound; Seismic Activity; Fire Ball; Shooting Star; Earthquake Facts; Frenquency; Location; Intensity; California; Alaska; Weather; Seismograph; Inertia; Newton; Vertical Motion; Horizontal Motion; Seismology; Tremor; S Waves; P Waves; Sound Waves; Seismogram; Triangulation; Graph; Compass; World Map; Student Activity; Epicenter; Seismic Station; Popularity (downloads): 2158 |NASASciFiles - The Case of the Galactic Vacation NASA Sci Files video containing the following eleven segments. NASA Sci Files segment exploring the Arecibo Observatory, what it does, and where it is located. NASA Sci Files segment... Keywords: NASA Sci Files; Arecibo Observatory; Telescope; Radio Telescope; Radio Waves; Signals; Universe; Pulsar; Quasar; Reflector; Receiver; Electrical Signal; Control Room; Scientists; Equator; Wavelength; Optical; Atmosphere; Solar System; Galaxy; Extraterrestrial Intelligence; Artificial Signal; Forces of Motion; Free Fall; Weightlessness; Inertia; Acceleration; Parabola; Accelerometer; Space Travel; Roller Coaster; Navigation and Vehicle Health Monitoring System; Modified Bathrooms; Gravity; Zero Gravity; Exercise Equipment; Kitchen; Starship 2040; Orbit; International Space Station; Living Environment; Earth; Commander; Mars; Tourist Attraction; Canyon; Crater; Solar System; Planet; Water; Liquid; Frozen; Seasons; Axis; Polar Ice Caps; Atmosphere; Gas; Carbon Dioxide; Oxygen; Nitrogen; Hydrogen; Temperature; Space Suit; Common Denomenator; Meteors; Astroids; Water; Earth; Moon; Moon Phases; Illuminations; Revolve; New Moon; Full Moon; First Quarter; Third Quarter; Sun; Lunar Phases; Axis; Apollo; Tides; Beach; Gravitational Pull; Oceans; Century; Space; Distances; Parallax; Experiment; Student Activity; Optics; Protractor; Vertex; Angle; Data; Propulsion System; Space Radiation; Bone Mass; Chemical Rockets; Spaceship; Gases; Plasma; Magnetic Field; Exhaust; Energy; Heat; Electricity; Nuclear Power; Fusion; Thermonuclear Reaction; Technology; Arecibo Telescope; Solar System; Extra-solar Planets; Stars; Planets; Lightyears; Reflecting Telescope; Light; Dim; Betelgeuse; Giant Star; Life; Colors; Red; Blue; Temperature; Yellow; Dwarf Star; Sun; Habitable Zone; Ultraviolet Radiation; Puerto Rico; Galaxy; Orion Nebula; Hydrogen Gas; Whirlpool Galaxy; Extreme Environment; Boiling Temperature; Air Pressure; Celcius; Oxygen; Gravitational Force; Jupiter; Kilometers; Inner Planets; Mercury; Venus; Lava Flows; Helium; Saturn; Uranus; Neptune; Pluto; Astronomer; Proxima Centauri; Popularity (downloads): 1602 |NASAWhy?Files - The Case of the Inhabitable Habitat NASA Why? Files video containing the following fifteen segments. NASA Why? Files segment explaining how astronauts adapt to a new environment like space. NASA Why? Files segment explaining how astronauts... Keywords: NASA Why? Files; NASA Why? Files; Adaptation; Astronauts; Space; Altitude Sickness; Oxygen; Environment; Elevation; Sea Level; Training; Weightlessness; Free Fall; Parabola; Weightless Wonder; Airplane; Simulate; Zero Gravity; Vomit Comet; Trash Management; Module; Sunlight; Gravity; Equlilibrium; Balanced System; Mars; Habitat; Weather; Meteors; Plants; Algae; Algal Bloom; Fish; Atmosphere; Minerals; Water; Photosynthesis; Carbon Dioxide; Food Web; Consumers; Producers; Decomposers; Carnivores; Herbivores; Ominvore; Community; Survival; Bacteria; Fungi; Desert; Ocean; Food; Shelter; Reef; Lagoon; Forest; Pond; Animals; Rain Forest; Predators; Behaviors; Gravity; Outer Space; Microgravity; Earth; NASA; Gravitational Force; Boiling Point; Vacuum Pump; Martian Atmosphere; Boil; Density; Ice; Liquid Water; Water Vapor; Student Activity; Migration; Migratory Patterns; Turtles; Data; Coordinates; Food Source; Space Walk; International Space Station; Hubble Space Telescope; Neutral Bouyancy; Laboratory; Orbit; Space Suit; Radiation; Seeds; Plant Growth Chamber; Plant Reproduction; Germinate; Gases; Transpiration; Pores; Leaves; Evaporation; Condensation; Space Vehicles; Food; Nutrition; Space Seeds; Arabidopsis; Mustard Weed; Life Cycle; Control Group; Records; Reproduction; Normal Growth; Bioregenerative System; Extreme Temperature; Space Suit; Radiation; Protection; Outer Space; Air Pressure; Long Johns; Maximum Absorbency Garment; Iterative Process; Gloves; Space Station; Space Trash; Reduce; Reuse; Recycle; Trash Cans; Efficient Packaging; Progress; Hardware; Self-Sufficient; Soil; Nutrients; Terrarium; The Red Planet; Robotic Airplane; Winds; Iron; Lowlands; Highlands; Volcanoe; Canyon; Thin Atmosphere; Cold; Dry; Nitrogen; Argon; Popularity (downloads): 2175
<urn:uuid:7a8e7385-a09c-4655-9cde-6384af66f95d>
CC-MAIN-2013-20
http://www.open-video.org/results.php?keyword_search=true&terms=+Meteors
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.672482
1,787
3.453125
3
Researchers at Rensselaer have developed a new energy storage device that easily could be mistaken for a simple sheet of black paper. The nanoengineered battery is lightweight, ultra thin, completely flexible, and geared toward meeting the trickiest design and energy requirements of tomorrow’s gadgets, implantable medical equipment, and transportation vehicles. Along with its ability to function in temperatures up to 300 degrees Fahrenheit and down to 100 below zero, the device is completely integrated and can be printed like paper. The device is also unique in that it can function as both a high-energy battery and a high-power supercapacitor, which are generally separate components in most electrical systems. Another key feature is the capability to use human blood or sweat to help power the battery. Details of the project are outlined in the paper “Flexible Energy Storage Devices Based on Nanocomposite Paper” published Aug. 13 in the Proceedings of the National Academy of Sciences. The semblance to paper is no accident: more than 90 percent of the device is made up of cellulose, the same plant cells used in newsprint, loose leaf, lunch bags, and nearly every other type of paper. Rensselaer researchers infused this paper with aligned carbon nanotubes, which give the device its black color. The nanotubes act as electrodes and allow the storage devices to conduct electricity. The device, engineered to function as both a lithium-ion battery and a supercapacitor, can provide the long, steady power output comparable to a conventional battery, as well as a supercapacitor’s quick burst of high energy. The device can be rolled, twisted, folded, or cut into any number of shapes with no loss of mechanical integrity or efficiency. The paper batteries can also be stacked, like a ream of printer paper, to boost the total power output. “It’s essentially a regular piece of paper, but it’s made in a very intelligent way,” said paper co-author Robert Linhardt, the Ann and John H. Broadbent Senior Constellation Professor of Biocatalysis and Metabolic Engineering. “We’re not putting pieces together it’s a single, integrated device,” he said. “The components are molecularly attached to each other: the carbon nanotube print is embedded in the paper, and the electrolyte is soaked into the paper. The end result is a device that looks, feels, and weighs the same as paper.” The creation of this unique nanocomposite paper drew from a diverse pool of disciplines, requiring expertise in materials science, energy storage, and chemistry. Along with Linhardt, authors of the paper include Pulickel M. Ajayan, professor of materials science and engineering, and Omkaram Nalamasu, professor of chemistry with a joint appointment in materials science and engineering. Senior research specialist Victor Pushparaj, along with postdoctoral research associates Shaijumon M. Manikoth, Ashavani Kumar, and Saravanababu Murugesan, were co-authors and lead researchers of the project. Other The researchers used ionic liquid, essentially a liquid salt, as the battery’s electrolyte. It’s important to note that ionic liquid contains no water, which means there’s nothing in the batteries to freeze or evaporate. “This lack of water allows the paper energy storage devices to Along with use in small handheld electronics, the paper batteries’ light weight could make them ideal for use in automobiles, aircraft, and even boats. “Plus, because of the high paper content and lack of toxic chemicals, it’s environmentally safe,” Shaijumon said. Paper is also extremely biocompatible and these new hybrid battery/supercapacitors have potential as power supplies for devices implanted in the body. The team printed paper batteries without adding any electrolytes, and demonstrated that naturally occurring electrolytes in human sweat, blood, and urine can be used to activate the battery device. “It’s a way to power a small device such as a pacemaker without introducing any harsh chemicals such as the kind that are typically found in batteries into the body,” Pushparaj said. The materials required to create the paper batteries are inexpensive, Murugesan said, but the team has not yet developed a way to inexpensively mass produce the devices. The end goal is to print the paper using a roll-to-roll system similar to how newspapers are printed. The team of researchers has already filed a patent protecting the invention. They are now working on ways to boost the efficiency of the batteries and supercapacitors, and investigating different manufacturing techniques. The paper energy storage device project was supported by the New York State Office of Science, Technology, and Academic Research (NYSTAR), as well as the National Science Foundation (NSF) through the Nanoscale Science and Engineering Center at Rensselaer. Send comments to: Inside Rensselaer, Strategic Communications and External Relations 1000 Troy Building, 110 Eighth Street, Troy, N.Y. 12180 or to email@example.com. |Rensselaer Polytechnic Institute | About RPI | Virtual Campus Tour | Academics | Research | Student Life | Admissions | News & Events|
<urn:uuid:5530526e-09a8-4a83-9c62-1bf23703d615>
CC-MAIN-2013-20
http://www.rpi.edu/about/inside/issue/v1n2/paperbattery.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.930898
1,121
3.34375
3
Pan across NGC 2841 Star formation is one of the most important processes in shaping the Universe; it plays a pivotal role in the evolution of galaxies and it is also in the earliest stages of star formation that planetary systems first appear. Yet there is still much that astronomers don’t understand, such as how do the properties of stellar nurseries vary according to the composition and density of gas present, and what triggers star formation in the first place? The driving force behind star formation is particularly unclear for a type of galaxy called a flocculent spiral, such as NGC 2841 shown here, which features short spiral arms rather than prominent and well-defined galactic limbs. NASA, ESA and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration Acknowledgment: M. Crockett and S. Kaviraj (Oxford University, UK), R. O'Connell (University of Virginia), B. Whitmore (STScI) and the WFC3 Scientific Oversight Committee. About the Video |Release date:||17 February 2011, 15:00| About the Object |Type:||• Galaxies Images/Videos|
<urn:uuid:8327e9fb-6c2f-405c-8bcb-8aee0e919426>
CC-MAIN-2013-20
http://spacetelescope.org/videos/heic1104b/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.880944
247
3.890625
4
February 7th, 2011 05:56 PM ET Using marijuana, or cannabis, may cause psychosis to develop sooner in patients already predisposed to developing it, and in other patients the drug may even cause psychosis, according to a new study published in the Archives of General Psychiatry. "This finding is an important breakthrough in our understanding of the relationship between cannabis use and psychosis," according to the study. "It raises the question of whether those substance users would still have gone on to develop psychosis a few years later." Patients with psychosis tend to lose touch with reality and are prone to hallucinations and delusions about what is happening around them. Psychosis is frequently reported among patients with diagnosed mental illness such as schizophrenia and bipolar disorder. According to the study led by Australian researchers, in which data from 83 studies involving more than 20,000 patients were analyzed, marijuana users experienced psychosis about three years younger than non-users. Users of other substances (besides pot) experienced symptoms of psychosis two years sooner. Alcohol use had no influence on development of psychosis, according to the study. "Reducing the use of cannabis could be one of the few ways of altering the outcome of the illness because earlier onset of schizophrenia is associated with a worse prognosis," according to the study. "An extra two or three years of psychosis-free functioning could allow many patients to achieve the important developmental milestones of late adolescence and early adulthood that could lower the long-term disability arising from psychotic disorders." But experts say the complexity of interaction between genes and environment, and the possibility that cannabis is, in fact, a way to self-medicate when psychotic symptoms arise are not accounted for in this study. "It is distinctly possible, in fact likely, that folks who experience initial symptoms turn to cannabis in an effort to control them, then end up having a psychotic break of some sort earlier simply because they had their first symptoms earlier," said Mitch Earleywine, an associate professor of psychology at the State University of New York at Albany, who is also a marijuana policy expert. "This predicament makes it look as if cannabis preceded the psychotic symptoms when, in fact...folks with worse symptoms who are more likely to have an early break might simply be more likely to turn to cannabis." Theories about an association between marijuana use and schizophrenia include several –sometimes interrelated - scenarios: The possibility that cannabis causes schizophrenia; that cannabis may cause people vulnerable to schizophrenia to develop symptoms; that cannabis may make schizophrenia symptoms worse; or that people with schizophrenia are more likely to use cannabis, according to the study. Study authors suggest that this study, "lends weight to the view that cannabis use precipitates schizophrenia and other psychotic disorders," perhaps because of some confluence of genetic and environmental factors, or because using cannabis early in life may disrupt brain development. "[This study] found that cannabis is associated with early onset of psychosis and that is most likely true but it doesn't answer the question of which way it goes," said Dr. Charles L. Raison, associate professor in the department of psychiatry at Emory University, and CNNHealth's mental health expert doctor. "Does smoking cannabis early in life make you vulnerable to getting early psychosis or is the first manifestation of psychosis to do drugs and alcohol, or is it both?" Raison added that other studies suggesting a causal relationship between marijuana use and psychosis disagree with this one. Whatever the relationship between cannabis and psychosis, experts can agree that early use of cannabis is problematic. "No one wants to see young people get heavily involved with any psychoactive substances," said Earleywine. From around the web About this blog Get a behind-the-scenes look at the latest stories from CNN Chief Medical Correspondent, Dr. Sanjay Gupta, Senior Medical Correspondent Elizabeth Cohen and the CNN Medical Unit producers. They'll share news and views on health and medical trends - info that will help you take better care of yourself and the people you love.
<urn:uuid:69b9e716-2932-4e81-a229-b757cb33dd78>
CC-MAIN-2013-20
http://thechart.blogs.cnn.com/2011/02/07/marijuana-use-may-speed-psychosis/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956024
811
2.734375
3
The largest-ever three-dimensional map of the distant universe has been created using the light of the brightest objects in the cosmos. Since this distant light took eons to reach Earth, the map is essentially a window back in time, providing an unprecedented view of what the universe looked like 11 billion years ago. Normally, researchers make maps of the universe by looking at galaxies. "Here, we are looking at intergalactic hydrogen gas, which blocks light," said researcher Anze Slosar, a physicist at the U.S. Department of Energy's Brookhaven National Laboratory. "It's like looking at the moon through clouds — you can see the shapes of the clouds by the moonlight that they block." Mapping the universe Scientists from the Sloan Digital Sky Survey relied on the light of the brightest objects in the cosmos, quasars — brilliantly luminous beacons powered by giant black holes. As light from a quasar voyages to Earth, it illuminates clouds of intergalactic hydrogen gas that absorb light at specific wavelengths depending on the distances between each quasar and these clouds. This leads to an irregular pattern of quasar light known as the "Lyman-alpha forest." To make a full three-dimensional map of the universe, the researchers relied on 14,000 quasars. The map reveals a time 11 billion years ago, when the first galaxies were just beginning to come together under the force of gravity to form the first large clusters. "The most exciting thing for me personally is proving wrong everyone who was telling us that it is never going to work," Slosar told Space.com. The use of the Lyman-alpha forest in creating a 3-D map was unproven, "a large investment of time, 20 percent of a big international project, and it sort of had to work. But we were the first to show that it actually works. So, while we haven't yet discovered anything amazing about the universe itself using this technique, we demonstrated that it does work and that we will very likely discover new things." These observations came from the Baryon Oscillation Spectroscopic Survey (BOSS), the largest of the four projects making up the latest phase of the Sloan Digital Sky Survey. When BOSS completes its observations of about 140,000 more quasars by 2014, astronomers can make a map 10 times larger than the one being released today. "With that much data, we're bound to find things that we never expected," said researcher Patrick Petitjean, a quasar expert at the Institute of Astrophysics of Paris. Uncovering the mysteries For instance, the ultimate goal of such maps is to study how the expansion of the universe has changed during its history, which could shed light on the mysterious dark energy that seems to drive the accelerating expansion of the universe. Space news from NBCNews.com Teen's space mission fueled by social media Science editor Alan Boyle's blog: "Astronaut Abby" is at the controls of a social-media machine that is launching the 15-year-old from Minnesota to Kazakhstan this month for the liftoff of the International Space Station's next crew. - Buzz Aldrin's vision for journey to Mars - Giant black hole may be cooking up meals - Watch a 'ring of fire' solar eclipse online - Teen's space mission fueled by social media "Dark energy is one of the most surprising discoveries in physics in the last 20 years," Slosar noted. "Nobody has a foggiest idea of what it could be. So we study it by studying the expansion history and growth of structure in the universe. To study these we make maps of the universe at different epochs." By the time BOSS ends, "we will be able to measure how fast the universe was expanding 11 billion years ago with an accuracy of a couple of percent," said researcher Patrick McDonald of Lawrence Berkeley and Brookhaven National Laboratories, who pioneered techniques for measuring the universe with the Lyman-alpha forest and helped design the BOSS quasar survey. "Considering that no one has ever measured the cosmic expansion rate so far back in time, that's a pretty astonishing prospect." The scientists could, for example, "discover that dark energy actually kicked in 11 billion years ago rather than 7 billion as predicted by (the) simplest model and that would be just mind-blowing," Slosar said. "The potential for discovering anomalies is great." The scientists detailed their findings May 1 at a meeting of the American Physical Society in Anaheim, Calif. © 2013 Space.com. All rights reserved. More from Space.com.
<urn:uuid:0367d15a-aa73-432e-8422-0a5728b8997a>
CC-MAIN-2013-20
http://www.nbcnews.com/id/42863590/ns/technology_and_science-space/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945081
960
4
4
v. gleaned, glean•ing, gleans To gather grain or other produce left by reapers. 1. To gather grain or produce left behind by reapers. 2. To collect bit by bit 3. To strip (as a field) of the leavings of reapers Have you ever been gleaning? Ever thought about what that term really means? The definition is listed above. It is the act of picking fruits or vegetables (or grains) that were left behind during the initial harvest. In its simplest form, it is the act of saving food from being plowed under and going to waste. Often, farmers simply can’t afford to incur the labor, materials and transportation costs to retrieve the “leftovers” – and as a result each season millions of pounds of fresh food is left in the fields. On Saturday, Feb. 19, more than 40 high school students from Chets Creek Church in Jacksonville got an opportunity to experience the process firsthand when they traveled to Hastings, Fla., to glean broccoli. The majority of them had never been on a farm, or even knew what it meant to ‘glean’ broccoli from a field. It turned out to be a wonderful experience, and they asked if they could go again! The opportunity was made possible by Second Harvest’s partnership with Society of St. Andrew (SOSA) and SOSA’s relationships with local farmers, as well as the beautiful weather we enjoyed that day. In less than three hours, this group of high school kids picked 2,745 pounds of broccoli (four pallets worth)! Second Harvest was also able to pick up five bins of cabbage and two bins of citrus from neighbor farms – ultimately providing a full truck of fruits and vegetables to families and individuals experiencing hunger in our north Florida communities. We are very grateful for those farmers that allow us this opportunity and to the kids that worked so hard and, in the process, created memories that will last a lifetime. SOSA has gleaning opportunities on most Wednesdays and Saturdays during the growing season, which lasts from November to June. If your group is interested in gleaning, please contact Second Harvest volunteer coordinator Jessie Sanders at 904.517.5560, email@example.com. (All photos provided by Jeff Taylor Photography)
<urn:uuid:d51ed15b-08dd-43c2-ae23-b1388c2d2c6b>
CC-MAIN-2013-20
http://www.wenourishhope.org/blog/gleaning-and-grinning-chets-creek-church-youth-find-joy-broccoli?page=132
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.965813
490
2.96875
3
On Thursday, Earth Day was celebrated throughout the country, and the Whittier Daily News included a supplemental section that featured "green" possibilities and initiatives. Earth Day focuses on the need to care for the Earth, a kind of human practice that can be understood as emerging out of a spiritual attitude. Our present tradition of Earth Day festivities, however, was generated by political actions. Conceived by former Wisconsin Sen. Gaylord Nelson, the first Earth Day celebration took place on April 22, 1970 when four million advocates throughout the nation participated in consciousness-raising events. This year marked the 40th anniversary of the founding of the celebration, a number that resonates with biblical stories and symbols. In biblical literature the number 40 signifies a time of trial or testing that is followed by a period of new possibilities. Noah and his kin boarded the ark for 40 days and 40 nights of flood, followed by fresh prospects for the survivors to start society again. Moses and his motley crew of refugees wandered 40 years in the vacant vastness of Sinai before crossing the Jordan River into Canaan. They perceived the land full of promise - flowing with milk and honey - a place where their fidelity to God could flourish. And Jesus went into the wilderness for 40 days of soul-searching solitude. This period of pondering preceded his baptism by John, which initiated his mission of healing and Unlike these biblical stories in which the period of 40 days or years signifies the end of a trial period, the celebrations of Earth Day hereafter will continue to raise awareness about new possibilities for caring for creation. So the alignment of the 40th anniversary of Earth Day with significant biblical periods is simply coincidental. The real spiritual character of Earth Day lies in its charge for humans to care for the earth. In one of the biblical accounts of creation, humans are given the responsibility to exercise dominion over creation (Genesis 1:28). In biblical usage, dominion does not mean wanton domination. Instead, it calls for responsible care, to be God's emissary in working with plants and animals. This idea is reaffirmed in Psalm 24, which my sixth-grade teacher had my classmates and me memorize and recite. Perhaps she was planting seeds for celebrating Earth Day a decade before its initial American celebration! The psalm begins with these words: "The earth is the Lord's and all that is in it, the world, and those who live in it" (RSV). Because the earth belongs to God, people must tend it and its inhabitants with respect and love. With such reverence for the wonder and beauty of the earth, Christians throughout the last century have often sung a joyful hymn whose text was written by Folliot Pierpoint. "For the Beauty of the earth, for the glory of the skies, for the love which from our birth, over and around us lies: Lord of all, to thee we raise, this our hymn of grateful praise." Although Earth Day was not one of the agricultural festivals during biblical times, the roots for its celebration reach back into the earliest stories and statements of biblical faith, and the love of the earth has been expressed in psalms and hymns throughout the years. Indeed, the celebration of Earth Day goes hand in hand with spiritual practices. Joseph L. Price is the Genevieve S. Connick Professor of Religious Studies at Whittier College.
<urn:uuid:96a1f1e5-16dd-4660-90f3-3df79c714bf4>
CC-MAIN-2013-20
http://www.whittierdailynews.com/news/ci_14938098
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957879
701
3.109375
3
The growing use of handheld devices and social media among students is creating a technology tipping point for schools that could completely break down the barriers between teaching platforms within five to 10 years, Bill Gates said today. Tablet computers, smartphones, e-readers, digital textbooks and the accessibility of digital video including YouTube are playing major roles in changing the way students are learning at both the K-12 and higher-education levels, Gates said during the keynote at the education arm of the South by Southwest conference in Austin, Texas. Digital video exercises incorporated into textbooks online are blurring the line between teaching and assessment to the extent that there really isn’t a boundary anymore between the two, the Microsoft co-founder and chairman said. “Finally there are people looking at whether textbooks should be fully digital,” he said, speaking to an audience of teachers, administrators and representatives of educational technology companies. Gates has championed the cause of global health through the Bill & Melinda Gates Foundation since stepping down from day-to-day operations at Microsoft, but improving the state of US education has been a major focus of his humanitarian work as well. One issue is the fact that standardized test scores at public schools have largely remained the same over the past decade or so even though resources being pumped into public school districts have doubled, Gates said. Better use of technology could be the key to improving public schools, he said. Some 44 percent of students in grades 6-8 say they want to read on a digital device, according to data presented by Gates during his SXSW talk. Meanwhile, 80 percent of high school students have access to smartphones, and Twitter use among high school students tripled last year, according to Gates. There is still the issue of cost when it comes to supplying students with, say, tablet computers or e-readers, “but we’re just on the cusp where combo tablet-PCs devices are rich enough [in functionality] and cheap enough that this will clearly be the way it’s done,” Gates said. Currently, the markets for technology content, services and back-end infrastructure for US schools amount to roughly US$420 million, but Gates said those markets could reach $9 billion in the future. The vision has challenges, Gates acknowledged. Education, for example, comprised a mere one percent of all venture capital transactions between 1995 and 2011, while technology in general and health care took in 38 percent and 19 percent of the pie, respectively, according to Gates. Other barriers include proving to administrators that technology works and ensuring that teachers are well-trained. “We’re going to have to grow this,” Gates said. Ultimately, he hopes that better use of technology will help provide more personalised learning options for students and lead to integration of software programs used in schools.
<urn:uuid:54888203-a491-4b95-b4fe-7089c00d8017>
CC-MAIN-2013-20
http://www.macworld.com.au/news/bill-gates-schools-are-at-a-technology-tipping-point-88771/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961599
585
2.9375
3
Institute for the Study of Earth, Oceans, and Space at UNH Scientists Say Developing Countries Will Be Hit Hard By Water Scarcity in the 21st Century By Sharon Keeler UNH News Bureau July 11, 2001 DURHAM, N.H. --The entire water cycle of the globe has been changed by human activities and even more dramatic changes lie ahead, said a group of experts at an international conference in Amsterdam on global change this week. "Today, approximately 2 billion people are suffering from water stress, and models predict that this will increase to more than 3 billion (or about 40 percent of the population) in 2025," said Charles Vorosmarty, a research professor in the University of New Hampshire's Institute for the Study of Earth, Oceans, and Space. There will be winners and losers in terms of access to safe water. The world's poor nations will be the biggest losers. Countries already suffering severe water shortages, such as Mexico, Pakistan, northern China, Poland and countries in the Middle East and sub-Saharan Africa will be hardest hit. "Water scarcity means a growing number of public health, pollution and economic development problems," said Vorosmarty. "To avoid major conflict through competition for water resources, we urgently need international water use plans," added Professor Hartmut Grassl from the Max-Planck-Institute for Meteorology in Germany. "I believe this should be mediated by an established intergovernmental body." The water cycle is affected by climate change, population growth, increasing water demand, changes in vegetation cover and finally the El Nino Southern Oscillation, bringing drought to some areas and flooding to others. Surprisingly, at the global scale, population growth and increasing demand for water -- not climate change -- are the primary contributing factors in future water scarcity to the year 2025. "But at the regional scale, which is where all the critical decisions are made, it is the combination of population growth, increasing demand for water, and climate change that is the main culprit," said Vorosmarty. According to El Nino expert, Professor Antonio Busalacchi from the University of Maryland, the two major El Nino events of the century occurred in the last 15 years and there are signs that the frequency may increase due to human activities. "In 1982-83, what was referred to as the "El Nino event of the century" occurred with global economic consequences totaling more than $13 billion," said Busalacchi. "The recently concluded 1997-1998 El Nino was the second El Nino event of the century with economic losses estimated to be upward of $89 billion."
<urn:uuid:62224b44-50bb-4f13-ba4c-993a1afd2ab3>
CC-MAIN-2013-20
http://www.unh.edu/news/news_releases/2001/july/sk_20010712vorosmarty.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944163
548
3.140625
3
PRIAM -- Their grandparents knew the value of corn cobs as a fuel, and used them to help warm the house or heat up the oven for baking bread. Yet brothers Lonnie Fosso and Ryan Fosso couldn't help but admit their amazement Wednesday as they watched two combines harvest their corn while also collecting the corn cobs. The corn cobs are being harvested as fuel for the Chippewa Valley Ethanol Company plant in Benson. "I wish they could see this,'' said Lonnie Fosso. Dozens of people did. The harvest at the Fosso farm south of Priam served as the first of three demonstration sites for a first-of-its-kind project in the region. Chippewa Valley Ethanol Company has been powering some of the operations at its 47 million-gallon-a-year ethanol plant by feeding biomass to a gasifier. The Frontline Technology system turns the biomass -- currently wood chips -- into a synthetic gas that can replace natural gas. After more than four months of trial use, the gasifier has proven its ability to displace about 25 percent of the natural gas used by the plant. The goal is to someday see biomass replace 75 percent to perhaps as much as 90 percent of the natural gas used at the plant, according to Gene Fynboh, a member of the board of directors and coordinator for the biomass harvest project. Chippewa Valley Ethanol Company would like corn cobs raised by its members to become the biomass fuel of choice. Corn cobs are relatively easy to collect, store and handle when compared to other types of biomass. They have good energy content. And, they hold very little of the nutrients that are returned to the soil by corn stover, Fynboh said. The cooperative would rather pay its farmer members for corn cobs and circulate the dollars at home than buy expensive fossil fuel from sources a long ways from home, Fynboh said. The evidence so far indicates that locally produced biomass is a lower-cost fuel than natural gas, he added. Modern technology makes it much more efficient to convert biomass to energy than when the Fossos' grandparents tossed corn cobs into the wood stove. But in some respects, we're still stuck in the days of the bang board wagon in terms of the technology and knowledge needed to harvest and store corn cobs on a large scale. "This is something we haven't done for 50 years,'' Fynboh said. "We have to learn it all over again.'' Chippewa Valley Ethanol Company and the University of Minnesota West Central Research and Outreach Center obtained $250,000 in grant funds to find a better way. At the Fosso farm, combines demonstrated two different ways to collect the corn cobs that would otherwise have been chopped and spit out with the rest of the corn stover on to the field. One combine towed a pull-behind, Vermeer CCX770 Cob Harvester to collect the cobs. The other had a Ceres Ag Residue Recovery System mounted piggyback on it for the same purpose. With either system, the cobs were periodically dumped into trucks or onto piles. Either system will add fuel costs to the harvest, and necessitate some additional help. An extra truck to haul cobs and a pay loader to build the piles were at work. Fynboh said goal number one is to harvest the corn as efficiently as if the cobs were not being collected. The Fossos said that is a must for them: Farmers cannot afford to jeopardize their corn harvest in any way. The Fossos and Chippewa Valley Ethanol Company are also interested in knowing how the costs of harvesting the corn cobs compare to their market value as a fuel. Harvesting corn cobs as fuel could increase the overall economic return on corn fields, Fynboh said. "We have a resource that is right under our nose here,'' he said. Speaking of the ground, it too is under study. Fynboh said the research will examine how much of the biomass can be taken without adversely affecting the soil or future yields. Fynboh said Chippewa Valley Ethanol Company has lined up 5,000 acres of corn from its members to harvest for the research. The cooperative's 980 owner-members raise corn on 112,000 acres. That is ample to supply all of the biomass energy the company is seeking, Fynboh said.
<urn:uuid:2030c8a3-dffb-4840-bb8f-2ffc5a3cf310>
CC-MAIN-2013-20
http://www.wctrib.com/content/cvec-looks-harvesting-corn-cobs-form-biomass-fuel
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962368
934
2.53125
3
Pauline Hopkins and the American Dream Pauline Elizabeth Hopkins was perhaps the most prolific black female writer of her time. Between 1900 and 1904, writing mainly for Colored American Magazine, she published four novels, at least seven short stories, and numerous articles that often addressed the injustices and challenges facing African Americans in post–Civil War America. In Pauline Hopkins and the American Dream, Alisha Knight provides the first full-length critical analysis of Hopkins’s work. Scholars have frequently situated Hopkins within the domestic, sentimental tradition of nineteenth-century women's writing, with some critics observing that aspects of her writing, particularly its emphasis on the self-made man, seem out of place within the domestic tradition. Knight argues that Hopkins used this often-dismissed theme to critique American society's ingrained racism and sexism. In her “Famous Men” and “Famous Women” series for Colored American Magazine, she constructed her own version of the success narrative by offering models of African American self-made men and women. Meanwhile, in her fiction, she depicted heroes who fail to achieve success or must leave the United States to do so. Hopkins risked and eventually lost her position at Colored American Magazine by challenging black male leaders, liberal white philanthropists, and white racists—and by conceiving a revolutionary treatment of the American Dream that placed her far ahead of her time. Hopkins is finally getting her due, and this clear-eyed analysis of her work will be a revelation to literary scholars, historians of African American history, and students of women’s studies. Alisha Knight is an associate professor of English and American Studies at Washington College. Her published articles include “Furnace Blasts for the Tuskegee Wizard: Revisiting Pauline E. Hopkins, Booker T. Washington, and the Colored American Magazine,” which appeared in American Periodicals.
<urn:uuid:3149e7ce-57f0-4c3f-a286-cfdb585a83ee>
CC-MAIN-2013-20
http://freading.com/ebooks/details/r:download/b3JnLmJpYmxpb3ZhdWx0Ljk3ODE1NzIzMzg4OTA=/new
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960841
392
2.640625
3
ThomasNet Articles about custom manufacturing and fabrication including careers, employment trends, laser cutting, powder coating, history of manufacturing, machinist resources and more. California ConnectEd Linked Learning connects strong academics with real–world experience in a wide range of fields, such as engineering, arts and media, and biomedical and health sciences—helping students gain an advantage in high school, college, and career. Teaching Kids Real Math with Computers – Conrad Wolfram TED Ideas Worth Spreading Relates math to real work and everyday living An Overview of the Methodological Approach of Action Research, Rory O’Brien, Faculty of Information Studies, University of Toronto Resource Area For Teaching (RAFT) Sacramento is a new non-profit organization that fosters hands-on teaching as the best way for teachers to teach and students to learn in pre-school through 12th grade education and community programs. Resource Area For Teaching (RAFT) With more than 8500 members throughout Silicon Valley at our San Jose and Redwood City, CA resource centers, RAFT produces innovative “hands-on” teaching idea sheets and packaged activity kits that are created around important concepts in science, technology, engineering and math (STEM) as well as reading and art. CTE Central Find out how the CTE Pathways Initiative funds are being used in communities to make a difference for students, faculty, employers/industry and others. National Girls Collaborative Project NGCP brings together organizations throughout the United States that are committed to informing and encouraging girls to pursue careers in science, technology, engineering, and mathematics (STEM). California Technology Education Resource (CTER) Center provides a resource for teachers and administrators in technology education in the state of California Project-based learning: a case for not giving up Suzie Boss on Edutopia. Seven Essentials for Project-Based Learning John Larmer and John R. Mergendoller on ASCD
<urn:uuid:929cf978-4b3d-4a02-9a78-2662445d3bd7>
CC-MAIN-2013-20
http://sierraschoolworks.com/section/curriculum/resources/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.906137
396
2.53125
3
CSE 341 -- Programming Languages Department of Computer Science and Engineering, University of Washington Steve Tanimoto (instructor) and Jeremy Baer (teaching assistant). Assignment P1Version 1.00 of May 16 Subject to change. Perl WarmupDue date and time: Wednesday, June 2, 1999, in class. (Note change of due date). Turn in this assignment as a hardcopy printout. For #1 show the program, and examples of input and output for 2 separate example files, the first of which uses the 5-line text file in the assignment. For #2, give printouts of (1) the HTML forms page as it looks in the browser, with the form areas filled out ready to be submitted, (2) the Perl source code, and (3) the web page that is generated by the script. In case you choose option 2a, give a printout of the email message, too. Instructions: Do both exercise 1 and exercise 2. 1. Write a perl program that processes a file and builds an inverted index that tells for each word in the file all of the line numbers where the word can be found in the file. For example, if the input file contains the 5 lines of text... This is a sample file and the word zebra occurs on lines 2 and 5. The numbers here are treated just like words such as zebra. Then the inverted index would look like the following when printed out. and: 2, 3 the: 2, 3 zebra: 2, 5 Note: All words have been converted to lower case, and punctuation has been ignored. Peform a kind of "stemming" on the words as they are put into the index. For example, after stemming, each of the following words become "jump". jump, jumped, jumping. Your solution to this is permitted to make mistakes. It doesn't have to handle irregular verbs. You may be able to avoid converting "swing" to "sw" by avoiding stemming whenever the result would be shorter than 3 characters long. 2. Create a Perl script and test it with a web server and browser to do one of the following (your choice). a. receive the values of an HTML form for some questions about programming languages, and then (a) email the results posted by the user to your own mail account, and (b) print a nice message that somehow "evaluates" the user's answers telling them something like "right", "wrong", "I agree", "you have good taste", etc., according to what they answered in the form. b. receive a URL from the user via an HTML form, and then retrieve the document from that location and run the algorithm of exercise 1 (inverted index) on it, and finally format the results as HTML that is returned to the user. c. implement a "vote counter" that lets a web surfer vote on some set of candidates. It should (a) update the count for that candidate selected, and display the result. It should also refuse to accept a second vote for the same election from the same IP address. (Use a browser cookie that names the group of candidates, i.e., the particular election that the user has voted in). It should be possible for the same web page to have multiple elections (e.g., one for president, one for Teamwork: Do your work individually on this assignment. Resources: The recommended platform for this assignment is Fiji. Each of you should have an account on this machine.
<urn:uuid:d67493df-b69d-47fb-861a-713e60fbc80d>
CC-MAIN-2013-20
http://www.cs.washington.edu/education/courses/cse341/99sp/assign/assignment-P1.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.907941
758
3.078125
3
Copyright © 2007 Dorling Kindersley Light has different wavelengths, which we see as colors. The range of wavelengths we see is called the visible spectrum. We separate the colors of the spectrum by DISPERSION. Light waves are just one type of electromagnetic wave. They belong to an electromagnetic spectrum that includes radio waves, X-rays, and gamma rays. The visible spectrum is the only part the human eye can see. To our eyes, the colors in the visible spectrum range from violet at one end to red at the other. The light-sensitive cells in the human eye react to just three types of light: red, green, and blue wavelengths. These are the three primary light colors. If all three types of wavelength enter the eye with equal strength, we see white light. When just red and green light are present, we see the mixture as yellow. Different wavelengths of light blend to produce millions of shades of color. The human eye is able to pick out over 10 million of them—some of which can be shown by a COLOR TREE. The amount of color we see depends on how much light there is. In dim light, we see no colors at all, only shades of gray. When white light shines through a specially shaped piece of glass called a prism, it is separated into its different wavelengths by dispersion. The wavelengths show up as a range of colors called a spectrum. English scientist Isaac Newton first used a prism to disperse sunlight in the late 1600s. Rainbows appear when there are water droplets in the atmosphere and bright sunshine at the same time. The droplets act like tiny prisms, refracting and reflecting the sunlight, and dispersing it into the colors of the spectrum. To see a rainbow, you have to be standing at a particular angle to the water droplets and the Sun. A color tree is one way of grading or classifying colors. Using a color tree, it is possible to describe and then match a particular shade of color (of paint or fabric, for example).
<urn:uuid:a2fbd6ec-2579-4173-9905-b0b5cd3cbcab>
CC-MAIN-2013-20
http://www.factmonster.com/dk/encyclopedia/color.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.917922
419
4.25
4
Biological invasions: a growing threat to biodiversity 07 May 2012 | News story Biological invasions: a growing threat to biodiversity, human health and food security. Policy recommendations for the Rio+20 process drafted by IUCN SSC Invasive Species Specialist Group and IUCN's Invasive Species Initiative. Planet Under Pressure 2012 was the largest gathering of global change scientists leading up to the United Nations Conference on Sustainable Development (Rio+20) with a total of 3,018 delegates at the conference venue and over 3,500 that attended virtually via live webstreaming. The first State of the Planet Declaration was issued at the conference. Following the conference and declaration several ISSG members were concerned with the limited attention being paid to the issue of biological invasions and invasive alien species in the Rio+20 process. Members proposed the development and submission of a policy paper highlighting the growing threat of biological invasions on biodiversity, human health and food security for the Rio+20 process. After extensive consultation with the membership, the ISSG with the IUCN's Invasive Species Intitiative (ISI) developed and submitted a policy brief related to biologival invasions and invasive alien species to the IUCN. This brief will be included in the IUCN documentation for Rio+20 and text be reflected in the umbrella position paper (which will form the basis of IUCN’s statement to the Rio+20 conference). The Rio+20 Conference will take place in Rio de Janeiro, Brazil, from is June 20 to 22, 2012, in order to mark the 20th anniversary of the United Nations Conference on Environment and Development, also called the “Rio Earth Summit”. The conference will focus on two themes: 1) a Green Economy in the context of sustainable development and poverty eradication; and 2) the Institutional Framework for Sustainable Development.
<urn:uuid:12135c03-6130-4c1c-b83a-d6e03d43a44c>
CC-MAIN-2013-20
http://www.iucn.org/fr/nouvelles_homepage/nouvelles_par_theme/politique_mondiale_news/?9767/Biological-invasions-a-growing-threat-to-biodiversity
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.914797
387
2.734375
3
People who suffer from allergies may have a lower risk of brain cancer, researchers say. In a study at Ohio State University, blood samples from brain tumour patients were compared with those from people of a similar age and sex who were cancer-free. The researchers found people affected by allergies to pollen, grass, pets and dust mites had a lower rate of glioblastoma, one of the most common types of brain cancer. The results, published in the Journal of the National Cancer Institute, showed women with allergy-related antibodies were 54 per cent less likely to get a brain tumour, with men 20 per cent less likely. Its thought that because the immune system is on red alert much of the time with an allergy, this suppresses the growth of cancerous cells in the brain. Last year, a Danish study found people with common contact allergies, such as to nickel, had a lower risk of developing certain cancers such as breast and skin cancer. - DAILY MAIL
<urn:uuid:165c28e1-1636-4df7-aa39-1c6e17e4b220>
CC-MAIN-2013-20
http://www.nzherald.co.nz/health-wellbeing/news/article.cfm?c_id=1501238&objectid=10840912
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.977728
205
3.078125
3
The waste characteristics factor category in the ground water pathway is made up of two components: the toxicity/mobility of the most hazardous substance associated with the site and the hazardous waste quantity at the site. The most hazardous substance at a site is identified as the hazardous substance receiving the highest toxicity/mobility value factor. The hazardous waste quantity factor is evaluated as discussed in Section 5. - The Most Hazardous Substance at a Site - Toxicity/Mobility Factor - Hazardous Waste Quantity - Waste Characteristics Factor Category Value The most hazardous substance at a site is identified by calculating the toxicity/mobility factor all eligible hazardous substances associated with the site. Eligible hazardous substances consist of those hazardous substances available to migrate from the sources at the site to ground water and include: - All hazardous substances found in ground water observed releases at the site. - All hazardous substances found in a source with a non-zero ground water containment factor value. - All hazardous substances assigned to the "unallocated source." As noted in Section 5, the determination of the toxicity value for a hazardous substance is complex, involving assessments of a substances relative propensity to cause cancer and/or non-cancer adverse health effects (such as liver damage or death). Toxicity values for nearly all hazardous substances of interest can be found in SCDM. If a value is needed for a hazardous substance not found in SCDM, then EPA should be consulted to determine the appropriate course of action. Toxicity factor values should not be independently calculated. Ground water mobility is a measure of the propensity of a substance to migrate through an aquifer and reach targets. The mobility factor value assigned to a hazardous substance is based on generic and site-specific considerations. The mobility factor for any substance found in any ground water observed release at the site is assigned a value of 1. Otherwise, the substance mobility factor value is assigned based on the water solubility and distribution coefficient of the substance. As with the toxicity factor, the rules for determining the mobility factor values for a hazardous mobility factor values for nearly all hazardous substances of interest can be found in SCDM. If a value is needed for a hazardous substance not found in SCDM, then EPA should be consulted to determine the appropriate course of action. Mobility factor values should not be independently calculated. The SCDM mobility factor values may not be directly applicable at a site because the HRS contains special, site-specific provisions to be applied in certain situations: - Substances found in ground water observed releases are assigned a mobility factor value of 1. - Hazardous substance deposited or currently present in the source as a liquid are essentially assigned the maximum water solubility value. - Hazardous substances are essentially assigned the minimum distribution coefficient value when evaluating karst aquifers. - A default value of 0.002 is used if none of the hazardous substances eligible to be evaluated can be assigned a mobility factor value. SCDM contains water solubility and distribution coefficient values for many hazardous substances for use in these situations. It is important to note the sequential effect that demonstrating an observed release has on the ground water pathway score. If an observed release is demonstrated then: - the maximum likelihood of release value of 550 is assigned (a value 10 percent higher than the maximum potential to release value) - all hazardous substances meeting the observed release criteria are assigned the maximum mobility factor value. Further impacts of observed release demonstrations on target evaluations will be discussed later. With toxicity and mobility values calculated, the toxicity/mobility factor is obtained by referring to Table 3-9 of the HRS Rule. The most hazardous substance for the ground water pathway is the one with the highest toxicity/mobility value. This is the value used to score the aquifer (the value is entered in line 4 of Table 3-1 of the HRS Rule). Note that many substances that are highly toxic have low values for ground water mobility. For example, PCBs, which are highly toxic (toxicity value of 10,000), sorb easily and have a mobility value of 0.0001, even when in liquid state. This explains why observed releases of PCBs to ground water have been documented at only a few sites. What would happen to the toxicity/mobility value if PCBs were detected in a ground water sample at observed release criteria? ANSWER The hazardous waste quantity (HWQ) — the second component of the waste characteristics factor value — is evaluated based only on those sources that have a ground water containment value greater than zero. The waste characteristics factor category value is determined in a straight-forward manner by first multiplying the toxicity/mobility factor value by the hazardous waste quantity value, subject to a maximum value of 108. The waste characteristics factor category value is then determined using this product as specified in HRS Rule Table 2-7. The maximum value achievable in the ground water pathway is 100. In the absence of both an observed release and karst terrain, the waste characteristics factor category value remains the same for all aquifers underlying the site. The above diagram summarizes the steps taken to arrive at the waste characteristics value. Consider the site shown above. As illustrated, the site records indicate that 13 drums containing waste chlordane and arsenical pesticides were deposited in an unlined pit. The SCDM toxicity values for chlordane and arsenic: - arsenic: 10,000 - chlordane: 10,000 The SCDM ground water mobility values for chlordane and arsenic: - arsenic: 0.01 - chlordane: 0.01 [Slide 1 of 3] [Home]
<urn:uuid:a7219d10-3abf-4e7f-a3dd-aa6c45e66f3b>
CC-MAIN-2013-20
http://epa.gov/superfund/training/hrstrain/htmain/10wchar.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.892162
1,160
3.078125
3
Friday, December 15, 2006 Welcome Message Converted Sunday, December 03, 2006 The Cascajal Block and the Lost Continent of Mu Cascajal Block and Symbols In September 2006, Science Magazine published a paper entitled "Oldest Writing In the New World". The item of note in the article and most widely reported was that the block showed what appeared to be a sample of the oldest writing ever found in the Americas. As this discovery took place in a location where William Niven had unearthed artifacts and relics in the early 20th century and some of these items were subsequently used as evidence by James Churchward of ancient civilizations - then this new discovery might have some bearing on my research. The questions I want answered are: 1. How close were the discoveries by Niven to the Cascajal Block? 2. Does this ancient script have similar characters to the symbols discussed in the books by James Churchward ? 3. Does this new discovery prove, disprove, or have no bearing on the theories of James Churchward ? In answering questions about William Niven and his digs in Mexico - I turned to the biography of William Niven entitled: "Buried Cities, Forgotten Gods."[Wicks and Harrison; Texas Tech University Press] Niven's Tablet #1584 It is true that the 1926 'Lost Continent of Mu, Motherland of Man' had an entire chapter entitled "Niven's Buried Cities," but James' first contact with Niven was after the publication of the first book (September 19, 1927.) Their relationship, as determined from correspondence, is covered in "Buried Cities, Forgotten Gods." In 1928, Niven sent rubbings of some 2500 tablets from the Valley of Mexico to James. These were translated by James and found their way into the 1931 "Lost Continent of Mu" and the 1931, "Children of Mu." I have seen references to these rubbings having been published, however I am not aware of the location of the original or reproductions thereof. Whether or not James' interpretation was valid is left to different research, the preceding is background information to help calculate the distance between discoveries. One location cited in the Mexico Valley was Atzcapotzalco (19.4889N 99.1836W.) The Cascajal Block was found in Cascajal, Veracruz-Llave (17.9500N 95.1167W). Using an online latitude/longitude calculator (http://jan.ucc.nau.edu/~cvm/latlongdist.html) and would place them 286.9 statute miles apart. In answering question #2, I perused the 1926 Lost Continent of Mu, Motherland of Man, a 1969 reprint of the 1931 Lost Continent of Mu, a 1988 reprint of the 1959 printing of Children of Mu (originally published in 1931,) and the 1988 reprint of the 1960 printing of the Sacred Symbols of Mu (originally published in 1933.) I could find no direct matches between the symbols on the Cascajal Block and any symbol in the aforementioned books. I looked at the symbols contained in the creation myth, I looked at the tables of primitive symbols and hieratic characters and I looked at the depictions of rock carvings. Cascajal Block Symbol #53 Lost Continent of Mu (1931) page 53 The closest a symbol came to resembling a symbol in James Churchward's books is #3[& 16, 45, 53, 59.] The symbol at the top is referenced in numerous places as "Lands of the West" (Lost Continent of Mu, Motherland of Man page 53). The oblong shape beneath is not part of Churchward's theories. Michael Everson, on his website analysis of the Cascajal Block (http://www.evertype.com/gram/olmec.html) identifies the symbol as a pineapple, which is as good an explanation given the other symbols on the tablet. Given that 28 (or 30 as interpretated by Michael Everson) unique symbols found on the Cascajal Block only one remotely resembles a symbol from an ancient civilization, then I would have to answer that there is no correlation between the symbols on the Cascajal Block and symbols discussed by James Churchward. Does this discovery prove, disprove or have no bearing on the theories of James Churchward ? Having no matches between the Cascajal Block symbols and those found in James Churchward's works does not prove or disprove anything, nor does the distances between where the objects were found. It might be said that the age of the artifacts could be different and that could account for the differences. Unfortunately, the Cascajal Block has no bearing on proving or disproving the theories of James Churchward, but still is an object to be studied in the hopes that one day, its secrets will be revealed to the world. - Stone Pages Archaeo News Printed Coverage - Stone Pages Archaeo News Audio Coverage - Archaeology Channel Printed Coverage - Archaeology Channel Audio Coverage - NPR Coverage - LiveScience Coverage - Research Paper (pdf format)
<urn:uuid:8fc9d110-1416-48fa-aa9b-0390723b05c9>
CC-MAIN-2013-20
http://jameschurchwardsmu.blogspot.com/2006_12_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939502
1,070
2.703125
3
HAMBURG, Germany, June 19, 2012 — The European X-ray Free-Electron Laser (European XFEL) international research facility has overcome one of its most difficult building phases: completion of a 3.6-mile-long network of tunnels. By 2015, laserlike x-ray flashes that enable new insights into the nanoworld will be generated in the tunnels by scientists worldwide. “Electrons will fly with almost the speed of light from DESY [Deutsches Elektronen Synchrotron] in Hamburg to Osdorf,” said professor Robert Feidenhans’l, chairman of the European XFEL Council. Tunnel XTL on Dec. 6, 2011. (Image: ©European XFEL) Tunnel construction began in July 2010 with the tunnel boring machine TULA (TUnnel for LAser), which concluded excavation in August 2011. A second boring machine was used from January 2011 right up until the last section of the five photon tunnels leading into the experiment hall was completed. (See: Europe signs on to big x-ray facility The accelerator tunnel, the longest of the accelerator facility, runs in a straight line for 1.3 mi through Hamburg’s underground. It branches out into five photon tunnels, which lead into the future experiment hall. Undulator tunnels, which contain special magnet structures that slalom accelerated and bundled electrons — inducing them to emit intense flashes of x-ray radiation — are set between the accelerator and photon tunnels. To generate the extremely short and intense x-ray flashes, bunches of high-energy electrons are directed through special arrangements of magnets (undulators). (Image: ©European XFEL, Design: Marc Hermann, tricklabor) The tunnels will be equipped with safety devices and infrastructure before the main components of the facility are installed. These include the superconducting electron linear accelerator, whose development, installation and operation will be conducted by DESY, and the photon tunnels, undulator lines and experiment hall, whose equipment and instrument installation will be led by the European XFEL. It is expected that scientists will be able to produce x-ray radiation for the first time here in 2015, generating up to 27,000 flashes per second — nearly 10 sextillion times brighter than the sun. More than 400 participants, including guests from politics and science as well as staff from collaborating companies attended the June 14 ceremony upon completion of the tunnel. (Image: ©European XFEL) “We expect great success for the life sciences, material sciences and nanotechnology when research at the European XFEL begins,” said Dr. Beatrix Vierkorn-Rudolph, head of the Subsection for Large Facilities, Energy and Basic Research, as well as the ESFRI Special Task of the German Federal Ministry of Education and Research. “The just-completed tunnel connects not only Hamburg and Schleswig-Holstein, but also scientists throughout Europe and beyond.” For more information, visit: www.xfel.eu
<urn:uuid:73ad18f4-6105-4ecd-bd8f-2ffc741757f4>
CC-MAIN-2013-20
http://photonics.com/Article.aspx?AID=51173
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931087
643
2.90625
3
| Central Route and Peripheral Route to Persuasion Persuasion is a topic in social psychology. People may be persuaded in different ways. Petty and Cacioppo (1981) suggested that there are two different ways or routes to persuasion: the central route and the peripheral route. The Central Route to Persuasion The central route to persuasion involves being persuaded by the arguments or the content of the message. For example, after hearing a political debate you may decide to vote for a candidate because you found the candidates views and arguments very convincing. The Peripheral Route to Persuasion The peripheral route to persuasion involves being persuaded in a manner that is not based on the arguments or the message content. For example, after reading a political debate you may decide to vote for a candidate because you like the sound of the person's voice, or the person went to the same university as you did. The peripheral route can involve using superficial cues such as the attractiveness of the speaker. Petty, R. E., & Cacioppo, J. T. (1981). Attitudes and persuasion: Classic and contemporary approaches. Dubuque, Iowa: Wm. C. Brown Company Publishers.
<urn:uuid:d0e2e310-6f34-443a-a996-1b41b5f647a7>
CC-MAIN-2013-20
http://www.psychologyandsociety.com/routestopersuasion.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00007-ip-10-60-113-184.ec2.internal.warc.gz
en
0.911669
252
3.921875
4
Used in: cache-config The caching-schemes element defines a series of cache scheme elements. Each cache scheme defines a type of cache, for instance a database backed partitioned cache, or a local cache with an LRU eviction policy. Scheme types are bound to actual caches using cache-scheme-mappings. Each of the cache scheme element types is used to describe a different type of cache, for instance distributed, versus replicated. Multiple instances of the same type may be defined so long as each has a unique scheme-name. For example the following defines two different distributed schemes: Some caching scheme types contain nested scheme definitions. For instance in the above example the distributed schemes include a nested scheme defintion describing their backing map. Caching schemes can be defined by specifying all the elements required for a given scheme type, or by inheriting from another named scheme of the same type, and selectively overriding specific values. Scheme inheritance is accomplished by including a <scheme-ref> element in the inheriting scheme containing the scheme-name of the scheme to inherit from. The following two configurations will produce equivalent "DistributedInMemoryCache" scheme defintions: Please note that while the first is somewhat more compact, the second offers the ability to easily resuse the "LocalSizeLimited" scheme within multiple schemes. The following example demonstrates multiple schemes reusing the same "LocalSizeLimited" base defintion, but the second imposes a diffrent expiry-delay. The following table describes the different types of schemes you can define within the caching-schemes element. |<local-scheme>||Optional||Defines a cache scheme which provides on-heap cache storage.| |<external-scheme>||Optional||Defines a cache scheme which provides off-heap cache storage, for instance on disk.| |<paged-external-scheme>||Optional||Defines a cache scheme which provides off-heap cache storage, that is size-limited via time based paging.| |<distributed-scheme>||Optional||Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes.| |<replicated-scheme>||Optional||Defines a cache scheme where each cache entry is stored on all cluster nodes.| |<optimistic-scheme>||Optional||Defines a replicated cache scheme which uses optimistic rather then pessimistic locking.| |<near-scheme>||Optional||Defines a two tier cache scheme which consists of a fast local front-tier cache of a much larger back-tier cache.| |<versioned-near-scheme>||Optional||Defines a near-scheme which uses object versioning to ensure coherence between the front and back tiers.| |<overflow-scheme>||Optional||Defines a two tier cache scheme where entries evicted from a size-limited front-tier overflow and are stored in a much larger back-tier cache.| |<invocation-scheme>||Optional||Defines an invocation service which can be used for performing custom operations in parallel across cluster nodes.| |<read-write-backing-map-scheme>||Optional||Defines a backing map scheme which provides a cache of a persistent store.| |<versioned-backing-map-scheme>||Optional||Defines a backing map scheme which utilizes object versioning to determine what updates need to be written to the persistent store.| |<remote-cache-scheme>||Optional||Defines a cache scheme that enables caches to be accessed from outside a Coherence cluster via Coherence*Extend.| |<class-scheme>||Optional|| Defines a cache scheme using a custom cache implementation. Any custom implementation must implement the java.util.Map interface, and include a zero-parameter public constructor. Additionally if the contents of the Map can be modified by anything other than the CacheService itself (e.g. if the Map automatically expires its entries periodically or size-limits its contents), then the returned object must implement the com.tangosol.util.ObservableMap interface. |<disk-scheme>||Optional||Note: As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the external-scheme and paged-external-scheme configuration elements.|
<urn:uuid:b722194f-5046-499b-ba12-c686a12565c7>
CC-MAIN-2013-20
http://docs.oracle.com/cd/E14447_01/coh.330/coh33ug/cachingschemes.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.760478
938
2.515625
3
Assent of Children in Research The IRBMED has developed an assent template for researchers to use in developing an assent document appropriate for children ages 10 to 14. It is available at http://www.med.umich.edu/irbmed/ict.htm. Below are frequently asked questions about assent. More information is available on the IRBMED website at Why must IRBs and researchers consider assent? Federal regulations require, “The IRB shall determine that adequate provisions are made for soliciting the assent of the children, when in the judgment of the IRB the children are capable of providing assent.” The IRB must make the determination for each protocol that includes children. The IRB must require child assent unless it can be appropriately waived. 45 §46.408 & 21 § 50.55 What is assent? “Assent” means a child’s affirmative agreement to participate in research. Mere failure to object should not, absent affirmative agreement, be construed as assent. This means the child must actively show his or her willingness to participate in the research, rather than just complying with directions to participate and not resisting in any way. A child’s authority to say “No” to participating in research. 45 CFR 46.402(b), 21 CFR 50.3(n) What is an assent process? It is a reasonable effort to enable the child to understand, to the degree they are capable, what their participation in research would involve. It is not identical to the consent process. Parents retain responsibility to protect the child from risks. The assent should focus on elements important to the child (e.g. pain, time). Foremost, when assent is required by the IRB, parents and researchers must adhere to the child’s decision. What if the parent and child disagree? “If a child is capable of assent and the IRB requires that assent be sought, it must be obtained before the child can participate in the research activity. Thus, if the child dissents from participating in research, even if his or her parents or guardian have granted permission, the child’s decision prevails,” HHS Office of Human Research Protections (OHRP). How does the IRB determine when children are capable to assent? Regulations direct IRBs to consider: - Age and maturity of the children - Psychological state of the children - Nature of the proposed research activity Regulations give IRBs flexibility in deciding which children an investigator must assent. A judgment may be made for all children to be involved in research under a particular protocol, or for each child. The IRB’s outcome should be suited/delineated for the study at hand. Click here for guidelines that the IRB and researchers may use to determine whether to require assent. When can the IRB waive assent? There are 3 waiver options: 1. Capabilities of some or all children is so limited they cannot be consulted. 2. Study offers important benefit unavailable outside of the research. - When the study offers a treatment that is thought to be a better option than those currently available, or it offers the only alternative. (NCI) - Criterion of ‘potential direct benefit’ is not sufficient to grant this waiver. 3. Assent can also be waived under the same criteria as a consent waiver: - the study is minimal risk - subjects’ rights and welfare aren’t adversely affected - assent is not practicable (for reasons other than children’s capabilities) - when appropriate, the subjects will be provided pertinent information How do researchers communicate assent plans to the IRB?In section 10.2.2., eResearch prompts researchers to describe the process to seek and obtain informed assent for children and parental consent/permission (e.g., setting, timing, personnel involved, arrangements for answering subject questions before and after the consent is signed).
<urn:uuid:4b3d3637-9c1d-4272-ab5f-5af9b031c9ba>
CC-MAIN-2013-20
http://med.umich.edu/irbmed/guidance/assent.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928705
844
3.03125
3
Structs (C# Programming Guide) Structs are defined using the struct keyword, for example: Structs share almost all the same syntax as classes, although structs are more limited than classes: Within a struct declaration, fields cannot be initialized unless they are declared as const or static. A struct may not declare a default constructor —a constructor with no parameters — or a destructor. Copies of structs are created and destroyed automatically by the compiler, so a default constructor and destructor are unnecessary. In effect, the compiler implements the default constructor by assigning all the fields of their default values (see Default Values Table). Structs cannot inherit from classes or other structs. Structs are value types — when an object is created from a struct and assigned to a variable, the variable contains the entire value of the struct. When a variable containing a struct is copied, all of the data is copied, and any modification to the new copy does not change the data for the old copy. Because structs do not use references, they do not have identity — there is no way to distinguish between two instances of a value type with the same data. All value types in C# inherently derive from, which inherits from . Value types can be converted to reference types by the compiler in a process known as boxing. For more information, see Boxing and Unboxing. Structs have the following properties: Structs are value types while classes are reference types. Unlike classes, structs can be instantiated without using a new operator. Structs can declare constructors, but they must take parameters. A struct cannot inherit from another struct or class, and it cannot be the base of a class. All structs inherit directly from System.ValueType, which inherits from System.Object. A struct can implement interfaces. For more information:
<urn:uuid:be98057e-e601-4a73-bf04-c30b2c2ae018>
CC-MAIN-2013-20
http://msdn.microsoft.com/en-us/library/saxz13w4(v=vs.80).aspx
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.891775
384
3.65625
4
MINNEAPOLIS - Sinkholes can swallow up earth in an instant and cause damage or even worse. A sinkhole in Florida claimed the life of a 36-year-old man on Thursday. "They can be unpredictable," said Greg Brick, author of the book "Subterranean Twin Cities." "It's why we have an entire profession, geotechnical engineering." Sinkholes happen in Minnesota, too. Some occur naturally like the massive indention in the Longfellow neighborhood of Minneapolis; a sinkhole 30 feet deep which occurred 3,000 years ago. Others can be caused by broken water mains. But most in Minnesota are found in the southeast corner where there have been hundreds. "They tend to be found in what are known as karst areas where you have the limestone bedrock that can be easily dissolved by water," says Brick. In the Twin Cities, you have what geologists call the "classic layer cake" of limestone then St. Peter sandstone. To learn more about the bedrock in Minnesota, visit mngs.umn.edu. (Copyright 2013 KARE. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.)
<urn:uuid:bf4bd2c1-4856-4889-a99f-3c75fb086946>
CC-MAIN-2013-20
http://www.kare11.com/rss/article/1013811/14/Still-rare-but-sinkholes-happen-in-Minn-
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.942436
250
2.796875
3
Personal Hygiene in America Ever wonder what life was like before running water and today's endless assortment of toiletries? The plumbing and products we take for granted were nonexistent in colonial days, and this absence was glaringly apparent to visitors. Early travelers to this country noted the overall unclean condition of Americans--as one English tourist remarked, "filthy, bordering on the beastly." After several centuries, much progress has been made and personal hygiene for Americans has reached an art form. Colonists viewed bathing as more curative in nature than hygienic and therefore bathed infrequently in rivers and streams and occasionally in public baths and outdoor bathhouses. With the advent of the 19th century, Americans slowly began to bathe more. New furniture forms and accessories, such as tin tubs, washstands, and wash basins, were designed for use in one's home. These were located anywhere throughout the home, but were primarily found in kitchens and bedrooms. Soap was mainly used for laundry and was often made at home, as evidenced by numerous homemade recipes. By the mid 19th century, Americans started using soap to clean their skin, and manufacturers quickly met the dual demand by producing a variety of toilet and laundry soaps. It logically followed that as Americans washed their bodies more often, they also became concerned with washing their clothes. Every part of the body was eventually scrutinized, not just the skin. Early on, poor dental hygiene caused a number of ear, nose, and throat complaints. To remedy these maladies, Americans concocted recipes for homemade tooth powder and sometimes used twigs and table salt to brush their teeth before toothpaste and toothbrushes were sold. As new dental products were introduced, so were new hair care products and styles. At the end of the 19th century, American men came to view their bushy beards and mutton chops as just another place to harbor germs. A new business look of less facial hair for men became the fashion. The importance of etiquette books in spreading advice on cleanliness to Americans cannot be overlooked. Washing was once considered a privilege of the upper class. However, as these books became more accessible, the growing middle class used them as a blueprint in their quest for gentility and upper-class status. The gospel of hygiene then trickled down to the lower classes and immigrants in the late 1800s, when reformers taught them the rudiments of cleanliness in order to improve their health and assimilate them into the American way of life. Beginning in the middle of the 19th century, large cities across America undertook public works projects to build municipal water and sewer lines. These improvements in plumbing and sanitation necessitated that fixtures be attached to a maze of pipes. A separate room was now required to house these fixtures, making portable containers and accessories obsolete. As bathrooms were gradually added to homes, new innovations and inventions also offered a wide range of options, including pumping one's own shower. Styles in bathroom décor also changed over time. At first, fixtures were fashioned in wood with elaborate marquetry to imitate furniture. Toward the end of the century, with the emphasis on hygiene reaching new heights and scientists preaching germ theory, the bathroom closely resembled a laboratory with white, washable porcelain surfaces. Color was later added to bathrooms as they became more commonplace to personalize and soften the earlier scientific feel. The ritual of personal hygiene was now entrenched in the routine of American life.
<urn:uuid:8edb54f9-65db-48fd-b4f5-e6bcbaeb983e>
CC-MAIN-2013-20
http://www.winterthur.org/?p=735
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.982583
708
3.25
3
Herbal Alternatives to Drugs in Pain Management, Part II By John Chen, PhD, PharmD, OMD, LAc Editor's note: Part I of Dr. Chen's column appeared in the May issue of Acupuncture Today. Traditional Chinese Medicine According to traditional Chinese medicine (TCM), the fundamental etiology of pain is qi stagnation, blood stagnation, or both. It is often said that where there is pain, there is stagnation; where there is stagnation, there is pain. Therefore, effective pain relief most often requires the use of herbs that activate qi and blood, removing stagnation and thus resolving the cause of pain. As is true in all treatment involving Chinese medicinal herbs, they are most commonly prescribed in carefully-combined formulas (rather than singly) that directly address the causes and/or symptoms of the imbalance and treat without creating unwanted side-effects or complications. In addition to treating qi and blood stagnation, successful treatment of pain also requires careful differential diagnosis of pain. The three main diagnostic keys are the location of the discomfort, the type of pain and the cause of pain. Location refers to the exact part of the body that is affected: upper body, lower body, external musculoskeletal muscle, internal smooth muscle and so on. The type of pain refers to the characteristics of the patient's pain, such as a sharp, stabbing pain or dull aching, pain at a fixed location as opposed to migratory pain, pain helped by cold or by heat, and other distinguishing characteristics. Lastly, identifying the cause of pain helps the practitioner differentiate soft tissue injuries from structural damage. For example, leg spasms and cramps often involve only soft tissue, while an acute sprained ankle is often accompanied by structural damage. Accurate evaluation of these three criteria is crucial for greatly enhanced diagnostic accuracy and successful relief for the patient. Herbal Treatment for Headache Headache pain may arise from internal or external causes such as invasion of wind, cold, heat, dampness, dryness, summer heat, accumulation of phlegm, and other pathogens in addition to qi and blood stagnation. Headache pain may represent excess or deficient conditions and may affect the occiput, vertex, sinuses and orbital region. It may also present with a complex of locations/symptoms, such as in migraine. Corydalis (yan hu suo) is one of the strongest herbs available to relieve pain and reduce inflammation. Research studies have shown it to work directly on the central nervous system with analgesic effects comparable to those of morphine and codeine.1,2 Another herb, pueraria root (ge gen), has demonstrated remarkable effectiveness in relieving headache pain.3,4,5 Other herbs have proved effective in relieving various types of headaches, including (but not limited to) migraine, vertex, sinus and orbital headache.6,7,8 Many classical Chinese herbal formulas are also commonly used to treat headache. Cnidium & tea formula (chuan xiong cha tiao san) treats headaches due to wind cold. Evodia combination (wu zhu yu tang) relieves vertex headache due to cold and is also used to treat migraine. Coptis, phellodendron & mint formula (huang lian shang qing wan) addresses headache caused by heat. Notopterygium & tuhuo combination (qiang huo sheng shi tang) treats headache due to wind and dampness. Gastrodia & gambir combination (tian ma gou teng yin) relieves headache secondary to liver yang rising. Eucommia & rehmannia formula (you gui wan) tonifies kidney deficiency to relieve headache. Tangkuei & ginseng eight combination (ba zhen tang) tonifies qi and blood deficiency to relieve headache. Pinellia & gastrodia combination (ban xia bai zhu tian ma tang) relieves headache due to phlegm stagnation. Herbal Treatment for Neck and Shoulder Pain Neck and shoulder injuries can be divided into two major categories: acute and chronic. Acute injuries are generally characterized by redness, swelling, inflammation and sharp pain. Chronic injuries are generally characterized by stiffness, numbness, discomfort and dull pain. Acute neck and shoulder problems are often caused by accidents, whiplash, improper sleeping or reading postures, and similar traumas. In addition to pain, redness, swelling and/or inflammation are sometimes present. Treatment consists of reducing pain, swelling and muscle spasms. Herbal formulas are designed to dispel painful symptoms while supporting the healing process. Strong analgesic herbs like corydalis (yan hu suo) are combined with anti-spasmodic herbs and blood-invigorating herbs to alleviate pain, promote blood circulation and open the meridian channels. Chronic neck and shoulder problems are characterized by pain, numbness, stiffness, discomfort, limited mobility, slow recovery or continuing deterioration. Effective treatment must focus on activating qi and blood circulation, opening the channels and collaterals, and nourishing the muscles and tendons. Corydalis is a main herb in the treatment of both acute and chronic neck and shoulder problems. In addition to having strong analgesic properties, it also has a distinctive facility for treating both acute and chronic cases of inflammation.9 Corydalis also protects against NSAID-induced gastric and duodenal ulcers by reducing gastric acid secretion.10 Classical formulas that treat neck and shoulder pain include the following specific applications. Lindera formula (wu yao shun qi san) is formulated for shoulder pain, while pueraria combination (ge gen tang) is more specific for stiff neck due to cold. Atractylodes & arisaema combination (er zhu tang) relieves deficient-type neck and shoulder disorders but may not have strong analgesic effects. Herbal Treatment for Back Pain Similar to neck and shoulder pain, back pain can be divided into two major categories: acute and chronic, with many of the key symptoms described in the categories above. Many classic formulas tonify the kidney to relieve back pain and weakness. Tuhuo & loranthus combination (du huo ji sheng tang) eliminates wind and dampness and has a rapid onset to relieve acute back pain. Herbal formulas that tonify the kidney tend to be slower in action and are more suitable for chronic back pain. Cyathula & rehmannia formula (zuo gui wan) is more specific to address kidney yin deficiency; eucommia & rehmannia formula (you gui wan) focuses more specifically on kidney yang deficiency; and rehmannia eight formula (ba wei di huang wan) tonifies both kidney yin and yang. Herbal Treatment for Musculoskeletal Pain and Painful Obstruction (Bi) Syndrome Musculoskeletal pain is often classified as painful obstruction (bi) syndrome. Though there are many causes of this syndrome, cold and heat are the most common etiologies. Cold-type musculoskeletal pain is characterized by stiffness, pain and limited range of motion of the joints. In Western terms, cold conditions are associated with chronic arthritis and arthralgias such as osteoarthritis and fibromyalgia. Heat-type musculoskeletal pain is characterized by redness, swelling, pain and/or inflammation of the muscles and joints. Patients typically present with muscle cramping and spasms. From a Western perspective, these patients have acute musculoskeletal disorders, typically involving inflammation of the muscles, bursae, tendons and ligaments. Gentiana macrophylla root (qin jiao), a popular ingredient in some remedies, has been shown to have anti-inflammatory activities comparable to those of aspirin (salicylic acid).4 Aconite tsao wu (cao wu) and aconite wu tou (chuan wu) Other herbs have demonstrated exceptional anti-rheumatic, anti-inflammatory, analgesic and anti-pyretic functions.13,14 White peony (bai shao) and licorice (gan cao) have demonstrated remarkable properties in relieving spasms, cramps and pain of skeletal and smooth muscles. Clinical applications include dysmenorrhea,3 musculoskeletal disorders,15 trigeminal pain,16 muscle spasms and twitching in the facial region,17 pain in the lower back and legs,18 abdominal pain and cramps due to intestinal parasites,19 and epigastric and abdominal pain.20 If there are complications to the musculoskeletal disorders described above, classical formulas offer treatment options for the patients. Cinnamon & anemarrhena combination (gui zhi shao yao zhi mu tang) treats musculoskeletal and joint pain due to wind heat. Cyathula & plantago formula (ji sheng shen qi wan) treats musculoskeletal and joint pain arising from cold. Coix combination (yi yi ren tang) treats musculoskeletal and joint pain caused by dampness. Tuhuo & astragalus combination (san bi tang) treats musculoskeletal and joint pain due to deficiency of qi and blood and weakness of the liver and kidney. If the etiology is unclear, notopterygium & turmeric combination (juan bi tang) may be used for relief of general musculoskeletal and joint pain. Herbal Treatment for Traumatic Injury Traumatic injury is characterized by severe qi and blood stagnation. Types of injuries include bruises, contusions, sprains, broken bones, surgical incisions and related internal trauma, and other physical traumas. For complications of traumatic injury, cinnamon & hoelen formula (gui zhi fu ling wan) is used to treat internal bleeding after traumatic or sports injuries; persica & rhubarb combination (tao ren cheng qi tang) is used to treat subcutaneous bleeding with severe swelling and pain. Pain is universally understood as a signal of disease and is the most common symptom that brings a patient to a physician.21 Western clinical medicine and traditional Oriental medicine share common goals of alleviating pain and eliminating the causes of pain; however, the philosophy and clinical approach to pain management in the two disciplines is very different. Generally speaking, Western drugs have immediate and reliable analgesic effects. Unfortunately, Western pharmaceuticals often cause serious short- and long-term side-effects. In addition, the chronic use of drugs, especially opioid analgesics, is strongly associated with addiction and negative social consequences and connotations. As a result, more and more patients are turning to herbal medicine as their primary, complementary or alternative treatment for pain. Herbal medicines definitely have outstanding analgesic, anti-inflammatory and anti-spasmodic functions and benefits. However, even though herbs and pharmaceutical drugs have many overlapping functions, they are not directly interchangeable or analogs of each other. The therapeutic effectiveness of herbal formulas is dependent on accurate diagnosis and careful prescription. When used properly, herbs are powerful alternatives to drugs for pain management. Pharmacology and Applications of Chinese Herbs 1983; 447. Zhu XZ. Development of natural products as drugs acting on central nervous system. Memorias do Instituto Oswaldo Cruz 86 (2):173-5, 191. Bensky D, et al. Chinese Herbal Medicine Materia Medica. Eastland Press 1993. Yeung HC. Handbook of Chinese Herbs. Institute of Chinese Medicine, 1983. Gao XX, et al. Effectiveness of pueraria root (ge gen) in treating migraine headache: a case report of 53 patients. Journal of TCM Internal Medicine (Zhong Hua Nei Ke Za Zi) 1977;6:326. Effectiveness of angelica (bai zhi) in treating occipital headache: a report of 73 cases. Air Force hospital in Hengyang, China. Modern Medical Journal (Xin Zhong Yi) 1976;3:128. Effectiveness of angelica (bai zhi) in treating chronic headache: a report of 62 cases. National Defense Hospital. Journal of Modern Medicine (Xin Yi Xue Yao Za Zi) 1976;8:35. Wang LS. Treatment of headache using xiong zhi shi gao tang: 50 cases. Shanxi Journal of Traditional Chinese Medicine 1985;10:447. Kubo M, et al. Anti-inflammatory activities of methanolic extract and alkaloidal components from corydalis tuber. Biol Pharm Bull February 1994;17(2):262-5. Study of Chinese Herbal Medicine 1976; p. 340. Military Hospital Unit #64. Effectiveness of aconite wu tou (chuan wu) in treating low back pain, a report with 225 patients. New Journal of Medicine and Pharmacology 1975;4:45. Zhang HT, et al. Treatment of frozen shoulders with aconite wu tou (chuan wu) and camphor (zhang nao). Shanghai Journal of Medicine and Pharmacology 1987;1:29. Liao JF. Evaluation with receptor binding assay on the water extracts of ten CNS-active Chinese herbal drugs. Proceedings of the National Science Council, Republic of China. Part B, Life Sciences. July 1995;19(3):151-8. Sun DH. Treatment of heat type of painful obstruction (Bi) syndrome with stephania (fang ji) in 120 patients. Shangdong Journal of Traditional Chinese Medicine 1987;6:21. Tan H, et al. Chemical components of decoction of radix paeoniae and radix glycyrrhizae. China Journal of Chinese Materia Medica Sep 1995;20(9):550-1, 576. Huang DD. Journal of Traditional Chinese Medicine 1983;11:9. Luo DP. Hunan Journal of Traditional Chinese Medicine 1989;2:7. Chen H. Yunnan Journal of Traditional Chinese Medicine 1990;4:15. Zhang RB. Jiangxu Journal of Traditional Chinese Medicine 1966;5:38-39. You JH. Guanxi Journal of Chinese Herbology 1987;5:5-6. Harrison TR, et al. Harrison's Principles of Internal Medicine, 14th edition, 1998. Click here for more information about John Chen, PhD, PharmD, OMD, LAc.
<urn:uuid:9f8e0e3e-0e45-4b8e-a7b4-8808436fba9f>
CC-MAIN-2013-20
http://www.acupuncturetoday.com/mpacms/at/article.php?id=27594
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.873163
3,050
2.828125
3
Learning at Laurel is often noisy, messy and fun! The cognitive growth of a Primary girl is exponential. The Primary School is an extraordinary place where girls excel - they set goals, ask questions, seek answers and establish the foundation they need to become life-long learners. An exceptionally dedicated and inspiring faculty has created an interdisciplinary curriculum, rich in content and transferable skills. Essential questions are explored, critical thinking is fostered and enduring understandings are formed. In our all-girls setting, girls develop early the capacity to take intellectual risks and to challenge themselves, building confidence and mastery along their way. Experiences with technology, math, science, engineering, creative writing, and quality literature require girls to collaborate and learn to identify their own strengths as they work together. Laurel girls develop cultural savvy, technological and linguistic skills, and a passion for discovering solutions to complicated problems that will allow them to compete and succeed in a global world. Girls learn by doing. Active learning allows girls to experience the world. At Laurel we know it is sometimes best to move away from desks and learn about the natural world directly. A fully interdisciplinary approach, incorporating environmental science, mathematics, social studies, language arts, the arts, group problem solving and leadership development characterizes Laurel at Butler Days (LAB Days) once a month for an entire grade. Great education is transformative and offers multiple points of view - at the Butler Campus, girls literally gain another perspective on the world. In an environment that emphasizes respect for the natural world, including sustainability and respect for all, Laurel girls learn to believe in themselves and in each other. Each Primary class spends time every month at Laurel's 140-acre Butler Campus.
<urn:uuid:8a78b7e7-f38f-427e-9344-c6de5fd143ca>
CC-MAIN-2013-20
http://www.laurelschool.com/academics/primarySchool.cfm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931202
340
2.546875
3
NATA'S Safety Guidelines Endorsed by American Academy of Pediatrics DALLAS (April 24, 2002) - Spring and lightning season go hand-in-hand, and due to an alarming rise in lightning casualties in recreational and sports settings, the National Athletic Trainers' Association (NATA) is re-issuing its safety guidelines. Recently endorsed by the American Academy of Pediatrics, NATA's Lightning Safety Guidelines provide protective measures for those who may be participating in outside recreation during a lightning storm. NATA's Lighting Safety Guidelines: - Establish a chain of command that identifies who is to make the call to remove individuals from the field. - Name a designated weather watcher. (A person who actively looks for the signs of threatening weather and notifies the chain of command if severe weather becomes dangerous.) - Have a means of monitoring local weather forecasts and warnings. - Designate a safe shelter for each venue. - Use the Flash-to-Bang count to determine when to go to safety. By the time the flash-to-bang count approaches thirty seconds all individuals should be already inside a safe structure. - Once activities have been suspended, wait at least thirty minutes following the last sound of thunder or lightning flash prior to resuming an activity or returning outdoors. - Avoid being the highest point in an open field, in contact with, or proximity to the highest point, as well as being on the open water. Do not take shelter under or near trees, flagpoles, or light poles. - Assume the lightning safe position (crouched on the ground, weight on the balls of the feet, feet together, head lowered, and ears covered) for individuals who feel their hair stand on end, skin tingle, or hear "crackling" noises. Do not lie flat on the ground. Observe the following basic first aid procedures in managing victims of a lightning strike: - Survey the scene for safety. - Activate local EMS. - Lightning victims do not 'carry a charge' and are safe to touch. - If necessary, move the victim with care to a safer location. - Evaluate airway, breathing, and circulation, and begin CPR if necessary. - Evaluate and treat for hypothermia, shock, fractures and/or burns. - All individuals have the right to leave an athletic site in order to seek a safe structure if the person feels in danger of impending lightning activity, without fear of repercussions or penalty from anyone. Recently, the American Academy of Pediatrics endorsed the NATA's Lightning Safety Guidelines. The Lightning Safety for Athletics and Recreation is available at: http://www.nata.org/publications/otherpub/lightning.pdf (Adobe Acrobat PDF format). The NATA, based in Dallas, Texas, is the voice for nearly 23,000 certified athletic trainers. The NATA's mission is to enhance the quality of health care for athletes and those engaged in physical activity, and to advance the profession of athletic training through education and research in the prevention, evaluation, management and rehabilitation of injuries.
<urn:uuid:a9b858ff-fcf5-49cc-b718-f2546bdbc222>
CC-MAIN-2013-20
http://www.nata.org/print/LightningSeasonSafety
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.891116
640
2.71875
3
Judicature Genes and Justice The Growing Impact of the New Genetics on the Courts November-December 1999 Vol 83(3) COMPLEX SCIENTIFIC EVIDENCE and the JURY by Robert D. Myers, Ronald S. Reinstein, and Gordon M. Griller DNA—deoxyribonucleic acid, the chemical molecule inside cells which carries biological information. DNA is a double stranded molecule held together by weak hydrogen bonds between complementary base pairs of nucleotides (Adenine and Thymine; Guanine and Cytosine). This molecule carries genetic information from parent to offspring. Genome—one copy of all the DNA found in each cell of an organism. The human genome is composed of three billion base pairs of DNA packaged as 23 chromosomes. There are two copies of each [chromosome] in a cell, one copy from each of your parents. The genome contains the organism's genes, the instructions for building that life form. These definitions of DNA and genome, two scientific concepts at the heart of this issue of Judicature, seem rather straightforward and simple. One may think that even without scientific background and learning, these concepts can be readily understood, perhaps with a few additional definitions, or a little more explanation from someone knowledgeable. But as the twentieth century draws to a close, the U.S. Human Genome Project moves closer to its goal: determining and mapping the complete sequence of DNA in the human genome by the year 2003. The implications of the Project's work for courts and the entire legal system are enormous: The HGP's ultimate goal is to discover all of the more than 80,000 human genes and render them accessible for further biological study.... Information obtained as part of the HGP will dramatically change almost all biological and medical research and dwarf the catalog of current genetic knowledge. Both the methods and data developed through the project are likely to benefit investigations of many other genomes, including a large number of commercially important plants and animals. In a related project to sequence the genomes of environmentally and industrially interesting microbes, in 1994 DOE initiated the Microbial Genome Program. For this reason, in addition to the DOE and NIH programs, genome research is being carried out at agencies such as the U.S. Department of AgricultureÉand the private sector. In a departure from most scientific programs, research also is being funded on the ethical, legal, and social implications (ELSI) of HGP data.1 Potential government and private sector applications of this knowledge—gene therapies, gene transfers, genetic screening, and new biotechnologies—ultimately will give rise to a myriad of disputes that will make their way into the courts for resolution. The legal issues involved in these controversies, and the evidence that underlies them, will be far more complex than the two brief definitions of DNA and genome at the outset of this article. As judges and lawyers ready themselves for this growing level of scientific evidence, one principal justice system decision maker is largely unprepared...the trial juror. Already, the most familiar form of genomic evidence, DNA "fingerprinting" (or "profiling," or "typing") in criminal cases, is widely admissible in state and federal courts, by court decision or legislation. The possible uses of genomic evidence, however, are not limited to criminal matters. Some states have already enacted legislation regulating health insurers' use of genetic testing data. Disputes involving insurance coverage, medical malpractice, product liability, toxic torts, employment discrimination, paternity, privacy, and intellectual property will become increasingly complex as the knowledge of not only human, but plant and animal genetics, and the practical applications of that knowledge, become more widespread. As one commentator has said, it is "not whether genetic evidence will ever be admitted into court, but when and under what kinds of circumstances."2 Against this backdrop, the ability of juries to adequately understand genomic evidence, distinguish between and resolve contradicting opinions of expert witnesses, and properly apply the law to the evidence is being called into question. Some court watchers believe juries are not competent to resolve scientific evidence issues, and matters of complex scientific evidence should be removed from them. Others argue that the societal values represented by both criminal and civil juries are too important to forego, and that the common sense approach jurors bring to disputes equip them in a unique, capable manner to comprehend novel and complex scientific evidence. In reality, the truth likely lies somewhere in between. Yet, there is little doubt that increasingly complex scientific issues have the potential to further tax the jury system, and that courts must seek new ways to help jurors deal with scientific evidence. To do so, courts will have to promote an active learning environment within the courtroom—in effect, turn courtrooms into classrooms. This new approach to jury trials is under way in some states today, pioneered by Arizona in its far-reaching 1995 jury rule changes including permitting jurors to ask questions, take notes, and in civil cases allowing jurors to discuss the evidence during the trial.3 Arizona's objective: improve the experience and decision making of jurors by redefining their role from passive observers to active participants, using applied, proven adult learning methods, and permitting information to unfold during the trial in more meaningful and understandable ways—in other words, to increase the potential of the "search for the truth." As research on Arizona's jury reform experience progresses, there is growing evidence that the courtroom, turned juror-friendly classroom, is more conducive to juror comprehension and promotes ease in understanding complex concepts and data. If such is the case, must others wait for statewide system changes? The simple answer: no. Courts and lawyers already possess the means and discretion to enable juries to better carry out their vital roles. Judges and lawyers can independently recognize their roles as educators by embracing ground breaking jury reforms and introducing them in their own courts. These reforms will become increasingly important as genomic evidence appears ever more routinely in America's courtrooms. Juries and complex cases Over the past 30 to 40 years, the perceived performance of juries has been criticized, both in high-profile criminal cases and in complex civil litigation in antitrust, securities, intellectual property, and product liability cases. Critics have questioned whether a jury of untrained and inexperienced people can be a competent fact finder and decision maker in lengthy trials that require comprehension of substantial quantities of complex scientific, technical, or statistical evidence, and resolving the testimony of duplicative expert witnesses whose opinions conflict. Moreover, it is alleged, juries in complex trials will have greater difficulty understanding and remembering the court's instructions, and properly applying the law to the facts. Faced with such a burden, say critics, jurors who are untrained in science and technology are ill-equipped for sound fact finding. As a result, critics allege, jurors will base their decisions less on the evidence and a careful consideration of the reliability of expert testimony, than on external cues, such as the perceived relative expertise and status of the expert witnesses, and will be more susceptible to "junk science" and emotional appeals.4 Intuitively then, we would expect juries to have enormous difficulties with the complex legal issues and scientific evidence that will confront the courts as disputes involving the strange, new world of human genetics and statistical probabilities become more commonplace. We would expect, as well, new proposals for replacing juries with such expert bodies as science courts and expert or "blue ribbon" panels. At the same time, however, a growing body of research on juries and their performance in both "simple" and complex cases is giving us a different picture.5 This research, based on case studies and "lab" or experimental studies, shows that jurors, rather than giving up in the face of voluminous evidence and conflicting expert opinions, take their fact-finding and decision-making responsibilities seriously. The research shows that while certain elements of complex trials do tax jurors' comprehension and understanding, there is no firm evidence that their judgments have therefore been wrong. Jurors are in fact capable of resolving highly complex cases. These studies have also shown that factors such as length of trial, and evidentiary complexity in itself, are not necessarily the critical factors in jury performance in complex matters. The problem presented by conflicting testimony of experts hired by the respective parties, for example, is present in simple as well as complex cases. Finally, the research shows that jurors, rather than being passive participants in the trial process, are active decision makers and want to understand. Jurors actively process evidence, make inferences, use their common sense, have individual and common experiences that inform their decision making, and form opinions as a trial proceeds.6 What the research shows then, along with the experiments and experiences of active and concerned judges in complex cases, is that the trial process itself may be as much an impediment to jury comprehension and understanding as the complexity of the legal concepts and evidence, or the competencies of jurors.7 Many factors, including failure to follow instructions, confusing instructions, non-sequential presentation of evidence, "dueling" expert witnesses, evidentiary admissibility rulings, and attorney strategic errors, affect the jury's ability to follow and comprehend complex evidence. Researchers, and increasingly many progressive courts, suggest that reforming and improving the "decision making environment"8 can improve not only jury comprehension and performance, but juror satisfaction with their trial experience. Challenging the current model The Arizona Supreme Court's Committee on More Effective Use of Juries recognized these issues when it made 55 recommendations to reform the jury system, many of which resulted in the officially adopted comprehensive jury reform rules in 1995. In the introduction to Jurors: The Power of 12, its report to the supreme court, the Committee cited "unacceptably low levels of juror comprehension of the evidence" as one of the motivating factors in urging the Supreme Court to adopt its proposed jury reform rules.9 Arizona's reforms, designed to make jurors active participants during the trial, include juror note taking, pre-deliberation discussions of evidence during civil trials, and the right of jurors to ask written questions. The Arizona reforms also permit judges greater latitude in exercising their inherent powers to provide to each juror preliminary and final written jury instructions, as well as to open up a dialogue between the jurors, the judge, and the lawyers when a jury believes it is deadlocked or needs assistance. The result has been increased satisfaction with the judicial process by judges, lawyers, jurors, and litigants. For years, jury reforms such as note taking and question asking were opposed on the assumption that jurors would miss crucial pieces of evidence or assume the role of advocate rather than neutral fact-finder. The empirical evidence collected thus far, however, overwhelmingly indicates that such opportunities do not adversely affect the pace or outcome of trials. It is intellectually arrogant for those in the legal system to assume that lay jurors are incapable of processing complex information. We have all been thrust into a technologically advanced world, and lawyers and judges are hardly better prepared for the task of sifting through scientific evidence than the jury. But common sense suggests that jury reform measures will aid understanding, and jurors themselves support reforms such as those described above.10 We should recognize that it makes little sense to oppose practices that make jurors more comfortable with complex scientific information. To drive the point home, we have often made the observation that it is difficult to imagine an academic setting in which taking notes and asking questions would not be permitted. Fortunately, the tides are beginning to shift in the debate over jury reform. Already a number of states are adopting new rules; Arizona, Colorado, and California are just a few.11 In New York, much of the reform debate has centered on the selection, administration, and management of the jury, but substantive changes are not far behind. Reforms such as increased jury fees and security, and a juror hotline to report problems have been quite successful. However, the trend in these states and others is to expand beyond administrative concerns and attempt to improve jury deliberations and performance. These grassroots efforts led the American Bar Association in 1998 to adopt a number of jury reform ideals drafted by a Section of Litigation task force as part of its Civil Trial Practice Standards. In adopting these standards, the ABA recognized the need to provide juries, lawyers, and judges with the tools to increase jury comprehension in this era of increasingly complex evidentiary issues. However, a complete overhaul of state and local jurisdictional rules is not necessary. These reforms can often be implemented, consistent with existing rules, at the discretion of the trial judge. Of course, when local rules conflict, those rules control, but most judges possess the inherent power to implement reforms in complex cases. For example, Rule 611 of the Federal (and Arizona) Rules of Evidence permit the judge to control the mode and order of questioning witnesses and presenting evidence. With the number of complex cases dramatically on the rise, judges and lawyers need to collaborate to help the jury become better fact-finders. A practical guide Many lawyers and judges seem to have forgotten the proper role of juries. Alexis de Tocqueville, the renowned historian, once said: [t]he jury...may be regarded as a gratuitous public school, ever open, in which every juror learns his rights,...and becomes practically acquainted with the laws, which are brought within the reach of his capacity by the efforts of the bar, the advice of the judge, and even the passions of the parties...I look upon the [the jury] as one of the most efficacious means for the education of the people which society can employ.12 It is this idea of educating the jury, of treating the courtroom as a classroom, that judges and lawyers alike need to recapture. We urge all members of the legal profession to implement, on their own initiative, the appropriate reforms when cases require an understanding of complex scientific evidence. Before we discuss individual reforms in more detail, it is important to note the role of judges in rigorously applying the rules of evidence. The judge plays a very important role in improving jury comprehension by appropriately screening evidence and admitting only that which meets the appropriate standards. The judge must scrupulously protect the jury from unreliable scientific evidence.13 Jury selection. Lawyers are often criticized for using their peremptory challenges to "dumb down" the jury. In complex cases, however, it is in the best interest of all concerned to select educated jurors and not strike persons based on the extent of their education. While there is little empirical evidence to demonstrate that more educated jurors are struck more often than less educated jurors, there does seem to be an unwritten rule of practice that professionals should be struck when possible. The authors themselves plead guilty to using that approach as trial lawyers. Perhaps lawyers fear that highly educated individuals will dominate in the jury room and be able to persuade the jury to their side during deliberations. However, preliminary data suggest, and we believe, that jurors take their job seriously and will not be easily persuaded to a position with which they do not agree.14 Those lawyers who believe in "dumbing down" juries should adjust their views accordingly, and recognize the important role of jurors as fact finders and decision makers. Of course, both lawyers and judges must still attempt to detect jurors with prejudices or preconceived ideas, but they should also seek to empanel the best jurors available from the pool. Juror note taking and notebooks. Of all the reforms discussed, allowing the jury to take notes during the trial must be the most common-sense and least controversial. Nevertheless many jurisdictions just don't get it. Research indicates that note taking does not distract jurors, nor does it create an undue influence on those jurors who choose not to take notes. Judges in Arizona instruct jurors that they are not obligated to take notes, and they tell the jury to pay attention to all aspects of the trial including witness demeanor and the documentary and testimonial evidence. The vast majority of courts recognize that it is within the sound discretion of the trial judge to permit jurors to take notes. Judges need to thoughtfully exercise their discretion and allow juror note taking in complex cases, and lawyers must urge judges to do so. Jurors need to be encouraged to take an active role in the trial. Allowing the jury to keep track of parties, witnesses, testimony, and evidence by taking notes will empower juries to improve their recall and understanding of all issues, simple and complex. Jurors in complex cases should also be given a comprehensive notebook containing items such as simplified jury instructions, layouts of the courtroom with the names and locations of lawyers and parties, and glossaries of scientific terms or helpful scientific diagrams, photographs, charts, and background data of all types. Better jury instructions. Judges historically instruct juries at the end of the trial. There are few rules or cases, however, that prohibit judges from instructing juries earlier. Judges in Arizona provide juries with pretrial instructions that, for example, define the elements of the alleged crime or define terms such as "negligence" and "fault." This permits the jury to understand the basic legal standards early in the case, refer to them during the trial, and then concentrate on the presentation of the evidence. Jury instructions should be written in plain English. When drafting jury instructions, both judges and lawyers should avoid unnecessary legal jargon. In Arizona, the state bar's Civil Jury Instruction Committee even includes a linguistics professor from a local university. Jury instructions must also be tailored to the case at trial. Instead of using only pattern jury instructions, judges should work with counsel to draft case-specific instructions that include party names and actual facts in the case, without commenting on the evidence. Instructions should be given early in the case both orally and in writing for maximum comprehension and memory retention. The written instructions should be included in the jury notebook. Jurors need to understand the legal context of the evidence presented, and early instruction facilitates a better understanding of its legal relevance. Finally, jurors should each be given a written copy of the final instructions and they should be allowed to have the instructions in the deliberation room. Arizona's rules require judges to provide each juror with a copy of all the jury instructions. After all, why should jurors have to pass a single copy when a few dollars can provide copies all around? And where is it written that jury instructions must only be oral? Permitting the jury to ask written questions. When it comes to issues of scientific evidence, lawyers and judges collaborate to understand and narrow the issues before the court. They ask each other questions to clarify misunderstandings prior to trial, and will confer even during the trial. Yet, once the trial begins, jurors traditionally are not permitted to ask questions. It is time to end this nonsensical practice. Jury questions should be written and given to court personnel before the witness leaves the courtroom. Counsel should be given the opportunity to object in a sidebar, or outside the hearing of the jury, and the jury should be instructed about the limitations on questions that can be asked. In Arizona, there have been no reports of problems with this type of procedure after thousands of trials over the last four years. A study reported in the March-April 1996 issue of Judicature found that jury questions helped jurors understand the facts and issues, that jurors did not ask inappropriate questions, and that jurors did not draw inappropriate inferences when their questions, due to counsel's objection, for example, were not asked.15 As the comments to the ABA Standards noted, state and federal courts have overwhelmingly recognized that it is within the sound discretion of the trial judge to allow juror questioning of witnesses. We encourage judges and lawyers to experiment with jury questions in complex cases. The empirical evidence, and our own experience, reveals that the fears and concerns about jury questions are unfounded. As two Arizona attorneys recently wrote, "Our experience [with juror questions] reinforces for us the effectiveness of juror questions in keeping the jury engaged and in improving the quality of our own trial presentations. The jurors' questions revealed areas of confusion or concern, enabling us to adjust our presentation accordingly."16 Juror discussion during civil trials. Perhaps one of the most controversial Arizona reforms at the time of its adoption, and still controversial today, is allowing jurors in civil cases to discuss the evidence prior to final deliberation. In Arizona, jurors are carefully instructed by the trial judge that they may discuss the case, so long as all members of the jury are present and they reserve judgment until final deliberations. The general consensus of the Arizona bench and bar is that this reform has been a success. In fact, the Committee on the More Effective Use of Jurors, in its second report to the Arizona Supreme Court (in June, 1998), recommended that the rules be expanded to allow pre-deliberation discussions during criminal trials. As of this writing, however, the supreme court has not adopted that recommendation. Traditionally, the view has been that permitting jurors to discuss the evidence early in the trial will lead them to make up their minds before hearing both sides. Recent studies suggest that this is not true.17 In fact, some studies have gone so far as to say that requiring jurors to refrain from discussing evidence actually hinders their ability to process information.18 Pre-deliberation discussion can help improve juror comprehension, improve memory recall, and relieve the tension created by a forced atmosphere of silence with regard to the evidence presented at trial.19 Social scientists report that jurors naturally tend to actively process information as it is received. Therefore, it is not surprising to find that studies show that anywhere from 11 to 44 percent of jurors discuss the evidence among themselves during the trial despite judicial admonitions to avoid such discussion.20 Explicitly allowing pre-deliberation discussions, then, is really an acknowledgment of what often occurs naturally. Perhaps surprising to some, Arizona's experience has shown that when one individual juror makes a preliminary judgment during pre-deliberation discussions, that judgment is often tested or challenged by the entire group.21 In United States v. Wexler (1987) Judge Ditter aptly explained that "jurors are concerned, responsible, conscientious citizens who take most seriously the job at hand." Like Judge Ditter, we believe the jurors are more interested in doing justice than in justifying their own loosely based preliminary conclusions, which are frequently subject to modification as a result of group discussions. A recent study of jury discussions during Arizona trials found that jurors overwhelmingly support this reform and report that it has positive effects.22 Specifically, jurors said that discussions improved comprehension of evidence, that all jurors' views were considered, and evidence was remembered accurately. Additionally, only a very low percentage of participants in the study said that trial discussions encouraged jurors to make up their minds early on. The study also found that, among judges, lawyers, and jurors, support for this reform increases with experience. Permitting pre-deliberation discussion, more than any other reform, challenges the legal profession's traditional notions of jury behavior, but it is time to recognize the need for juries to have better tools in dealing with complex evidentiary issues. Independent court appointed or stipulated experts. Unlike fingerprint or ballistic evidence, where it is easier to understand the samples juries are asked to compare, genetic evidence requires juries to sit through conflicting scientific interpretations from expert witnesses presented by the opposing parties. Early presentation of independent experts, either court appointed or stipulated, can help solve many of the problems presented by genetic evidence. Recent surveys suggest that judges favor appointing independent experts in complex cases. However, statistics show that the actual use of court appointed experts is relatively low.23 This situation is unfortunate because there are many advantages to be realized by the use of independent experts. For example, a case involving the admissibility of DNA evidence using a particular type of analysis was recently before the Arizona Superior Court. Both parties agreed to the appointment of a neutral court expert to testify about the procedures used in this analytial method. Substantial saving, in time and money, were realized by the appointment of the court expert. Judicial economy and fairness demand the use of innovative techniques in dealing with admittedly complex scientific issues. In most jurisdictions trial judges have inherent authority to appoint experts as technical advisors to assist the court. In fact, judges may appoint expert witnesses for testimonial purposes under Rule 706 of the Federal Rules of Evidence and similar provisions in force in most states. However, the use of court appointed experts to serve as a jury tutor on the basics of, for example, DNA evidence, is an under-utilized tool.24 Pre-recorded video "lectures" may be another avenue to explore when considering how to educate jurors on issues of "common" scientific knowledge. The basic building blocks of DNA and the basic methods of DNA testing could be simplified and presented to the jury in such a fashion as to make it much less intimidating.25 Many lawyers may argue that "dueling experts" is the model courts should adhere to, based on the adversarial nature of our justice system. However, a recent study found that jurors do not rely on cross-examination of expert witnesses designed to point out flawed scientific methodology.26 The authors suggest that this is because jurors do not believe lawyers are sincere in their attempts to educate jurors, but rather see cross-examination as the lawyer's attempt to undermine the expert through any means possible. Independent experts present an opportunity to not only improve juror comprehension and performance, but also decrease the substantial costs of expert witnesses, and increase judicial economy. The adversarial nature of the trial may be diminished, but that is actually a benefit, not a cost, according to independent experts considering jury reactions to lawyer cross-examination of opposing party witnesses. It is the judge's responsibility to be proactive in ensuring that the trial is a search for the truth, and that it is not about lawyers setting up roadblocks to that search. Allow a dialogue between jurors, lawyers, and the judge during deliberations. In place of the traditional "pep talk" judges often give to deadlocked juries, Arizona explicitly provides for an opportunity for further instruction by the judge and argument by the parties. Why should the opportunity to educate jurors further stop once deliberations begin? Allowing additional evidence, argument by counsel, or providing further instruction is not problematic, legally or pragmatically. Of course, judges must be careful not to influence jurors and need to limit further inquiries only to those issues that confuse or divide the jury. Once again, there are many cases approving the judge's inherent authority to reopen a case for additional evidence or argument where the jury needs further admissible evidence to reach a verdict, or to determine if a deadlock is unavoidable.27 Opening the courtroom to more creative learning. Increasingly, the Human Genome Project's Ethical, Legal and Social Implications Program is sensitizing the judicial and legal community about the changing rule of the law in light of new genetic discoveries and testing methods. Primers reviewing DNA and genome science have been written, memorable cartoon drawings simplify sophisticated concepts,28 and video background resources explaining genetics in meaningful non-scientific ways are growing in number. Further, difficult concepts can be reduced to plain English and conveyed to juries through innovative technologies, including live, videotaped, or interactive Internet-based testimony. These approaches can easily be presented while simultaneously ensuring that complex scientific evidence is afforded the utmost of seriousness. Educating the jury early in the trial, by using court appointed experts, better written jury instructions, jury notebooks, and basic adult education techniques, will provide a foundation for later testimony of experts presented by the lawyers. Jurors who have been tutored early about complex scientific issues will be in a better position to judge both the content and character of dueling experts. Two central participants in the courtroom are the ultimate beneficiaries of reform-oriented jury approaches when heavy doses of scientific evidence are the subject of an unfolding courtroom drama: jurors, and more importantly, litigants. Contemporary behavioral research, and Arizona's jury reform experience, substantiate that comprehension and understanding are significantly enhanced when information is actively processed. Most courts already possess the tools to implement the educational techniques discussed above. Whether through system-wide jury reform or the efforts of individual trial judges and trial lawyers, a more jury-centered trial will not only allow jurors to actively and intelligently participate in the fact-finding and decision-making process, but also give the litigants a better truth-finding forum. Robert D. Myers is Presiding Judge of the Arizona Superior Court in Maricopa County. Ronald S. Reinstein is Associate Presiding Judge of the Arizona Superior Court in Maricopa County. Gordon M. Griller is court administrator, Arizona Superior Court in Maricopa County and a member of the Board of Directors of the American Judicature Society. The authors wish to thank Timothy D. Keller, a law researcher for Judge Robert D. Myers, and Richard Teenstra, assistant director of the Maricopa County Superior Court Law Library, for their assistance. 1. Department of Energy, Office of Biological and Environmental Research, Life Sciences Division, Human Genome Research: An Introduction (visited Sept. 2, 1999) http://www.science.doe.gov/. 2. Denno, Legal Implications of Genetics and Crime Research, in Bock and Goode, eds., Genetics of Criminal and Antisocial Behaviour 235 (Chichester, N.Y.: Wiley, 1996). 3. See Arizona Supreme Court Orders, Nos. R-94-0031, R-92-004 (1995). 4. See Adler, The Jury: Trial and Error in the American Courtroom (New York: Times Books, 1994); Jury Comprehension in Complex Cases: Report of a Special Committee of the ABA Litgation Section (Chicago: American Bar Association, 1989). 5. For a review of criticisms of civil jury competencies and the jury research literature, see Lempert, Civil Juries and Complex Cases: Taking Stock after Twelve Years, in Litan, ed., Verdict: Assessing the Civil Jury System 181-247 (Washington, D.C.: Brookings Institution, 1993); Vidmar, The Performance of the American Civil Jury: An Empirical Perspective, 40 Ariz. L. Rev. 849 (1998); Cecil, Hans and Wiggins, Citizen Comprehension of Difficult Issues: Lessons from Civil Jury Trials, 40 Am. U. L. Rev. 727 (1991). 6. Hans, Hannaford and Munsterman, The Arizona Jury Reform Permitting Civil Jury Trial Discussions: The Views of Trial Participants, Judges, and Jurors, 32 U. Mich. J.L. Reform 349 (1999). 7. See Dann, "Learning Lessons" and "Speaking Rights": Creating Educated and Democratic Juries, 68 Ind. L.J. 1229 (1993). 8. Cecil, Hans and Wiggins, supra n. 5, at 765. 9. Jurors: The Power of 12, Report of the Arizona Supreme Court Committee On More Effective Use of Juries (November 1994). 10. Hans, Hannaford and Munsterman, supra n. 6, at 371-372. 11. For a review of state jury reform efforts, see Munsterman, A brief history of state jury reform efforts, 79 Judicature 216 (1996); Murphy, et al, Managing Notorious Trials (Williamsburg, Va.: National Center for State Courts, 1998); Enhancing the Jury System: A Guidebook for Jury Reform (Chicago: American Judicature Society, 1999). 12. de Tocqueville, Democracy in America 295-296 (Vintage ed. 1945). 13. Daubert v. Merrell Dow Pharm. Inc., 509 U. S. 579 (1993). 14. Hans, Hannaford and Munsterman, supra n. 6. 15. Heuer and Penrod, Increasing juror participation in trials through note taking and question asking, 79 Judicature 256, 260-261 (1996). 16. Cabot and Coleman, Arizona's 1995 Jury Reform Can be Deemed a Success, Arizona Journal, July 12, 1999, at 6. 17. See Hans, Hannaford and Munsterman, supra n. 6; Hannaford, Hans and Munsterman, "Permitting Jury Discussions During Trial: Impact of the Arizona Reform" 9 (1998) (unpublished manuscript, on file with the authors). 18. Chilton and Henley, Improving the Jury System, Jury Instructions: Helping Jurors Understand the Evidence and the Law, §II, PLRI Reports (Spring 1996) http://www.uchastings.edu/plri/spr96tex/juryinst.html. 19. Hans, Hannaford and Munsterman, supra n. 6; Hannaford, Hans and Munsterman, supra n. 17; Chilton and Henley, supra n. 18. 20. Chilton and Henley, supra n. 18. 21. Myers and Griller, Educating Jurors Means Better Trials: Jury Reform in Arizona, 36 Judges J. 13-17, 51 (Fall 1997). 22. Hans, Hannaford and Munsterman, supra n. 6. 23. Sanders, Scientifically Complex Cases, Trial by Jury, and the Erosion of Adversarial Processes, 48 DePaul L. Rev. 355, 378-379 (1998). 24. The Evaluation of Forensic DNA Evidence 169-171 (Washington, D.C.: National Research Council, 1996). 25. For examples of excellent illustrations and explanations, see Hoagland and Dotson, The Way Life Works (New York: Time Books, 1995). 26. Kovera, McAuliff and Hebert, Reasoning About Scientific Evidence: Effects of Juror Gender and Evidence Quality on Juror Decisions in a Hostile Work Environment Case, 84 J. of Applied Psychology 362, 372-373 (1999). 27. Myers and Griller, supra n. 21, at 16-17. 28. See Hoagland and Dotson, supra n. 25. |The online presentation of this publication is a special feature of the Human Genome Project Information Web site.|
<urn:uuid:bb3fb35f-139e-415f-a24e-04834351e3b5>
CC-MAIN-2013-20
http://www.ornl.gov/sci/techresources/Human_Genome/publicat/judicature/article10.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945384
7,044
3.03125
3
It was previously thought that Australopithecus afarensis walked in a crouched posture, and on the side of the foot, pushing off the ground with the middle part of the foot, as today's great apes do. We found, however, that the Laetoli prints represented a type of bipedal walking that was fully upright and driven by the front of the foot, particularly the big toe, much like humans today, and quite different to bipedal walking of chimpanzees and other apes.Quite remarkably, we found that some healthy humans produce footprints that are more like those of other apes than the Laetoli prints. The foot function represented by the prints is therefore most likely to be similar to patterns seen in modern-humans. This is important because the development of the features of human foot function helped our ancestors to expand further out of Africa. Our work demonstrates that many of these features evolved nearly four million years ago in a species that most consider to be partially tree-dwelling. These findings show support for a previous study at Liverpool that showed upright bipedal walking originally evolved in a tree-living ancestor of living great apes and humans. Australopithecus afarensis, however, was not modern in body proportions of the limbs and torso.The characteristic long-legged, short body form of the modern human allows us to walk and run great distances, even when carrying heavy loads. Australopithecus afarensis had the reverse physical build, short legs and a long body, which makes it probable that it could only walk or run effectively over short distances.
<urn:uuid:ba682725-73c9-44c0-85ec-9f992c91ad4a>
CC-MAIN-2013-20
http://www.dailytech.com/article.aspx?newsid=22213&commentid=701291&threshhold=1&red=5318
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957719
322
4.25
4
|Research Home | Pavements Home| |This report is an archived publication and may contain dated technical, contact, and link information| Publication Number: FHWA-RD-98-085 From the public's perspective, the most important basic purpose of a roadway pavement is to provide a smooth safe ride. For a pavement to accomplish this purpose it must be durable, and it is the responsibility of pavement engineers to ensure that pavements are designed and constructed in such a way that they achieve this durability. Pavement researchers are continually looking for ways that will allow engineers to design and construct pavements that provide a smooth safe ride for the longest amount of time and for the least cost. Pavements historically have been constructed from a combination of locally available natural materials, including natural soils, select soils such as natural gravel, and processed material such as quarried stone. Pavements are constructed in layers with the weaker or least durable materials at the bottom and the stronger or most durable materials at the top. The top layers are normally bound together with some sort of binder. Commonly used binders include hydraulic cements and bituminous cements. The most widely used of the bituminous cements is asphalt, a petroleum product refined from crude oil. What makes asphalt desirable for pavement construction is its tendency to stick to the granular material used in the upper pavement layers and thus to keep this material in place. In addition, when asphalt is heated, it becomes very fluid and can be mixed with gravel or rock, making it an easy material to process in the mass quantities required for pavement construction. If a layer of asphalt bound gravel or rock is thick enough, it takes on structural characteristics of its own and contributes to the overall durability of the pavement. Pavement engineers and researchers discovered long ago that the durability of a pavement is dependent on how it is used. Heavy wheel loads, many wheel loads, or a combination of both will shorten the service life of a pavement. Thicker pavement layers were found to counteract the effects of heavier or more numerous wheel loads. Pavement researchers developed mathematical relationships to calculate the thickness required for specific accumulations of expected wheel loads. Recently developed relationships tend to relate to the specific mechanical properties of the materials used in pavement construction and to how much strain these materials can tolerate while remaining intact. Asphalt has some unique properties that relate to the mechanical properties of an asphalt bound layer of gravel or rock. Asphalt is a liquid, albeit a very stiff liquid under normal ambient temperatures. When an asphalt material is deformed slightly, and for a very short period of time, it tends to return to its original shape. If the deformation is larger or if it occurs over a longer period of time, the asphalt does not fully return to its original shape. The amount of force needed to deform asphalt increases when the asphalt is cooled and decreases when it is warmed. In short, we have a material that deforms if loaded excessively or for too long, and how much or how long it takes to deform depends on its temperature. Another property of asphalt is that, although it is a liquid, it can crack if loaded too much, too quickly or too many times. Additionally, it may lose its bond to the gravel or rock under the same circumstances. All of these factors make asphalt a difficult material to model. One of the mechanical properties of a blend of asphalt and gravel is its modulus, or stiffness. The particular modulus that pavement engineers are most interested in is the amount of recoverable deformation that occurs due to a load. This is sometimes termed the resilient modulus. Another property is the amount of un-recovered (plastic) deformation due to that same load. A special case of the resilient modulus is the dynamic modulus, a measure of the deformation due to an applied load, plus a measure of the time delay between load application and deformation. Asphalt also does not deform simultaneously to a load being applied, an effect known as hysteresis. Rather, it begins to deform when the load is applied and continues to deform over some period of time, although usually a fairly short amount of time. All of these stiffness properties of asphalt are very dependent on temperature. The stiffness of the asphalt layer in turn controls the amount of bending, or deflection, that will occur in a pavement when a load is applied. There are two types of deflections that are relevant to pavement analysis: A pavement engineer or researcher, will measure the deflection with an FWD, analyze the deflection data, usually by backcalculation, to determine a resilient modulus for the asphaltic bound layers, and then use this result to predict how much deflection a truck axle load will generate. The temperature of the asphalt must be taken into account in both cases so that deflections or modulus can be adjusted as needed. With the trend toward mechanistic-empirical design methods, methods to adjust the pavement response for temperature are needed. One such method was developed from the Long Term Pavement Performance (LTPP) program. Within LTPP, the seasonal monitoring program (SMP) was initiated to measure pavement deflections and corresponding pavement temperatures on over 40 pavement test sections throughout the United States and Canada. The testing is conducted on half of the sections for one year, then on the other half the following year. The sections are also instrumented to measure in-pavement temperatures and moisture contents. The FWD tests were conducted at two-year intervals at the same positions within the test sections to minimize the spatial effects (variation in test results that are due to variation in the pavement in both longitudinal and transverse directions). The SMP provides the largest dataset of deflections and related pavement temperatures currently available to researchers. To illustrate the effect that pavement temperatures have on deflections, Figure 1 and Figure 2 show the variation in deflection response to in-pavement temperatures measured at test site locations in Nebraska and Colorado. Figure 1 shows the change that occurs over the course of a few hours at the same point within the same day. Notice that the temperatures effect the deflections close to the load and not away from the load. This is because the top asphalt layer is sensitive to temperature and the underlying unbound materials such the aggregate base and subgrade soil are not. It may be argued that deflections furthest away from the load plate do not change because temperatures do not vary as much, or as quickly in those lower layers, but it is known from independent measurements that those materials are not temperature sensitive. Note that the graph shows a symmetric deflection basin for illustration purposes only. The measurements were only made at the right side of the ordinate. If deflection sensors had been placed on both sides of the load plate, the basin would have been shown to be asymmetric because pavements are not exactly uniform in all directions. Figure 1. Variations in Deflection due to Temperature in Nebraska Figure 2 shows the variation in deflections at a single spot on a test site in Colorado over the course of a year. The deflection is plotted against the temperature measured at the mid-depth of the asphalt pavement. Although this shows a strong relationship between temperature and deflection, other seasonal effects are reflected in this plot also. It can be seen that temperature alone explains 88 percent of the variation in the deflections. The remaining 12 percent of the variation is due to seasonal effects and random error. The seasonal effects at other locations or other pavements will be similar. Figure 2. Variations in Deflection due to Temperature at One Location in Colorado Figure 3 shows the combined effect of temperature and season at a location on a Nebraska test site. The deflections now show that only the outer sensor measurement remained unchanged, indicating that the seasonal effect was not evident in that sensor but did show up at the intermediate sensors. It is still evident that temperature was responsible for most of the observed changes in deflection. Figure 3. Temperature and Seasonal Effects in Nebraska Figure 4 shows the backcalculated moduli for the 160 mm asphalt layer at the same location shown in Figure 3. The trend-line shows that temperature changes explain nearly 98 percent of the variation in the backcalculated moduli. The backcalculated moduli are from 18 different measurements taken from this specific point over the course of a year. If the asphalt moduli are converted to logarithms and plotted against the temperature, the plot becomes linear as shown in Figure 5. Figure 4. Variation in Backcalculated Moduli at a Location in Nebraska Figure 5. Variation in the Log of the Backcalculated Moduli at a Location in Nebraska The SMP, described in the previous section, provided a large amount of data that was used by Lukanen, Stubstad, and Briggs1 to develop empirical regression models for predicting in-depth pavement temperatures from surface temperatures, the time of day when the surface temperature measurement was made, and the average air temperature of the day before. With the same data, Lukanen et al developed empirical regression models that related measured deflections, deflection basin shape factors, or backcalculated moduli to the temperature of the asphalt at mid-depth. These models may be used to adjust measured deflections, deflection basin factors, or backcalculated moduli to those values expected at different temperatures. The BELLS model is used to increase productivity and (arguably) accuracy during testing in the field. An FWD equipped with infrared (IR) sensors records surface temperatures at every test location, accounting for the effects of varying shade levels and color over the pavement surface. Temperature measurements taken in-depth at a fixed location on the test site cannot account for such variations. The surface temperatures can be input into the appropriate BELLS model to calculate the in-depth temperature for each test. Once the in-depth temperatures are calculated, the deflections, basin shape factor, or backcalculated modulus at any other temperature can be predicted by using the models developed and presented within the Lukanen et al report. The ability to measure deflections under any temperature and then adjust the results for all other temperatures greatly increases the usefulness of deflection testing. Without this ability, deflection test productivity would be extremely limited since it would always have to be performed at the specific pavement temperature of interest. Evaluation of the structural capacity of an asphalt pavement typically involves the measurement of pavement deflections under a load with an FWD. At the same time that the deflections are measured, the temperature of the asphalt surface is commonly measured with an IR thermometer mounted on the FWD. If the FWD does not have such a thermometer, surface temperatures can be measured manually with a hand held instrument, or with a surface contact thermometer. Alternatively, a small hole may be drilled to allow measurement of the asphalt temperature at the desired depth directly. The procedure described here deals with using surface temperature measurements to estimate the temperature at some depth within the asphalt using the BELLS equations. Once the temperature at depth is estimated, the deflection measurements, or the backcalculated asphalt moduli can be adjusted to the deflection or moduli expected at any other temperature. Detailed descriptions of using BELLS to estimate mid-depth temperatures and to adjust deflection responses for the effects of temperature follow. Topics: research, infrastructure, pavements and materials Keywords: research, infrastructure, pavements and materials TRT Terms: Pavements, Asphalt--United States--Temperature, Pavements, Asphalt--Performance--United States, Long-Term Pavement Performance Program (U.S.), Adjustment factors, Asphalt pavements, Backcalculation
<urn:uuid:38be948f-1701-4f07-b487-ccc5370c95fe>
CC-MAIN-2013-20
http://www.fhwa.dot.gov/publications/research/infrastructure/pavements/ltpp/98085/gendis.cfm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928659
2,392
3.6875
4
Give the students some white wax crayons and tell them to lay down a thick layer on manilla paper. Then tell them to cut out high contrast photos (without letters) from newspapers-- color comics work, too. Turn the image over on to the wax layer and burnish well--you can use the round handle end of scissors, wooden spoons, clay tools, whatever. The image from the newsprint will transfer into the white crayon. You can have them create captions, extend other ideas for images. The kids really get into the physical aspects of preparing these transfers and you are free to concentrate on your mural painters. Ann-on-y-mouse in Columbus how to manage the mural painters out in the hall, while supervising the rest of the class. Any ideas for less teacher intensive projects with a high enough engagement level that students would be able to work on out in the hall?
<urn:uuid:68383523-b24a-44fa-b2cd-3e2b3edca1fe>
CC-MAIN-2013-20
http://www.getty.edu/education/teacherartexchange/archive/Feb03/0825.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.901922
203
2.84375
3
(Swans - March 28, 2011) The terrible earthquake and tsunami that hit Japan have caused not only massive devastation and the loss of thousands of lives over a wide area, but they also caused major problems at several nuclear power plants, particularly at Fukushima. It is these nuclear problems that have tended to dominate discussions in the news media. The radioactive plumes from the Fukushima plant have so far caused slightly raised levels of radioactivity across a wide area. Nevertheless, dangerous levels have been essentially confined to workers at the site. The threatened meltdown of spent fuel rods in underwater storage at the plant is entirely due to the complete loss of electrical power in the area. This loss of electrical power eliminated the pumping of cooling water to the storage ponds and elsewhere in the plant. This allowed the heat generated by the fuel rods to raise the temperature sufficiently to boil the radioactive ponds dry. The continued heating threatened not simply the release of radioactivity to the atmosphere, but also the loss of integrity of the ponds' structures. This could release the melted nuclear material to an uncontrolled, and possibly uncontrollable, area around the ponds, and would potentially threaten any further rehabilitation of the plant. Certainly, some elements in the rods such as iodine, strontium, and caesium would be emitted to the atmosphere, but considering the known effects of the 1986 complete meltdown at Chernobyl, the expected resulting deaths would be far less than those already caused by the earthquake and tsunami themselves. And, anyway, hope still remains of avoiding the meltdown. It is necessary to emphasize that damage at the plant was entirely due to the unforeseen strength of the tsunami, which poured over the insufficient sea defences, putting out of action all electrical mains and backup power, with consequent failure of all the cooling water pumps, the development of fires in the plant, and the overheating of the spent rods in the storage ponds. It is supremely evident that the design of the plant coped very well with an earthquake of a severity greater than had been designed for, but was overwhelmed by the height of the tsunami. If similar provision for resistance to a severe earthquake was considered and implemented in the design and construction of the nuclear plants in California, then the dismal scenario forecast by Alexander Cockburn (1) is far from the reality of this industry. Widespread reports of radioactivity in water and the atmosphere of what is described as thrice normal levels sounds far more threatening than it is in practice. It has never been properly explained to the public that acceptable levels of radioactivity are set at a level at the most 1/100th the lowest accepted no effect level for human health. This is a level of the kind that has widespread use in the control and assessment of the dangers to human health of substantially all pollutants known to be dangerous at high levels, including food additives and the like, as well as radioactivity. Nevertheless, public concern exists, and it becomes necessary to discuss whether governments should abandon nuclear energy as a source of electrical power. Available energy sources Current energy sources used in the United Kingdom are approximately 39% coal; 36% gas; 22% nuclear; 2% hydro & pumped storage, 1 to 2%; and wind 1 to 2%. (2) Similar figures for the United States are 1% petroleum, 17% natural gas, 51% coal, 9% renewable, and 21% nuclear. Usages in other developed countries are broadly similar, with France being notable for its concentration on hydro and nuclear. Costings for the United Kingdom are not readily available to me, but costings predicted in the United States for 2016 show anticipated cost per megawatt hour as approximately: coal (various technologies) $65 to $93, gas (various technologies) $17 to $46, nuclear $90, hydro $52, wind (onshore) $84, wind (offshore) $210, solar $195 to $260. (3) What is remarkable about these figures are the relatively low cost of coal, and the relatively high cost of the newer renewables, wind and solar. Coal is, and seems likely to remain for at least decades, the major fuel for electricity supply in most countries. Wind power sources are highly subject to weather variability; and solar power is only operative during daylight hours, and is similarly subject to variability of cloud cover. Accordingly, both these sources require equivalent back up from power plants powered with conventional or nuclear fuels. Neither can be considered as practical alternatives to conventional plants. Realism about Energy Energy is, undoubtedly, fundamental to all modern society. With a steady and reliable source of electrical energy all things are possible. For anyone living in the Western world, where a reliable and continuous source of electrical power is taken for granted, it is very hard to understand the difficulties of development, or even of everyday living, that faces millions in much of the rest of the world. Just try to remember the difficulties you found during the last power failure you experienced; which, if you live in one of the leading economies, probably lasted for only a few minutes, or hours, not for weeks or months, and you will realise how crucial is a reliable power supply. Until a means of large-scale storage of electricity at far above the megawatt-hour level is found, renewables can only make a relatively minor, and always unreliable, contribution to world energy supplies. The really important thing to understand is that renewables cannot produce the reliable electrical power we need, and we shall remain for a considerable period reliant on coal, gas, oil, and nuclear power for our main and reliable energy needs. Nuclear waste is considered waste because it is not sufficiently high in radioactivity to be useful for concentration as fuel and yet is dangerous to humans in quantity. There are at present only two basic ways of getting rid of it, either to hide it deep underground in cement (or lead) encasements, or spread it around! Widely distributed as small particles in the atmosphere or in the oceans, the total quantity of radiation would be insignificant compared to the size of the earth and its background radioactivity, to which it would cause no significant increase. The radioactivity of the waste is per mass relatively low, but it is also of long half-life, so will remain at substantially the same level for perhaps centuries or longer, and that is why it is so dangerous bulked into a small place. The troubles with the second method are two-fold. The technical problems are: what chemical compounds of the radioactive elements to choose for the dispersal, and then how to spread it so that it is really fully dispersed around the world. The third problem is one of public relations. Whatever is proposed cannot seem to calm the inordinate fear of radioactivity that grips the populace. Unfortunately, few seem capable of understanding the quip of the old sage Paracelsus that "the poison is in the dose." Certainly there has not, so far, been any public relations attempt to explain it. It is perfectly feasible that a third method, involving inhibition of the radioactivity, may become available in the future as a result of further understanding of nuclear physics as a result of experiments at the Large Hadron Collider resulting in further understanding of the controlling features of quantum mechanics in nuclear physics. As one who was early, well before the Campaign for Nuclear Disarmament, against the use of atomic energy for military instead of peaceful (power) uses, I fear that the one thing our campaigns have left is this apparently all-pervading fear (akin to panic) of radioactivity. In practice radioactivity is a part of reality for which we know extremely well how to check on dangers, and how to use for benefit. Fear of radioactivity is understandable, because it cannot be seen or felt. Local radioactivity is readily detectable and measured with electronic instruments based on the Geiger counter. Nevertheless, it is completely out of the individual's personal control. Yet the current background of radioactivity everywhere is barely affected by all the radioactive fallout introduced to the atmosphere since 1945, and is mainly due to the radioactivity of the earth itself, together with the constant bombardment of the earth by so-called cosmic rays originating from the far corners of the universe. I am no expert in inorganic chemistry, being more used to dealing with biological materials, an area in which low radioactivity materials have been so beneficial in experimentation, and in medicine. So, though I would, in principal, prefer the second method (dilution) to the first method (localised hiding), I have no real idea of how to carry it out in practice. To enlarge on my reference above to Paracelsus, this is the pet name of the wonderfully-named Philippus Aureolus Theophrastus Bombastus von Hohenheimm, who was a famous physician, alchemist, and astrologer of the early 16th century. He was probably the first to recognise that there are many substances very well known to the lay public that can be medicine at small doses, while causing illness at larger doses. One example group consists of the vitamins, unknown in his day, but now known to be so necessary in small amounts in the daily diet, despite being able to cause medical problems at higher doses. It is a finding of exceedingly wide applicability. Radioactivity can be both a cause and a cure for cancer. It all depends on the dose. Of course, it is up to government agencies and the nuclear power industry itself, to contain radioactivity within acceptable limits for human health. It does seem that at least part of the public relations problem of nuclear power resides in the common perception that governments, and by extension their public servants, cannot be trusted to tell the truth. Nevertheless nuclear power has served well, with remarkably few accidents involving fatalities or injuries to either the public or its workforce. Indeed, in so far as its immediate and associated workforce is concerned, its record is probably better than that of all the other power sources considered. As experience and development continues, nuclear energy holds out the best prospect of a truly environmentally benign and reliable provider of electrical power. It must be supported until, in the possibly distant future, its fission reaction method can be superseded by the hopeful paragon of the fusion reaction. It is time to rid ourselves of the influence of the jeremiads who constantly exaggerate dangers and ignore the benefits of every new development in science and technology. If you find Paddy Apling's work valuable, please consider Feel free to insert a link to this work on your Web site or to disseminate its URL on your favorite lists, quoting the first paragraph or providing a summary. However, DO NOT steal, scavenge, or repost this work on the Web or any electronic media. Inlining, mirroring, and framing are expressly prohibited. Pulp re-publishing is welcome -- please contact the publisher. This material is copyrighted, © Edward Apling 2011. All rights reserved. Have your say Do you wish to share your opinion? We invite your comments. E-mail the Editor. Please include your full name, address and phone number (the city, state/country where you reside is paramount information). When/if we publish your opinion we will only include your name, city, state, and country. About the Author Paddy Apling is a retired British scientist. His bio reads with many acronyms: "BSc MChemA CChem FRSC FIFST MRSPH Lecturer in Food Science, University of Reading, 1962-1986 Professional Member, AACC International Professional Member, Institute of Food Technology (USA) Member, Society of Chemical Industry." Put it in simpler terms, he is a fellow of the Royal Society of Chemistry and the Institute of Food Science & Technology, qualified for appointment as Public Analyst, retired from University of Reading (1962-86). Apling maintains his own Blog at http://apling.freeservers.com/. To learn more about him, please read Louis Proyect's recollection of his meeting with Paddy in Manhattan, January 2009. (back) 3. Levelized energy cost chart 1, 2011 DOE report.gif Source: Energy Information Adninstration, December 2010, DOE/EIA -- 0383(2) (back)
<urn:uuid:8c6d846b-73ca-4c8a-876f-0afd38363a41>
CC-MAIN-2013-20
http://www.swans.com/library/art17/paddy01.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948957
2,521
3.390625
3
By Gerald Forbes While the South Burbank pool is being cited as an example of the economic advantages of unitization,1 it is an interesting fact that the first two decades of oil development in the Osage Reservation contained similar features of community interest and concerted control. Between the years 1896 and 1916 the petroleum of the Osage Reservation was developed under a single contract, known as the Foster Lease, the only enduring and successful "blanket" lease in Mid-Continent's history. The Foster Lease undoubtedly was a monopoly,2 but just as certainly it was an instrument of conservation. The former Osage Reservation, the present Oklahoma county by that name, contains about one and a half million acres that were bought from the Cherokees, preparatory to moving the Osages from Kansas in 1872. This territory is bounded by the ninety-sixth meridian on the east, the Arkansas River and the former Creek Nation on the south and west, and Kansas on the north.3 It was nearly twenty years after the Osages had bought the land that the possibility of producing petroleum began to be investigated. The American Civil War had interrupted the beginning of the oil industry in Kansas, but in the final decade of the century the production of petroleum became an established industry in that state. In 1895 the oil production of Kansas was 44,430 barrels.4 Among those persons interested, in the oil industry was Henry Foster, who had moved to Independence, Kansas, from Rhode Island. Foster suspected the presence of petroleum beneath the land of the Osages. In 1895 he applied to the Secretary of the De- 1John J. Arthur, "Unitization vs. Competition," The Oil Weekly, V. 83, No. 2, September 21, 1936, pp. 22-26; The Oil Weekly, V. 82, No. 13, September 7, 1936, p. 51. 2Kate P. Burwell, "Richest People in the World," Sturm's Oklahoma Magazine, II, No. 4, pp. 89-93; United States Geological Survey, Mineral Resources of the United States, 1905, p. 885. 4United States Geological Survey, Mineral Resources of the United States, 1889-90, 355; University Geological Survey of Kansas, IX, Special Report on Oil and Gas, 1908, pp. 21-23. partment of the Interior for a lease of the Osage Reservation. Henry Foster died before the contract was consummated, but his brother, Edwin B. Foster, assumed his interests and obligations. The lease finally was signed, March 16, 1896, by Edwin B. Foster and the Osage National Council, with James Bigheart, principal chief of the Osages, Saucy Chief, president of the Council, and several other Indians writing their names or making their "X's."5 The terms of the contract, which soon became known as a "blanket lease," conferred on Foster the exclusive right of producing oil in the entire Osage Reservation. The term was for ten years. Foster, in turn, agreed to pay a royalty of ten per cent of all crude petroleum removed from the ground and fifty dollars a year for each gas well, as long as it was used. The royalty was to be based on the market value at the place of production, and was to be paid to the National Treasurer of the Osage Nation. Foster further agreed to settle the royalty accounts between the fifth and tenth days of January, April, July, and October.6 Even at this early period there was dissention, and in less than a month a protest was filed with the Secretary of the Interior. The leading protestant was Saucy Chief, who had placed his "X" on the original Foster Lease. The protest was not attested and not clearly genuine. It declared that a full council had not been present when the leasing had been discussed and that the contract did not represent the wishes of a majority of the Osage Tribe. An investigation followed, and the Osage Agent reported that two white men had taken about fifty Indians across the Arkansas River to Cleveland, Oklahoma Territory, where the Osages had been induced with whiskey to sign a protest to the Foster Lease.7 Edwin B. Foster and the heirs of Henry Foster, having organized the Phoenix Oil Company, arranged with McBride and 5Osage Indian Archives, Pawhuska, Oklahoma, D. M. Browning to Henry Foster, January 24, 1896; Hines, E. P., Osage County, in Snider, L. C., Oil and Gas in the Mid-Continent Field, p. 208; Kappler, Charles J., Indian Affairs, Laws and Treaties, III, 1913, p. 137. 6Exact copy of the original lease—Mining Lease, Osage Agency, Oklahoma Territory, 1896, for Prospecting and Mining for Oil and Gas upon the Osage Reservation, Oklahoma Territory. 7Osage Indian Archives, D. M. Browning to U. S. Indian Inspector Duncan, April 6, 1896; D. M. Browning to Acting Agent Freeman, June 13, 1896. Bloom, drillers of Independence, Kansas, to put down a well three or four miles south of Chautauqua Springs, Kansas, likely near the present town of Boulanger, Oklahoma.8 This well was shallow but it produced about fifty barrels of oil daily, not enough at that time to be commercially valuable, so it was capped. The rumor that the well was an excellent one became current. It was rumored that this first Osage well had been closed to permit the owners to acquire leases cheaply in the Oklahoma Territory. The first Osage well was drilled in 1897. It was in 1899 that the Osage Oil Company, another Foster concern, drilled on the eastern side of the Osage Reservation near Bartlesville, a town in the Cherokee Nation. The first well of the Osage Oil Company showed prospects of petroleum and the second well was a good producer. Several dry or nearly dry holes were drilled, but the seventh well of the group was the best producer in the entire Kansas-Indian Territory oil field.9 By 1900 Foster had done little to develop the petroleum industry in the Osage Reservation, but in that year arrangements were completed for subleasing the land in large blocks. The entire reservation was divided into tracts half a mile wide and three miles east to west. These rectangles were numbered consecutively and those in the eastern part of the Reservation were offered to sublessees on a bonus and royalty basis. The sublessees were required to pay the Foster interests a one-eighth (later one-sixth) royalty and a bonus of one to five dollars an acre.10 The next year the Foster interests were consolidated in the Indian Territory Illuminating Oil Company (usually called the I. T. I. O.) which was incorporated at Trenton, New Jersey, with a capitalization of three million dollars. This new company was authorized to own and control all the rights and properties of the Osage and Phoenix Oil Companies.11 It was the I. T. I. O. that handled the subleasing 8Hines, loc. cit. p. 208; Hutchison, L.L., Preliminary Report on Rock Asphalt, Asphalite, Petroleum and Natural Gas in Oklahoma, Oklahoma Geological Survey, Bulletin No. 2, 1911, p. 167; Tidal Topics, III, Tidal Oil Company, 1919, p. 15. 11The Tulsa Democrat, Tulsa, Indian Territory, December 27, 1901, The Osage Journal, January 2, 1902. of the Osage Reservation, and buyers of drilling rights were sought in New York. The first well drilled by a sublessee was financed by the Almeda Oil Company on Lot 40. The Indian Territory Illuminating Oil Company announced that it planned to drill wells itself at the rate of one every twenty days, that the Standard Oil Company would buy the crude oil production at its refinery at Neodesha, Kansas, for eighty-eight cents a barrel, and that leases had been sold to New York and St. Louis companies covering rights on about six thousand acres of land. The I. T. I. O. further called attention to the quality of the crude oil which caused it to yield a high percentage of kerosene. (The name of the company itself calls attention to the fact that gasoline then was not of first importance.) The average depth of the wells was thirteen hundred feet, which made them relatively inexpensive to drill.12 Drilling in the Osage Reservation was comparatively rapid after the system of subleasing had been perfected. During 1902 the rail shipments of crude oil to the Neodesha refinery amounted to 37,000 barrels, which was the production of thirteen wells, six of which had been drilled in 1902. By January, 1903, thirty wells had been completed by the I. T. I. O. and its sublessees. Seventeen of the thirty wells produced oil, two gas, and eleven were dry holes. A year later 361 wells had been completed, and 243 were producing oil, twenty-one gas, and ninety-seven were dry. By the beginning of 1906 there had been 783 wells drilled—544 producing oil, forty-one gas, and 198 were dry. The oil production was: 1903—56,905 barrels; 1904—652,479 barrels; 1905—3,421,478 barrels; 1906—5,219,106 barrels. The average daily production of the Osage wells in 1905 was about 15,000 barrels. In 1905 there were 687,000 acres of the Osage Reservation under the control of the sublessees.13 That year the I. T. I. O. announced that it had disbursed $2,686,627 in connection with the "blanket lease." By the terms of the contract with Foster, the Osages were to receive one-tenth royalty (later changed to one-eighth) while the I. T. I. O. Company required one-eighth (later changed to one- 13United States Geological Survey, Mineral Resources of the United States, 1905, p. 855; 1906, p. 858; 1914, pp. 1009-1010; Tidal Topics, III, p. 15. sixth) royalty of its sublessees, making a profit of one-fortieth (later one-twenty-fourth) in addition to rentals and bonuses. There were less than twenty-five hundred members of the Osage tribe on the official rolls. The rolls contained Indians on the list January 1, 1906, and all children born to them by July, 1907, and those children of white fathers who had not been enrolled previously. There was no distinction between males and females, age, or degree of Indian blood. The equal share which each member of the tribe received from the communal mineral receipts was known as a headright. Headrights, it was provided by law, could be inherited, subdivided or consolidated, and as time passed different members of the tribe did not receive equal shares, as was the case at the time of the Osage allotment. This allotment differed from that of the other Oklahoma tribes, for it provided that only the surface of the land be held in severalty while the minerals of the subsurface remained communal property. As the sublessees of the I. T. I. O. developed the oil industry, the royalties of the Osage tribe mounted and were divided into headright payments.14 The days of the quarterly payments at Pawhuska, seat of the Osage agency, were colorful. On the first and second days of the payments, the full bloods received their monies; then the mixed-bloods were paid on the following two or three days. By 1906 the quarterly payment period kept force of eight men busy for four or five days. The merchants and professional men of Pawhuska who had extended credit to the Indians were on hand to collect their bills before the Osages had spent their money elsewhere. The amount of the payment depended on the number of barrels of oil taken from the ground, the number of gas wells being used, and the market price of petroleum. Accurate figures on the receipts from oil and gas are difficult to acquire, for the Osages also received payments for grazing permits, pipe line damages, and other revenues. Between July 1, 1904, and May 13, 1905, a total of $108,567 was paid to the Osages as oil and gas royalties.15 14United States Statutes At Large, XXXIV, p. 540; Kappler, op. cit., p. 256; United States Geological Survey, Mineral Resources of the United States, 1906, p. 855; Daniel, L. H. "The Osage Nation," The Texaco Star, V. Nos. 7-8, pp. 10-14. Congress began considering the renewal of the Foster Lease in 1905, although it did not expire until March, 1906. Several of the tribal leaders went to Washington to watch the action of Congress, and there were some who wished to prevent renewal of the contract.16 There were oil operators who called attention to the profits they believed the I. T. I. O. company was making and objected that one firm should have such a monopoly. After an investigation, Congress compromised by renewing the Foster Lease and all the subleases made by the I. T. I. O. on a total area of six hundred and eighty thousand acres on the eastern side of the Reservation. All the original conditions of the Foster Lease were to apply for another decade, with the exception that gas well royalty was increased from fifty to one hundred dollars for each well. The status of the western half of the Reservation was left undetermined until 1912. The renewal with reduced acreage left the Indian Territory Illuminating Oil Company with only 2,060 acres that had not been subleased, and caused that firm to lease from its own sublessees.17 Before 1904 the Osage oil was transported by railroad, but in that year the Department of the Interior approved two applications for pipelines to move the crude petroleum. The amount of damages to be paid the Osages puzzled the Federal officials, for there was no precedent for laying pipelines across Indian lands. The Prairie Oil and Gas Company wanted to lay a line to the refinery at Neodesha, while Guffey and Galey sought to pipe gas to Tulsa.18 Damages were fixed at ten cents a rod. In 1905 the Prairie constructed the "Cleveland discharge" line, which connected the Osage wells near Cleveland, Oklahoma Territory, with the trunk line to Kansas. Another outlet for the Osage petroleum appeared with the construction of a refinery by the Uncle Sam Oil Company at Cherryvale, Kansas. The disagreements of the Uncle Sam and the 17"History of twenty-three Years of Oil and Gas Development in the Osage," "National Petroleum News," V. No. 11, pp. 66-68; Kappler, op. cit., p. 137; Osage Indian Archives, Memorandum, p. 1; Osage Indian Archives, C. F. Larrabee to Frank Frantz, June 7, 1905; Hines, E.P., loc. cit., p. 208; Department of the Interior, Commissioner of Indian Affairs, Annual Report, I, p. 307. 18The Osage Journal, March 18, 1905; The Cherokee Advocate, Tahlequah, Indian Territory, April 4, 1903. Standard companies were dragged through the courts for years.19 In 1910 the Gulf Pipe Line Company became a buyer of Osage oil, since inadequate transportation facilities had resulted in 1909 in a decrease of production.20 Drilling and production received no more setbacks until 1915, when little drilling was done because of the uncertainty resulting from the struggle over the second renewal of the Foster Lease. The disposition of the mineral rights in the western half of the Osage county (Oklahoma became a state in 1907) became a pressing question in 1911. A committee of Osages urged the National Council to lease the western land on terms that would be more profitable to the Indians. Royalties of one-third and one-sixth were suggested. Since the Osages were interested in farming and ranching, as well as oil, it was argued that no company should be permitted to drill for oil without the "written consent" of the allottee on whose land the well was desired. After revising some of the suggestions of the committee, the Osage National Council went on record as favoring sealed bids for leases. Sealed bids would prevent leasing except at specific times, and then the lease would go to the highest bidder.21 While this discussion was current among the Indians, some oil operators met at Tulsa and decided on a plan for leasing the western side of Osage County. They proposed the organization of a large company of independent operators, each of whom would be on an equal cooperative footing. Such a company, the oil men believed, would be financially able to contract for the entire unleased acreage of Osage lands. They believed this company could deal pleasantly with the Department of the Interior. The financing of this huge company was expected to be comparatively simple, and it was argued that such a concern would be able to dictate favorable terms to crude oil buyers and thereby gain a profitable 19Osage Indian Archives, C.F. Larrabee to Frank Frantz, January 12, and January 14, 1905; The Muskogee Times Democrat, Muskogee, Indian Territory, January 8, 1907. 21The Osage Journal, January 5, May 25, August 17, September 14, and October 19, 1911; Senate Document 487, 62 Congress, 2 Sess. price for the petroleum. Among the leaders of this plan were P. J. White, Harry Sinclair, E. R. Kemp, and David Gunsberg.22 Samuel Adams, Assistant Secretary of the Department of the Interior asked those who were interested in leasing Osage land to communicate with him.23 The Osage National Council went to Washington to confer with Adams. The proponents of the giant organization of independent producers sent representatives. Many oil men favored neither the plan of the independents nor that of the Osage Council, so a mass meeting was called at Tulsa to protest the organization of the giant cooperative firm. Some believed that the Osage oil long had been a menace to the price of petroleum, and they did not look kindly on any plan to further the production. They suggested that a plan be adopted to discover whether any oil existed in the western side of the Osage County. Several operators believed that all the oil of the Osages had been discovered. Another group, led by E. W. Marland and F. A. Gillespie, opposed any plan involving one big lease. They favored leasing the western side of the county in blocks as small as 160 acres.24 In May, 1912, the Osage National Council directed the principal chief to sign four leases that would cover virtually the entire western part of the county. In these leases were several ideas which the Osages desired, including the "written consent clause," the maintenance by the leasing companies of offices at Pawhuska, and the retention in the county of all the gas. (It was believed that the retention of the gas in the county would induce industries to come.) The leases were issued to four men, one of whom was H. H. Tucker of the Uncle Sam Oil Company, who was reported to be an adopted member of the Osage tribe. The Secretary of the Department of the Interior refused to accept these leases because no provision was made for the supervision of the Federal government. He also frowned on the "written consent clause."25 Despite the fact that Assistant Secretary Adams had said that he would not recognize their election, Bacon Rind, as Principal 25The Osage Journal, March 14 and May 23, 1912; The Tulsa World, March 16, May 25 and June 19, 1912. Chief, and Red Eagle, as Assistant Chief, celebrated their election in July of that year (1912).26 Under the guidance of Bacon Rind and Red Eagle, the Osage National Council joined the Uncle Sam Oil Company in publicly presenting a petition to President Taft asking that the entire unleased portion of the Osage lands be leased to Tucker's company. The Department of the Interior concluded the opposition among the Indians by promptly removing from office both Bacon Rind and Red Eagle, as well as the entire National Council. Tucker responded with a final threat to President Taft that the twelve thousand stockholders of the Uncle Sam Oil Company would remember the refusal of the president to override the decision of the Department of the Interior. He vowed that the stockholders would use their influence to prevent Taft's reelection in November.27 The final decision of the Department of the Interior, issued July 13, 1912, involved elements of several of the plans suggested for the disposal of the west side mineral rights. The land was to be leased in tracts varying from three hundred to 5,120 acres, but no person was to have more than 25,000 acres. The United States Agent at Pawhuska was required periodically to advertise specific tracts for leasing on sealed bids. A person wishing to lease a tract was required to request in writing that the land be offered for bidding. Each bid was to be accompanied by a certified check for ten per cent of the bonus and the first year's rental. All leases were to endure for ten years from the date of approval by the Department of the Interior, providing no lease extended beyond April 8, 1931. The royalty on gas was fixed at one-sixth of the market value at the well, while on petroleum it was set at one-sixth of the gross production at the actual market value. Heretofore the royalty on oil had been one-eighth. Oil men who had been paying one-eighth royalty on oil produced on the land of the Five Tribes objected to giving one-sixth to the Osages, but that was the share which the I. T. I. O. had been receiving from its sublessees. A compromise was reached on the "written consent clause" whereby cultivated lands and homesteads were protected from oil prospec- tors. Producers strongly condemned the new regulations and the Osage National Council.28 The conflict over leasing the west side of the county hardly had ended before it was time for the renewal of the Foster Lease on the east side of the Osage Reservation. The I. T. I. O. minimized the profit it received from the Foster Lease, but in June, 1914, a renewal of the lease was asked. The request of the I. T. I. O. was supported by the company's sublessees. The next month the Osage National Council requested that no blanket lease be approved for the land then held by the I. T. I. O. The leasing company issued a financial statement to show the benefits that it had brought the Osages. The statement said that the I. T. I. O. had received over two million dollars in seventeen years, but that more than a million dollars had been paid to the Indians. The company cited the fact that it had furnished more than one hundred thousand dollars worth of gas free to operating companies. The statement of the I. T. I. O. indicated that the company had spent more in developing the Osage petroleum than it had received from the sublessees.29 When 1915 opened it was clear that some decision must be made regarding the Foster Lease. Secretary Lane of the Department of the Interior called a public hearing at Washington to discuss the lease. Members of the Osage National Council, officials of the Indian administration, and oil operators attended.30 Charles N. Haskell, first governor of Oklahoma, appeared for P. J. White and Harry Sinclair, and declared that the decision would affect the entire Mid-Continent. He asserted that the I. T. I. O. would develop the oil industry in an orderly manner, but that if the district were thrown open to competitive drilling there would be a 28Oklahoma Geological Survey, Bulletin No. 19, Part 1, Petroleum and Natural Gas in Oklahoma, 1915, p. 32; Department of the Interior, Regulations to Govern the Leasing of Lands in the Osage Reservation, Oklahoma, for Oil Gas, and Mining Purposes, 1912, pp. 1-4. 29Estimate of Profit and Loss under the Leases and Subleases of the Indian Territory Illuminating Oil Company in the Osage Reservation, compiled by Charles F. Leech, (nd) Osage Indian Archives. 30The Oil and Gas Journal, February 11, 1915, p. 2; Osage Indian Archives, Cato Sells to J. George Wright, February 10, 1915, The Osage Journal, February 11, 1915. flood of oil that would swamp the marketing facilities. Charles Owen, in a letter that was made a part of the record, took the stand that if the I. T. I. O. were to be protected for the pioneer development, its lease should be renewed where it actually had put down wells, not subleased the land to other companies. Some sublessees objected to the policy of the I. T. I. O. in separating the oil and gas rights, for they argued that they had found the gas, but now that a market was available the I. T. I. O. held it. (By the Foster Lease, the I. T. I. O. owned all gas discovered.) The Osage National Council demanded that leases be made directly with the operating companies without the I. T. I. O. as an intermediary.31 The hearing was concluded in June and the Department of the Interior refused to renew the Foster Lease, deciding to eliminate the I. T. I. O. except as a producing company. The new regulations provided that the east side of the county be broken up into quarter-section units combined in such a way that none would exceed an aggregate of 4,800 acres, except in such units where producing wells were capable of averaging twenty-five barrels a day on July 1, 1915. These units were to be offered at public auction for lease by the Osages under the supervision of the Department of the Interior. Oil and gas rights still were to be kept separately. The royalty on oil was fixed at one-sixth, except on quarter-sections where the average daily production equalled or exceeded one hundred barrels daily. There the royalty was one-fifth. Former sublessees of the I. T. I. O. were allowed to keep those quarter-sections they then were developing provided there would not be a total exceeding 4,800 acres.32 In general these rules were much the same as those governing the oil leases in the lands of the Five Civilized Tribes. March 16, 1916, the Foster Lease expired, ending the only successful blanket lease of the lands of an Indian tribe. The lease 31Osage Indian Archives, J. George Wright to Cato Sells, March 2, 1915, Stenographer's Minutes of Hearing Before Cato Sells, Commissioner of Indian Affairs, in the Matter of the so-called Foster Lease on Oil and Gas Property Owned by the Osage Indians of Oklahoma, Washington, March, 1915, pp. 675-680, 682-683, 686-687, 692-694-703, 710-711, 715, 730; The Osage Journal, May 15, 1915. was a monopoly, but it had good features for all concerned. The Osage lands continued to produce an increasing volume of oil until 1923, whereas the immense deposits in the Creek Nation (Glenn Pool, Cushing Pool, Okmulgee County) were dissipated very rapidly. In the Osage area there was a tendency to avoid competitive drilling because of the large leases. The Osage gas was conserved, thereby retaining much of the natural pressure. Gross overproduction never was one of the evils found in the district. The Osages themselves certainly benefited under the Foster Lease, although their individual wealth generally was over-estimated.
<urn:uuid:763a11b2-2088-4d94-a213-30699c9d1818>
CC-MAIN-2013-20
http://digital.library.okstate.edu/chronicles/v019/v019p070.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.976321
5,851
3.015625
3
Significance and Use Sediment provides habitat for many aquatic organisms and is a major repository for many of the more persistent chemicals that are introduced into surface waters. In the aquatic environment, most anthropogenic chemicals and waste materials including toxic organic and inorganic chemicals eventually accumulate in sediment. Mounting evidences exists of environmental degradation in areas where USEPA Water Quality Criteria (WQC; Stephan et al.(67)) are not exceeded, yet organisms in or near sediments are adversely affected Chapman, 1989 (68). The WQC were developed to protect organisms in the water column and were not directed toward protecting organisms in sediment. Concentrations of contaminants in sediment may be several orders of magnitude higher than in the overlying water; however, whole sediment concentrations have not been strongly correlated to bioavailability Burton, 1991(69). Partitioning or sorption of a compound between water and sediment may depend on many factors including: aqueous solubility, pH, redox, affinity for sediment organic carbon and dissolved organic carbon, grain size of the sediment, sediment mineral constituents (oxides of iron, manganese, and aluminum), and the quantity of acid volatile sulfides in sediment Di Toro et al. 1991(70) Giesy et al. 1988 (71). Although certain chemicals are highly sorbed to sediment, these compounds may still be available to the biota. Chemicals in sediments may be directly toxic to aquatic life or can be a source of chemicals for bioaccumulation in the food chain. The objective of a sediment test is to determine whether chemicals in sediment are harmful to or are bioaccumulated by benthic organisms. The tests can be used to measure interactive toxic effects of complex chemical mixtures in sediment. Furthermore, knowledge of specific pathways of interactions among sediments and test organisms is not necessary to conduct the tests Kemp et al. 1988, (72). Sediment tests can be used to: (1) determine the relationship between toxic effects and bioavailability, (2) investigate interactions among chemicals, (3) compare the sensitivities of different organisms, (4) determine spatial and temporal distribution of contamination, (5) evaluate hazards of dredged material, (6) measure toxicity as part of product licensing or safety testing, (7) rank areas for clean up, and (8) estimate the effectiveness of remediation or management practices. A variety of methods have been developed for assessing the toxicity of chemicals in sediments using amphipods, midges, polychaetes, oligochaetes, mayflies, or cladocerans (Test Method E 1706, Guide E 1525, Guide E 1850; Annex A1, Annex A2; USEPA, 2000 (73), EPA 1994b, (74), Environment Canada 1997a, (75), Enviroment Canada 1997b,(76)). Several endpoints are suggested in these methods to measure potential effects of contaminants in sediment including survival, growth, behavior, or reproduction; however, survival of test organisms in 10-day exposures is the endpoint most commonly reported. These short-term exposures that only measure effects on survival can be used to identify high levels of contamination in sediments, but may not be able to identify moderate levels of contamination in sediments (USEPA USEPA, 2000 (73); Sibley et al.1996, (77); Sibley et al.1997a, (78); Sibley et al.1997b, (79); Benoit et al.1997, (80); Ingersoll et al.1998, (81)). Sublethal endpoints in sediment tests might also prove to be better estimates of responses of benthic communities to contaminants in the field, Kembel et al. 1994 (82). Insufficient information is available to determine if the long-term test conducted with Leptocheirus plumulosus (Annex A2) is more sensitive than 10-d toxicity tests conducted with this or other species. The decision to conduct short-term or long-term toxicity tests depends on the goal of the assessment. In some instances, sufficient information may be gained by measuring sublethal endpoints in 10-day tests. In other instances, the 10-day tests could be used to screen samples for toxicity before long-term tests are conducted. While the long-term tests are needed to determine direct effects on reproduction, measurement of growth in these toxicity tests may serve as an indirect estimate of reproductive effects of contaminants associated with sediments (Annex A1). Use of sublethal endpoints for assessment of contaminant risk is not unique to toxicity testing with sediments. Numerous regulatory programs require the use of sublethal endpoints in the decision-making process (Pittinger and Adams, 1997, (83)) including: (1) Water Quality Criteria (and State Standards); (2) National Pollution Discharge Elimination System (NPDES) effluent monitoring (including chemical-specific limits and sublethal endpoints in toxicity tests); (3) Federal Insecticide, Rodenticide and Fungicide Act (FIFRA) and the Toxic Substances Control Act (TSCA, tiered assessment includes several sublethal endpoints with fish and aquatic invertebrates); (4) Superfund (Comprehensive Environmental Responses, Compensation and Liability Act; CERCLA); (5) Organization of Economic Cooperation and Development (OECD, sublethal toxicity testing with fish and invertebrates); (6) European Economic Community (EC, sublethal toxicity testing with fish and invertebrates); and (7) the Paris Commission (behavioral endpoints). Results of toxicity tests on sediments spiked at different concentrations of chemicals can be used to establish cause and effect relationships between chemicals and biological responses. Results of toxicity tests with test materials spiked into sediments at different concentrations may be reported in terms of an LC50 (median lethal concentration), an EC50 (median effect concentration), an IC50 (inhibition concentration), or as a NOEC (no observed effect concentration) or LOEC (lowest observed effect concentration). However, spiked sediment may not be representative of chemicals associated with sediment in the field. Mixing time Stemmer et al. 1990b, (84), aging ( Landrum et al. 1989,(85), Word et al. 1987, (86), Landrum et al., 1992,(87)), and the chemical form of the material can affect responses of test organisms in spiked sediment tests. Evaluating effect concentrations for chemicals in sediment requires knowledge of factors controlling their bioavailability. Similar concentrations of a chemical in units of mass of chemical per mass of sediment dry weight often exhibit a range in toxicity in different sediments Di Toro et al. 1990, (88) Di Toro et al. 1991,(70). Effect concentrations of chemicals in sediment have been correlated to interstitial water concentrations, and effect concentrations in interstitial water are often similar to effect concentrations in water-only exposures. The bioavailability of nonionic organic compounds in sediment is often inversely correlated with the organic carbon concentration. Whatever the route of exposure, these correlations of effect concentrations to interstitial water concentrations indicate that predicted or measured concentrations in interstitial water can be used to quantify the exposure concentration to an organism. Therefore, information on partitioning of chemicals between solid and liquid phases of sediment is useful for establishing effect concentrations Di Toro et al. 1991, (70). Field surveys can be designed to provide either a qualitative reconnaissance of the distribution of sediment contamination or a quantitative statistical comparison of contamination among sites. Surveys of sediment toxicity are usually part of more comprehensive analyses of biological, chemical, geological, and hydrographic data. Statistical correlations may be improved and sampling costs may be reduced if subsamples are taken simultaneously for sediment tests, chemical analyses, and benthic community structure. Table 2 lists several approaches the USEPA has considered for the assessment of sediment quality USEPA, 1992, (89). These approaches include: (1) equilibrium partitioning, (2) tissue residues, (3) interstitial water toxicity, (4) whole-sediment toxicity and sediment-spiking tests, (5) benthic community structure, (6) effect ranges (for example, effect range median, ERM), and (7) sediment quality triad (see USEPA, 1989a, 1990a, 1990b and 1992b, (90, 91, 92, 93 and Wenning and Ingersoll (2002 (94)) for a critique of these methods). The sediment assessment approaches listed in Table 2 can be classified as numeric (for example, equilibrium partitioning), descriptive (for example, whole-sediment toxicity tests), or a combination of numeric and descriptive approaches (for example, ERM, USEPA, 1992c, (95). Numeric methods can be used to derive chemical-specific sediment quality guidelines (SQGs). Descriptive methods such as toxicity tests with field-collected sediment cannot be used alone to develop numerical SQGs for individual chemicals. Although each approach can be used to make site-specific decisions, no one single approach can adequately address sediment quality. Overall, an integration of several methods using the weight of evidence is the most desirable approach for assessing the effects of contaminants associated with sediment, (Long et al. 1991(96) MacDonald et al. 1996 (97) Ingersoll et al. 1996 (98) Ingersoll et al. 1997 (99), Wenning and Ingersoll 2002 (94)). Hazard evaluations integrating data from laboratory exposures, chemical analyses, and benthic community assessments (the sediment quality triad) provide strong complementary evidence of the degree of pollution-induced degradation in aquatic communities (Burton, 1991 (69), Chapman 1992, 1997 (100, 101).) Regulatory Applications—Test Method E 1706 provides information on the regulatory applications of sediment toxicity tests. The USEPA Environmental Monitoring Management Council (EMMC) recommended the use of performance-based methods in developing standards, (Williams, 1993 (102). Performance-based methods were defined by EMMC as a monitoring approach which permits the use of appropriate methods that meet preestablished demonstrated performance standards (11.2). The USEPA Office of Water, Office of Science and Technology, and Office of Research and Development held a workshop to provide an opportunity for experts in the field of sediment toxicology and staff from the USEPA Regional and Headquarters Program offices to discuss the development of standard freshwater, estuarine, and marine sediment testing procedures (USEPA, 1992a, 1994a (89, 103)). Workgroup participants arrived at a consensus on several culturing and testing methods. In developing guidance for culturing test organisms to be included in the USEPA methods manual for sediment tests, it was agreed that no one method should be required to culture organisms. However, the consensus at the workshop was that success of a test depends on the health of the cultures. Therefore, having healthy test organisms of known quality and age for testing was determined to be the key consideration relative to culturing methods. A performance-based criteria approach was selected in USEPA, 2000 (73) as the preferred method through which individual laboratories could use unique culturing methods rather than requiring use of one culturing method. This standard recommends the use of performance-based criteria to allow each laboratory to optimize culture methods and minimize effects of test organism health on the reliability and comparability of test results. See Annex A1 and Annex A2 for a listing of performance criteria for culturing or testing. 1.1 This test method covers procedures for testing estuarine or marine organisms in the laboratory to evaluate the toxicity of contaminants associated with whole sediments. Sediments may be collected from the field or spiked with compounds in the laboratory. General guidance is presented in Sections 1-15 for conducting sediment toxicity tests with estuarine or marine amphipods. Specific guidance for conducting 10-d sediment toxicity tests with estuarine or marine amphipods is outlined in Annex A1 and specific guidance for conducting 28-d sediment toxicity tests with Leptocheirus plumulosus is outlined in Annex A2. 1.2 Procedures are described for testing estuarine or marine amphipod crustaceans in 10-d laboratory exposures to evaluate the toxicity of contaminants associated with whole sediments (Annex A1; USEPA 1994a (1)). Sediments may be collected from the field or spiked with compounds in the laboratory. A toxicity method is outlined for four species of estuarine or marine sediment-burrowing amphipods found within United States coastal waters. The species are Ampelisca abdita, a marine species that inhabits marine and mesohaline portions of the Atlantic coast, the Gulf of Mexico, and San Francisco Bay; Eohaustorius estuarius, a Pacific coast estuarine species; Leptocheirus plumulosus, an Atlantic coast estuarine species; and Rhepoxynius abronius, a Pacific coast marine species. Generally, the method described may be applied to all four species, although acclimation procedures and some test conditions (that is, temperature and salinity) will be species-specific (Sections 12 and Annex A1). The toxicity test is conducted in 1-L glass chambers containing 175 mL of sediment and 775 mL of overlying seawater. Exposure is static (that is, water is not renewed), and the animals are not fed over the 10-d exposure period. The endpoint in the toxicity test is survival with reburial of surviving amphipods as an additional measurement that can be used as an endpoint for some of the test species (for R. abronius and E. estuarius). Performance criteria established for this test include the average survival of amphipods in negative control treatment must be greater than or equal to 90 %. Procedures are described for use with sediments with pore-water salinity ranging from >0 o/ooto fully marine. 1.3 A procedure is also described for determining the chronic toxicity of contaminants associated with whole sediments with the amphipod Leptocheirus plumulosus in laboratory exposures (Annex A2; USEPA-USACE 2001(2)). The toxicity test is conducted for 28 d in 1-L glass chambers containing 175 mL of sediment and about 775 mL of overlying water. Test temperature is 25° ± 2°C, and the recommended overlying water salinity is 5 o/oo ± 2 o/oo(for test sediment with pore water at 1 o/oo to 10 o/oo) or 20 o/oo ± 2 o/oo (for test sediment with pore water >10 o/oo). Four hundred millilitres of overlying water is renewed three times per week, at which times test organisms are fed. The endpoints in the toxicity test are survival, growth, and reproduction of amphipods. Performance criteria established for this test include the average survival of amphipods in negative control treatment must be greater than or equal to 80 % and there must be measurable growth and reproduction in all replicates of the negative control treatment. This test is applicable for use with sediments from oligohaline to fully marine environments, with a silt content greater than 5 % and a clay content less than 85 %. 1.4 A salinity of 5 or 20 o/oo is recommended for routine application of 28-d test with L. plumulosus (Annex A2; USEPA-USACE 2001 (2)) and a salinity of 20 o/oois recommended for routine application of the 10-d test with E. estuarius or L. plumulosus (Annex A1). However, the salinity of the overlying water for tests with these two species can be adjusted to a specific salinity of interest (for example, salinity representative of site of interest or the objective of the study may be to evaluate the influence of salinity on the bioavailability of chemicals in sediment). More importantly, the salinity tested must be within the tolerance range of the test organisms (as outlined in Annex A1 and Annex A2). If tests are conducted with procedures different from those described in 1.3 or in Table A1.1 (for example, different salinity, lighting, temperature, feeding conditions), additional tests are required to determine comparability of results (1.10). If there is not a need to make comparisons among studies, then the test could be conducted just at a selected salinity for the sediment of interest. 1.5 Future revisions of this standard may include additional annexes describing whole-sediment toxicity tests with other groups of estuarine or marine invertebrates (for example, information presented in Guide E 1611 on sediment testing with polychaetes could be added as an annex to future revisions to this standard). Future editions to this standard may also include methods for conducting the toxicity tests in smaller chambers with less sediment (Ho et al. 2000 (3), Ferretti et al. 2002 (4)). 1.6 Procedures outlined in this standard are based primarily on procedures described in the USEPA (1994a (1)), USEPA-USACE (2001(2)), Test Method E 1706, and Guides E 1391, E 1525, E 1688, Environment Canada (1992 (5)), DeWitt et al. (1992a (6); 1997a (7)), Emery et al. (1997 (8)), and Emery and Moore (1996 (9)), Swartz et al. (1985 (10)), DeWitt et al. (1989 (11)), Scott and Redmond (1989 (12)), and Schlekat et al. (1992 (13)). 1.7 Additional sediment toxicity research and methods development are now in progress to (1) refine sediment spiking procedures, (2) refine sediment dilution procedures, (3) refine sediment Toxicity Identification Evaluation (TIE) procedures, (4) produce additional data on confirmation of responses in laboratory tests with natural populations of benthic organisms (that is, field validation studies), and (5) evaluate relative sensitivity of endpoints measured in 10- and 28-d toxicity tests using estuarine or marine amphipods. This information will be described in future editions of this standard. 1.8 Although standard procedures are described in Annex A2 of this standard for conducting chronic sediment tests with L. plumulosus, further investigation of certain issues could aid in the interpretation of test results. Some of these issues include further investigation to evaluate the relative toxicological sensitivity of the lethal and sublethal endpoints to a wide variety of chemicals spiked in sediment and to mixtures of chemicals in sediments from contamination gradients in the field (USEPA-USACE 2001 (2)). Additional research is needed to evaluate the ability of the lethal and sublethal endpoints to estimate the responses of populations and communities of benthic invertebrates to contaminated sediments. Research is also needed to link the toxicity test endpoints to a field-validated population model of L. plumulosus that would then generate estimates of population-level responses of the amphipod to test sediments and thereby provide additional ecologically relevant interpretive guidance for the laboratory toxicity test. 1.9 This standard outlines specific test methods for evaluating the toxicity of sediments with A. abdita, E. estuarius, L. plumulosus, and R. abronius. While standard procedures are described in this standard, further investigation of certain issues could aid in the interpretation of test results. Some of these issues include the effect of shipping on organism sensitivity, additional performance criteria for organism health, sensitivity of various populations of the same test species, and confirmation of responses in laboratory tests with natural benthos populations. 1.10 General procedures described in this standard might be useful for conducting tests with other estuarine or marine organisms (for example, Corophium spp., Grandidierella japonica, Lepidactylus dytiscus, Streblospio benedicti), although modifications may be necessary. Results of tests, even those with the same species, using procedures different from those described in the test method may not be comparable and using these different procedures may alter bioavailability. Comparison of results obtained using modified versions of these procedures might provide useful information concerning new concepts and procedures for conducting sediment tests with aquatic organisms. If tests are conducted with procedures different from those described in this test method, additional tests are required to determine comparability of results. General procedures described in this test method might be useful for conducting tests with other aquatic organisms; however, modifications may be necessary. 1.11 Selection of Toxicity Testing Organisms: 1.11.1 The choice of a test organism has a major influence on the relevance, success, and interpretation of a test. Furthermore, no one organism is best suited for all sediments. The following criteria were considered when selecting test organisms to be described in this standard (Table 1 and Guide E 1525). Ideally, a test organism should: (1) have a toxicological database demonstrating relative sensitivity to a range of contaminants of interest in sediment, (2) have a database for interlaboratory comparisons of procedures (for example, round-robin studies), (3) be in direct contact with sediment, (4) be readily available from culture or through field collection, (5) be easily maintained in the laboratory, (6) be easily identified, (7) be ecologically or economically important, (8) have a broad geographical distribution, be indigenous (either present or historical) to the site being evaluated, or have a niche similar to organisms of concern (for example, similar feeding guild or behavior to the indigenous organisms), (9) be tolerant of a broad range of sediment physico-chemical characteristics (for example, grain size), and (10) be compatible with selected exposure methods and endpoints (Guide E 1525). Methods utilizing selected organisms should also be (11) peer reviewed (for example, journal articles) and (12) confirmed with responses with natural populations of benthic organisms. 1.11.2 Of these criteria (Table 1), a database demonstrating relative sensitivity to contaminants, contact with sediment, ease of culture in the laboratory or availability for field-collection, ease of handling in the laboratory, tolerance to varying sediment physico-chemical characteristics, and confirmation with responses with natural benthic populations were the primary criteria used for selecting A. abdita, E. estuarius, L. plumulosus, and R. abronius for the current edition of this standard for 10-d sediment tests (Annex A1). The species chosen for this method are intimately associated with sediment, due to their tube- dwelling or free-burrowing, and sediment ingesting nature. Amphipods have been used extensively to test the toxicity of marine, estuarine, and freshwater sediments (Swartz et al., 1985 (10); DeWitt et al., 1989 (11); Scott and Redmond, 1989 (12); DeWitt et al., 1992a (6); Schlekat et al., 1992 (13)). The selection of test species for this standard followed the consensus of experts in the field of sediment toxicology who participated in a workshop entitled “Testing Issues for Freshwater and Marine Sediments”. The workshop was sponsored by USEPA Office of Water, Office of Science and Technology, and Office of Research and Development, and was held in Washington, D.C. from 16-18 September 1992 (USEPA, 1992 (14)). Of the candidate species discussed at the workshop, A. abdita, E. estuarius, L. plumulosus, and R. abronius best fulfilled the selection criteria, and presented the availability of a combination of one estuarine and one marine species each for both the Atlantic (the estuarine L. plumulosus and the marine A. abdita) and Pacific (the estuarine E. estuarius and the marine R. abronius) coasts. Ampelisca abdita is also native to portions of the Gulf of Mexico and San Francisco Bay. Many other organisms that might be appropriate for sediment testing do not now meet these selection criteria because little emphasis has been placed on developing standardized testing procedures for benthic organisms. For example, a fifth species, Grandidierella japonica was not selected because workshop participants felt that the use of this species was not sufficiently broad to warrant standardization of the method. Environment Canada (1992 (5)) has recommended the use of the following amphipod species for sediment toxicity testing: Amphiporeia virginiana, Corophium volutator, Eohaustorius washingtonianus, Foxiphalus xiximeus, and Leptocheirus pinguis. A database similar to those available for A. abdita, E. estuarius, L. plumulosus, and R. abronius must be developed in order for these and other organisms to be included in future editions of this standard. 1.11.3 The primary criterion used for selecting L. plumulosus for chronic testing of sediments was that this species is found in both oligohaline and mesohaline regions of estuaries on the East Coast of the United States and is tolerant to a wide range of sediment grain size distribution (USEPA-USACE 2001 (2), Annex Annex A2). This species is easily cultured in the laboratory and has a relatively short generation time (that is, about 24 d at 23°C, DeWitt et al. 1992a (6)) that makes this species adaptable to chronic testing (Section 12). 1.11.4 An important consideration in the selection of specific species for test method development is the existence of information concerning relative sensitivity of the organisms both to single chemicals and complex mixtures. Several studies have evaluated the sensitivities of A. abdita, E. estuarius, L. plumulosus, or R. abronius, either relative to one another, or to other commonly tested estuarine or marine species. For example, the sensitivity of marine amphipods was compared to other species that were used in generating saltwater Water Quality Criteria. Seven amphipod genera, including Ampelisca abdita and Rhepoxynius abronius, were among the test species used to generate saltwater Water Quality Criteria for 12 chemicals. Acute amphipod toxicity data from 4-d water-only tests for each of the 12 chemicals was compared to data for (1) all other species, (2) other benthic species, and (3) other infaunal species. Amphipods were generally of median sensitivity for each comparison. The average percentile rank of amphipods among all species tested was 57 %; among all benthic species, 56 %; and, among all infaunal species, 54 %. Thus, amphipods are not uniquely sensitive relative to all species, benthic species, or even infaunal species (USEPA 1994a (1)). Additional research may be warranted to develop tests using species that are consistently more sensitive than amphipods, thereby offering protection to less sensitive groups. 1.11.5 Williams et al. (1986 (15)) compared the sensitivity of the R. abronius 10-d whole sediment test, the oyster embryo (Crassostrea gigas) 48-h abnormality test, and the bacterium (Vibrio fisheri) 1-h luminescence inhibition test (that is, the Microtox test) to sediments collected from 46 contaminated sites in Commencement Bay, WA. Rhepoxynius abronius were exposed to whole sediment, while the oyster and bacterium tests were conducted with sediment elutriates and extracts, respectfully. Microtox was the most sensitive test, with 63 % of the sites eliciting significant inhibition of luminescence. Significant mortality of R. abronius was observed in 40 % of test sediments, and oyster abnormality occurred in 35 % of sediment elutriates. Complete concordance (that is, sediments that were either toxic or not-toxic in all three tests) was observed in 41 % of the sediments. Possible sources for the lack of concordance at other sites include interspecific differences in sensitivity among test organisms, heterogeneity in contaminant types associated with test sediments, and differences in routes of exposure inherent in each toxicity test. These results highlight the importance of using multiple assays when performing sediment assessments. 1.11.6 Several studies have compared the sensitivity of combinations of the four amphipods to sediment contaminants. For example, there are several comparisons between A. abdita and R. abronius, between E. estuarius and R. abronius, and between A. abdita and L. plumulosus. There are fewer examples of direct comparisons between E. estuarius and L. plumulosus, and no examples comparing L. plumulosus and R. abronius. There is some overlap in relative sensitivity from comparison to comparison within each species combination, which appears to indicate that all four species are within the same range of relative sensitivity to contaminated sediments. 220.127.116.11 Word et al. (1989 (16)) compared the sensitivity of A. abdita and R. abronius to contaminated sediments in a series of experiments. Both species were tested at 15°C. Experiments were designed to compare the response of the organism rather than to provide a comparison of the sensitivity of the methods (that is, Ampelisca abdita would normally be tested at 20°C). Sediments collected from Oakland Harbor, CA, were used for the comparisons. Twenty-six sediments were tested in one comparison, while 5 were tested in the other. Analysis of results using Kruskal Wallace rank sum test for both experiments demonstrated that R. abronius exhibited greater sensitivity to the sediments than A. abdita at 15°C. Long and Buchman (1989 (17)) also compared the sensitivity of A. abdita and R. abronius to sediments from Oakland Harbor, CA. They also determined that A. abdita showed less sensitivity than R. abronius, but they also showed that A. abdita was less sensitive to sediment grain size factors than R. abronius. 18.104.22.168 DeWitt et al. (1989 (11)) compared the sensitivity of E. estuarius and R. abronius to sediment spiked with fluoranthene and field-collected sediment from industrial waterways in Puget Sound, WA, in 10-d tests, and to aqueous cadmium (CdCl2) in a 4-d water-only test. The sensitivity of E. estuarius was from two (to spiked-spiked sediment) to seven (to one Puget Sound, WA, sediment) times less sensitive than R. abronius in sediment tests, and ten times less sensitive to CdCl2 in the water-only test. These results are supported by the findings of Pastorok and Becker (1990 (18)) who found the acute sensitivity of E. estuarius and R. abronius to be generally comparable to each other, and both were more sensitive than Neanthes arenaceodentata (survival and biomass endpoints), Panope generosa (survival), and Dendraster excentricus (survival). 22.214.171.124 Leptocheirus plumulosus was as sensitive as the freshwater amphipod Hyalella azteca to an artificially created gradient of sediment contamination when the latter was acclimated to oligohaline salinity (that is, 6 o/oo; McGee et al., 1993 (19)). DeWitt et al. (1992b (20)) compared the sensitivity of L. plumulosus with three other amphipod species, two mollusks, and one polychaete to highly contaminated sediment collected from Baltimore Harbor, MD, that was serially diluted with clean sediment. Leptocheirus plumulosus was more sensitive than the amphipods Hyalella azteca and Lepidactylus dytiscus and exhibited equal sensitivity with E. estuarius. Schlekat et al. (1995 (21)) describe the results of an interlaboratory comparison of 10-d tests with A. abdita, L. plumulosus and E. estuarius using dilutions of sediments collected from Black Rock Harbor, CT. There was strong agreement among species and laboratories in the ranking of sediment toxicity and the ability to discriminate between toxic and non-toxic sediments. 126.96.36.199 Hartwell et al. (2000 (22)) evaluated the response of Leptocheirus plumulosus (10-d survival or growth) to the response of the amphipod Lepidactylus dytiscus (10-d survival or growth), the polychaete Streblospio benedicti (10-d survival or growth), and lettuce germination (Lactuca sativa in 3-d exposure) and observed that L. plumulosus was relatively insensitive compared to the response of either L. dytiscus or S. benedicti in exposures to 4 sediments with elevated metal concentrations. 188.8.131.52 Ammonia is a naturally occurring compound in marine sediment that results from the degradation of organic debris. Interstitial ammonia concentrations in test sediment can range from <1 mg/L to in excess of 400 mg/L (Word et al., 1997 (23)). Some benthic infauna show toxicity to ammonia at concentrations of about 20 mg/L (Kohn et al., 1994 (24)). Based on water-only and spiked-sediment experiments with ammonia, threshold limits for test initiation and termination have been established for the L. plumulosus chronic test. Smaller (younger) individuals are more sensitive to ammonia than larger (older) individuals (DeWitt et al., 1997a (7), b (25). Results of a 28-d test indicated that neonates can tolerate very high levels of pore-water ammonia (>300 mg/L total ammonia) for short periods of time with no apparent long-term effects (Moore et al., 1997 (26)). It is not surprising L. plumulosus has a high tolerance for ammonia given that these amphipods are often found in organic rich sediments in which diagenesis can result in elevated pore-water ammonia concentrations. Insensitivity to ammonia by L. plumulosus should not be construed as an indicator of the sensitivity of the L. plumulosus sediment toxicity test to other chemicals of concern. 1.11.7 Limited comparative data is available for concurrent water-only exposures of all four species in single-chemical tests. Studies that do exist generally show that no one species is consistently the most sensitive. 184.108.40.206 The relative sensitivity of the four amphipod species to ammonia was determined in ten-d water only toxicity tests in order to aid interpretation of results of tests on sediments where this toxicant is present (USEPA 1994a (1)). These tests were static exposures that were generally conducted under conditions (for example, salinity, photoperiod) similar to those used for standard 10-d sediment tests. Departures from standard conditions included the absence of sediment and a test temperature of 20°C for L. plumulosus, rather than 25°C as dictated in this standard. Sensitivity to total ammonia increased with increasing pH for all four species. The rank sensitivity was R. abronius = A. abdita > E. estuarius > L. plumulosus. A similar study by Kohn et al. (1994 (24)) showed a similar but slightly different relative sensitivity to ammonia with A. abdita > R. abronius = L. plumulosus > E. estuarius. 220.127.116.11 Cadmium chloride has been a common reference toxicant for all four species in 4-d exposures. DeWitt et al. (1992a (6)) reports the rank sensitivity as R. abronius > A. abdita > L. plumulosus > E. estuarius at a common temperature and salinity of 15°C and 28 o/oo. A series of 4-d exposures to cadmium that were conducted at species-specific temperatures and salinities showed the following rank sensitivity: A. abdita = L. plumulosus = R. abronius > E. estuarius (USEPA 1994a (1)). 18.104.22.168 Relative species sensitivity frequently varies among contaminants; consequently, a battery of tests including organisms representing different trophic levels may be needed to assess sediment quality (Craig, 1984 (27); Williams et al. 1986 (15); Long et al., 1990 (28); Ingersoll et al., 1990 (29); Burton and Ingersoll, 1994 (31)). For example, Reish (1988 (32)) reported the relative toxicity of six metals (arsenic, cadmium, chromium, copper, mercury, and zinc) to crustaceans, polychaetes, pelecypods, and fishes and concluded that no one species or group of test organisms was the most sensitive to all of the metals. 1.11.8 The sensitivity of an organism is related to route of exposure and biochemical response to contaminants. Sediment-dwelling organisms can receive exposure from three primary sources: interstitial water, sediment particles, and overlying water. Food type, feeding rate, assimilation efficiency, and clearance rate will control the dose of contaminants from sediment. Benthic invertebrates often selectively consume different particle sizes (Harkey et al. 1994 (33)) or particles with higher organic carbon concentrations which may have higher contaminant concentrations. Grazers and other collector-gatherers that feed on aufwuchs and detritus may receive most of their body burden directly from materials attached to sediment or from actual sediment ingestion. In some amphipods (Landrum, 1989 (34)) and clams (Boese et al., 1990 (35)) uptake through the gut can exceed uptake across the gills for certain hydrophobic compounds. Organisms in direct contact with sediment may also accumulate contaminants by direct adsorption to the body wall or by absorption through the integument (Knezovich et al. 1987 (36)). 1.11.9 Despite the potential complexities in estimating the dose that an animal receives from sediment, the toxicity and bioaccumulation of many contaminants in sediment such as Kepone®, fluoranthene, organochlorines, and metals have been correlated with either the concentration of these chemicals in interstitial water or in the case of non-ionic organic chemicals, concentrations in sediment on an organic carbon normalized basis (Di Toro et al. 1990 (37); Di Toro et al. 1991(38)). The relative importance of whole sediment and interstitial water routes of exposure depends on the test organism and the specific contaminant (Knezovich et al. 1987 (36)). Because benthic communities contain a diversity of organisms, many combinations of exposure routes may be important. Therefore, behavior and feeding habits of a test organism can influence its ability to accumulate contaminants from sediment and should be considered when selecting test organisms for sediment testing. 1.11.10 The use of A. abdita, E. estuarius, R. abronius, and L. plumulosus in laboratory toxicity studies has been field validated with natural populations of benthic organisms (Swartz et al. 1994 (39) and Anderson et al. 2001 (40) for E. estuarius, Swartz et al. 1982 (43) and Anderson et al. 2001 (40) for R. abronius, McGee et al. 1999 (41)and McGee and Fisher 1999 (42) for L. plumulosus). 22.214.171.124 Data from USEPA Office of Research and Development's Environmental Monitoring and Assessment program were examined to evaluate the relationship between survival of Ampelisca abdita in sediment toxicity tests and the presence of amphipods, particularly ampeliscids, in field samples. Over 200 sediment samples from two years of sampling in the Virginian Province (Cape Cod, MA, to Cape Henry, VA) were available for comparing synchronous measurements of A. abdita survival in toxicity tests to benthic community enumeration. Although species of this genus were among the more frequently occurring taxa in these samples, ampeliscids were totally absent from stations that exhibited A. abdita test survival <60 % of that in control samples. Additionally, ampeliscids were found in very low densities at stations with amphipod test survival between 60 and 80 % (USEPA 1994a (1)). These data indicate that tests with 2. Referenced Documents (purchase separately) The documents listed below are referenced within the subject standard but are not provided as part of the standard. D1129 Terminology Relating to Water D4447 Guide for Disposal of Laboratory Chemicals and Samples E29 Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications E105 Practice for Probability Sampling of Materials E122 Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process E141 Practice for Acceptance of Evidence Based on the Results of Probability Sampling E177 Practice for Use of the Terms Precision and Bias in ASTM Test Methods E178 Practice for Dealing With Outlying Observations E456 Terminology Relating to Quality and Statistics E691 Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method E729 Guide for Conducting Acute Toxicity Tests on Test Materials with Fishes, Macroinvertebrates, and Amphibians E943 Terminology Relating to Biological Effects and Environmental Fate E1241 Guide for Conducting Early Life-Stage Toxicity Tests with Fishes E1325 Terminology Relating to Design of Experiments E1391 Guide for Collection, Storage, Characterization, and Manipulation of Sediments for Toxicological Testing and for Selection of Samplers Used to Collect Benthic Invertebrates E1402 Guide for Sampling Design E1525 Guide for Designing Biological Tests with Sediments E1611 Guide for Conducting Sediment Toxicity Tests with Polychaetous Annelids E1688 Guide for Determination of the Bioaccumulation of Sediment-Associated Contaminants by Benthic Invertebrates E1706 Test Method for Measuring the Toxicity of Sediment-Associated Contaminants with Freshwater Invertebrates E1847 Practice for Statistical Analysis of Toxicity Tests Conducted Under ASTM Guidelines E1850 Guide for Selection of Resident Species as Test Organisms for Aquatic and Sediment Toxicity Tests Ampelisca abdita; amphipod; bioavailability; chronic; Eohaustorius estuarius; estuarine; invertebrates; Leptocheirus plumulosus; marine; Rhepoxynius abronius; sediment; toxicity; Acidity, alkalinity, pH--chemicals; Acute toxicity tests; Ampelisca abdita; Amphipods/Amphibia; Aqueous environments; Benthic macroinvertebrates (collecting); Biological data analysis--sediments; Bivalve molluscs; Chemical analysis--water applications; Contamination--environmental; Corophium; Crustacea; EC50 test; Eohaustorius estuarius; Estuarine environments; Field testing--environmental materials/applications; Geochemical characteristics; Grandidierella japonica; Leptocheirus Plumuulosus; Marine environments; Median lethal dose; Polychaetes; Reference toxicants; Rhepoxynium abronius; Saltwater; Seawater (natural/synthetic); Sediment toxicity testing; Static tests--environmental materials/applications; Ten-day testing; Toxicity/toxicology--water environments ASTM International is a member of CrossRef. Citing ASTM Standards [Back to Top]
<urn:uuid:5696801b-841c-4e58-ac11-6af8637c94c1>
CC-MAIN-2013-20
http://www.astm.org/Standards/E1367.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.889574
9,272
3.71875
4
The Cold War Years With the post-war boom in private, airline and military flying generating a record number of accidents, demand was high for CAP’s leading role in domestic air search and rescue. War surplus aircraft helped jump-start post-war search operations. These hand-me-down wartime observation and liaison aircraft included military versions of the 65-hp Taylorcraft (L-2), Aeronca (L-3), and Piper Cub (L-4) plus the larger Stinson L-5. The CAP fleet was bolstered by thousands of member-owned planes. Command organizations flew larger USAF and surplus aircraft including the Beech C-45 and Douglas C-47. Beginning in 1952, the Air Force made its 85-90 hp Aeronca L-16 post-war observation planes available to CAP. Initially flown in USAF inventory by CAP pilots, some 332 of these military Aeroncas were transferred to CAP ownership in 1956 and given FAA “N” numbers. Used for both search and cadet orientation flights, many a baby boomer today can reminisce about his or her first airplane flight in the L-16. CAP in Nevada and elsewhere were prepared for air sampling, both to help monitor Nevada A-Bomb tests in the 1950s and for Civil Defense roles in case of nuclear attack. In 1957 when the Soviet Union launched Sputnik, the world’s first artificial satellite, America panicked! Early efforts to track satellites involved a system of ground observers scanning the nighttime skies. Satellite passage was so fast – 20 seconds from horizon to overhead to horizon – that ground personnel could only radio their timing of these events as “See – Center – Saw.” How to train for this? How to simulate the passage of a satellite overhead? Air Force jets flew too fast or too high, so CAP planes towed a low-wattage light bulb protected in a low-cost aerodynamic shape: a bathroom plunger! In the nighttime sky, the set-up was exactly as bright at 7,000 feet as an orbiting satellite in space. |>PHOTO GALLERY >ART & MEDIA GALLERY >THE AIRPLANES >MUSEUM >MUSEUM STORE >BE A PATRON >SEE CAP HISTORY LOCALLY| © 2005-2009, CAP Historical Foundation. All rights reserved. No reproduction of text, or photographs for commercial purposes, without written permission.
<urn:uuid:5bfbdfa5-44af-4b08-8eaa-87b04e2e7a5b>
CC-MAIN-2013-20
http://www.caphistory.org/museum_exh_5.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.918283
504
3.359375
3
Theme 10: Ready for Kindergarten! Theme Read Aloud A Theme Read Aloud Book extends and enhances each of the ten themes, helping children build connections. Each Theme Read Aloud Book develops vocabulary and comprehension and may be introduced at any point during the three-week theme. All Theme Read Aloud Books are available in an audio format on the Theme CDs. In Parts, children learn that everything is made up of smaller pieces, including them!
<urn:uuid:10592dc2-f343-4f66-b43b-1d6bf9e40586>
CC-MAIN-2013-20
http://www.eduplace.com/marketing/prek/10/tour-10_sect4-1.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.908134
94
2.609375
3
A near-sighted eye is usually longer than a normal-sighted eye. Incoming rays of light are bundled so that their focal point is not on, but in front of, the retina. Distant objects are perceived in a blurred manner, whilst objects up close are in focus. The longer the eye, the more pronounced the degree of near-sightedness. The interactive animation to the side of this text allows you to see how the eye and image perception change with varying degrees of near-sightedness. The power of refraction of the eyes can be reduced surgically by means of laser correction or intraocular lenses, which shifts the focal point backwards onto the retina. In the case of glasses or contact lenses, this occurs by means of a concave lens, the strength of which is expressed in minus dioptres.
<urn:uuid:79da7058-1e7c-4a35-821c-725eb6de6952>
CC-MAIN-2013-20
http://www.laservista.ch/en/forms-of-defective-vision/near-sightedness/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931534
168
3.65625
4
Literacy skills begin early in life and are critical to a child's health, behaviour and success in school. The Early Literacy Specialist will: - Strengthen, support and promote early literacy and language development within our communities - Work with programs, agencies and families to deliver workshops and presentations - Distribute resources that will further develop language and literacy-rich environments for children aged 0-6 years, with a focus on children aged 0-3 years - facilitating "Train the Trainer" sessions with early years practitioners and parents - working together with libraries, museums and other recreation and cultural programs parenting programs, adult education and literacy programs - evaluating and monitoring early learning programs - providing early literacy materials to programs - and, offering assistance to parents who have questions The Early Literacy Specialist informs parents, child care providers and other interested parties of what is available to them. The following resources are available to borrow at Ontario Early Years Centre near you: - Literacy Suitcases: Suitcases feature many books, activities, costumes, props and much more to add to your daily programming needs - Literacy Kits: A book and related props are included in each literacy kit. They are great for making story time more interactive Who is Eligible? Parents, caregivers, professionals and students may all access the Early Literacy Program. Any community organization involved with children from birth to six years is welcome to access the services of the Early Literacy Specialist. Is there a cost for this program? Workshops and visits to early learning centres and home day cares are free. Borrowing suitcases and other literacy kits is also free. Fees are only charged for cost recovery of materials that are required for some of the workshops and subsidies are available upon request.. How can I access or find out more about this program? For a full listing of the workshops that are currently offered, see "Your Guide" You can call the Early Literacy Specialist at the Ontario Early Years Centre at 519-429-2875 ext 230 or 1-866-463-2759 ext 230. For more related information on literacy and education, you may wish to follow the links below: Infant & Child Development Program, McKinnon Park Child Care Centre Ready Set School Licensed Home Child Care Ontario Early Years Centre-Haldimand and Norfolk Community Action Program for Children "Your Guide" includes a full listing of early literacy workshops being offered. Ontario Early Years Website Haldimand & Norfolk Early Years (online directory featuring community services to support early years) Healthy Babies Healthy Children, Preschool Speech and Language Program Ministry of Children and Youth Grand Erie District School Board Brant Haldimand Norfolk District Catholic School Board Ecole Sainte. Marie Haldimand County Library System Norfolk County Library System Early Years Study 1 Early Years Study 2
<urn:uuid:2c8cac4e-a141-4da7-b337-68fa2249c1a5>
CC-MAIN-2013-20
http://hnreach.on.ca/index.php/child-care-a-other-early-childhood-services/early-childhood-services/early-literacy-program-workshops
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.897401
609
2.53125
3
Water reuse can be defined as the use of reclaimed water for a direct beneficial purpose. The use of reclaimed water for irrigation and other purposes has been employed as a water conservation practice in Florida, California, Texas, Arizona, and other states for many years. Reclaimed water, also known as recycled water, is water recovered from domestic, municipal, and industrial wastewater treatment plants that has been treated to standards that allow safe reuse. Properly reclaimed water is typically safe for most uses except human consumption. Wastewater is not reclaimed water. Wastewater is untreated liquid industrial waste and/or domestic sewage from residential dwellings, commercial buildings, and industrial facilities. Gray water, or untreated wastewater from bathing or washing, is one form of wastewater. Wastewater may be land applied, but this is considered to be land treatment rather than water reuse. The demand for fresh water in Virginia is growing as the state’s population increases. This demand can potentially exceed supply during times of even moderate drought. In recent years, the normal seasonal droughts that have occurred in Virginia have caused local and state government to enact water conservation ordinances. These ordinances limit the use of potable water (water suitable for human consumption) for such things as car washing and landscape irrigation. The potential for developing new sources of potable water is limited. Conservation measures, such as irrigating with reclaimed water, are one way to help ensure existing water supplies are utilized as efficiently as possible. The environmental benefits of using reclaimed water include: Reclaimed water typically comes from municipal wastewater treatment plants, although some industries (e.g., food processors) also generate water that may be suitable for nonpotable uses. (Figure 1). During primary treatment at a wastewater treatment plant, inorganic and organic suspended solids are removed from plant influent by screening, and settling. The decanted effluent from the primary treatment process is then subjected to secondary treatment, which involves biological decomposition of organic material and settling to further separate water from solids. If a wastewater treatment plant is not equipped to perform advanced treatment, water is disinfected and discharged to natural water bodies following secondary treatment. Advanced or tertiary treatment consists of further removal of suspended and dissolved solids, including nutrients, and disinfection. Advanced treatment can include: Water that has undergone advanced treatment is disinfected prior to being released or reused. Reclaimed water often requires greater treatment than effluent that is discharged to local streams or rivers because users will typically have more direct contact with undiluted reclaimed water than undiluted effluent. For an interactive diagram of a wastewater treatment system with more information on treatment processes, please see www.wef.org/apps/gowithflow/theflow.htm. Although the primary focus of this publication is on the use of reclaimed water for agricultural, municipal, and residential irrigation, reclaimed water can be used for many other purposes. Non-irrigation uses for reclaimed water include: Intentional indirect potable reuse means that reclaimed water is discharged to a water body where it is then purposefully used as a raw water supply for another water treatment plant. This occurs unintentionally in most rivers, since downstream water treatment plants use treated water discharged by upstream wastewater treatment plants. Direct potable reuse refers to the use of reclaimed water for drinking directly after treatment, and, to date, has only been implemented in Africa (U.S. EPA, 2004). Examples of non-irrigation permitted water reuse projects in Virginia are: The turfgrass and ornamental horticulture industries have grown as Virginia becomes more urbanized. The acreage devoted to high-value specialty crops that benefit from irrigation, such as fruits and vegetables, is also increasing. As demand for potable water increases, maintaining turf, landscape plants, and crops will require the utilization of previously underutilized water sources. The regulation of reclaimed water production and use encourages both the supply of and the demand for reclaimed water. The benefits to suppliers of reclaimed water include greater public awareness and demand for reclaimed water and clear guidelines for reclaimed water production. Benefits to end users include increased public acceptance of the use of reclaimed water and a subsequent decrease in the demand for fresh water. There are no federal regulations governing reclaimed water use, but the U.S. EPA (2004) has established guidelines to encourage states to develop their own regulations. The primary purpose of federal guidelines and state regulations is to protect human health and water quality. To reduce disease risks to acceptable levels, reclaimed water must meet certain disinfection standards by either reducing the concentrations of constituents that may affect public health and/or limiting human contact with reclaimed water. The U.S. EPA (2004) recommends that water intended for reuse should: Biochemical oxygen demand (BOD) is an indicator of the presence of reactive organic matter in water. Total suspended solids (TSS) or turbidity (measured in nephelometric turbidity units, or NTUs) are measures of the amount of organic and inorganic particulate matter in water. Some other parameters often measured as indicators of disinfection efficiency include: The recommended values for each of these indicators depend on the intended use of the reclaimed water (Table 1). Table 1. Summary of U.S. EPA guidelines for water reuse for irrigation (Adapted from U.S. EPA, 2004). Monitoring for specific pathogens and microconstituents may become a part of the standard testing protocol as the use of reclaimed water for indirect potable reuse applications increases. Pathogens of particular concern include enteric viruses and the protozoan parasites Giardia and Cryptosporidium, whose monitoring is required by the state of Florida for water reuse projects. Microconstituents include organic chemicals, such as pharmaceutically active substances, personal care products, endocrine disrupting compounds, and previously unregulated inorganic elements whose toxicity may be re-assessed or newly evaluated. Fish, amphibians, and birds have been found to develop reproductive system abnormalities upon direct or indirect exposure to a variety of endocrine disrupting compounds. Such microconstituents may have the potential to cause reproduction system abnormalities and immune system malfunctioning in other wildlife and humans at higher concentrations. The impacts of the extremely low concentrations of these compounds found in wastewater effluent or reclaimed water are unknown. To date, there is no evidence that microconstituents cause human health effects at environmentally relevant concentrations. Some possible options for the removal of microconstituents from wastewater are treatment with ozone, hydrogen peroxide, and UV light. These methods can destroy some microconstituents via advanced oxidation, but the endocrine disruption activity of the by-products created during oxidation may also be of concern. No illnesses have been directly associated with the use of properly treated reclaimed water in the U.S. (U.S. EPA, 2004). The U.S. EPA recommends, however, that ongoing research and additional monitoring for Giardia, Cryptosporidium, and microconstituents be conducted to understand changes in reclaimed water quality. State regulations need not agree with U.S. EPA guidelines and are often more stringent. In Virginia, water reuse means direct beneficial reuse, indirect potable reuse, or a controlled use in accordance with the Water Reclamation and Reuse Regulation (9 VAC 25-740-10 et seq.; available at the Virginia Department of Environmental Quality website www.deq.virginia.gov/programs/homepage.html under Water Reuse and Reclamation.) The Virginia Water Regulation and Reuse Regulation establishes legal requirements for the reclamation and treatment of water that is to be reused. These require ments are designed to protect both water quality and public health, while encouraging the use of reclaimed water. The Virginia Department of Environmental Quality, Water Quality Division has oversight over the Virginia Water Reclamation and Reuse Regulation. The primary determinants of how reclaimed water of varying quality can be used are based on treatment processes to which the water has been subjected and on quantitative chemical, physical, and biological standards. Reclaimed water suitable for reuse in Virginia is categorized as either Level 1 or Level 2 (Table 2). The minimum standard requirements for reclaimed water for specific uses are summarized in Table 3. Table 2. Minimum standards for treatment of Level 1 and Level 2 reclaimed water. (Summarized from Virginia Water Reclamation and Reuse Regulations: 9 VAC 25-740-10 et seq.) Table 3. Minimum treatment requirements for irrigation and landscape-related reuse of reclaimed water in Virginia. (Summarized from Virginia Water Reclamation and Reuse Regulations: 9 VAC 25-740-10 et seq.) Water quality must be considered when using reclaimed water for irrigation. The following properties are critical to plant and soil health and environmental quality. Salinity, or salt concentration, is probably the most important consideration in determining whether water is suitable for reuse (U.S. EPA, 2004). Water salinity is the sum of all elemental ions (e.g., sodium, calcium, chloride, boron, sulfate, nitrate) and is usually measured by determining the electrical conductivity (EC, units = dS/m) or total dissolved solids (TDS, units = mg/L) concentration of the water. Water with a TDS concentration of 640 mg/L will typically have an EC of approximately 1 dS/m. Salts in reclaimed water come from: Most reclaimed water from urban areas is slightly saline (TDS ≤ 1280 mg/L or EC ≤ 2 dS/m). High salt concentrations reduce water uptake in plants by lowering the osmotic potential of the soil. For instance, residential use of water adds approx 200-400 mg/L dissolved salts (Lazarova et al., 2004a). Plants differ in their sensitivity to salt levels so the salinity of the particular reclaimed water source should be measured so that appropriate crops and/or application rates can be selected. Most turfgrasses can tolerate water with 200-800 mg/L soluble salts, but salt levels above 2,000 mg/L may be toxic (Harivandi, 2004). For further information on managing turfgrasses when irrigating with saline water, see Carrow and Duncan (1998). Many other crop and landscape plants are more sensitive to high soluble-salt levels than turfgrasses, and should be managed accordingly. See Wu and Dodge (2005) for a list of landscape plants with their relative salt tolerance and Maas (1987) for information on salt-tolerant crops. Specific dissolved ions may also affect irrigation water quality. For example, irrigation water with a high concentration of sodium (Na) ions may cause dispersion of soil aggregates and sealing of soil pores. This is a particular problem in golf course irrigation (Sheikh, 2004) since soil compaction is already a concern due to persistent foot and vehicular traffic. The Sodium Adsorption Ratio (SAR), which measures the ratio of sodium to other ions, is used to evaluate the potential effect of irrigation water on soil structure. For more information on how to assess and interpret SAR levels, please see Harivandi (1999). High levels of sodium can also be directly toxic to plants both through root uptake and by accumulation in plant leaves following sprinkler irrigation. The specific concentration of sodium that is considered to be toxic will vary with plant species and the type of irrigation system. Turfgrasses are generally more tolerant to sodium than most ornamental plant species. Although boron (B) and chlorine (Cl) are necessary at low levels for plant growth, dissolved boron and chloride ions can cause toxicity problems at high concentrations. Specific toxic concentrations will vary depending on plant species and type of irrigation method used. Levels of boron as low as 1 to 2 mg/L in irrigation water can cause leaf burn on ornamental plants, but turfgrasses can often tolerate levels as high as 10 mg/L (Harivandi, 1999). Very salt-sensitive landscape plants such as crape myrtle (Lagerstroemia sp.), azalea (Rhododendron sp.), and Chinese privet (Ligustrum sinense) may be damaged by overhead irrigation with reclaimed water containing chloride levels over 100 mg/L, but most turfgrasses are relatively tolerant to chloride if they are mowed frequently (Harivandi, 1999; Crook, 2005). Reclaimed water typically contains more nitrogen (N) and phosphorus (P) than drinking water. The amounts of N and P provided by the reclaimed water can be calculated as the product of the estimated irrigation volume and the N and P concentration in the water. To prevent N and P leaching into groundwater, the Virginia Water Reclamation and Reuse Regulation requires that a nutrient management plan be written for bulk use of reclaimed water not treated to achieve biological nutrient removal (BNR), which the regulation defines as treatment that achieves an annual average of 8.0 mg/L total N and 1.0 mg/L total P. Water that has been subjected to BNR treatment processes contains such low concentrations of N and P that the reclaimed water can be applied at rates sufficient to supply a crop’s water needs without risk of surface or ground water contamination. The Virginia Water Reclamation and Reuse Regulations require that irrigation with reclaimed water shall be limited to supplemental irrigation. Supplemental irrigation is defined as that amount of water which, in combination with rainfall, meets the water demands of the irrigated vegetation to maximize production or optimize growth. Irrigation rates for reclaimed water are site- and crop-specific, and will depend on the following factors (U.S. EPA, 2004; Lazarova et al., 2004b). 1. First, seasonal irrigation demands must be determined. These can be predicted with:• an evapotranspiration estimate for the particular crop being grown • determination of the period of plant growth • average annual precipitation data • data for soil permeability and water holding capacity Methods for calculating such irrigation requirements can be found in the U.S. Department of Agriculture’s National Engineering Handbook at www.info.usda.gov/CED/ftp/CED/neh-15.htm (USDA-NRCS, 2003) and in Reed et al. (1995). These calculations are more complicated for landscape plantings than for agricultural crops or turf because landscape plantings consist of many different species with different requirements. 2. The properties of the specific reclaimed water to be used, as detailed in the section above, must be taken into account since these may limit the total amount of water that can be applied per season. 3. The availability of the reclaimed water should also be quantified, including:• the total amount available • the time of year when available • availability of water storage facilities for the nongrowing season • delivery rate and type Water reuse is actively promoted by the Florida Department of Environmental Protection since Florida law requires that the use of potable water for irrigation be limited. In 2005, 462 Florida golf courses, covering over 56,000 acres of land, were irrigated with reclaimed water. Reclaimed water was also used to irrigate 201,465 residences, 572 parks, and 251 schools. St. Petersburg is home to one of the largest dual distribution systems in the world. (A dual distribution system is one where pipes carrying reclaimed water are separate from those carrying potable water.) In existence since the 1970s, this network provides reclaimed water to residences, golf courses, parks, schools, and commercial areas for landscape irrigation, and to commercial and industrial customers for cooling and other applications. For more information, see Crook (2005) and Florida Department of Environmental Protection (2006). The town of Cary is the first city in the state of North Carolina to institute a dual distribution system. The system has been in operation since 2001 and can provide up to 1 million gallons of reclaimed water daily for irrigating and cooling. The reclaimed water has undergone advanced treatment and meets North Carolina water quality rules. To date, there are over 400 residential and industrial users. For more information, see www.townofcary.org/depts/pwdept/reclaimhome.htm. The Bayberry Hills Golf Course expansion is one of numerous water reuse projects in Massachusetts. It was initiated in 2001 as an addition to an existing golf course of seven holes irrigated with reclaimed water. These seven holes use approximately 18 million gallons of water per year, and water reuse was necessary since Yarmouth’s water supply was already operating at capacity during summer months. The reused water undergoes secondary treatment followed by ozone treatment, filtration, and UV disinfection. There are provisions for water storage during the nongrowing season. The water reuse project has reduced the nitrogen needed for golf course fertilization. For more information on this and other reuse projects in the state of Massachusetts, see www.mapc.org/regional_planning/MAPC_Water_Reuse_Report_2005.pdf. For further information on irrigation of golf courses with reclaimed water, see United States Golf Association (1994). The Southeast Farm in Tallahassee, Florida, has been irrigating with reclaimed water since 1966. The farm is a cooperative between the city of Tallahassee, which supplies water, and farmers who contract acreage. Until 1980, the farm was limited to 20 acres of land for hay production, but has expanded since then to 2,163 acres. The irrigation water receives secondary treatment. The crops grown are corn (Zea mays L. subsp. Mays), soybeans [Glycine max (L.) Merr], bermudagrass [Cynodon dactylon (L.) Pers], and rye (Secale cereale L.). In recent years, however, elevated nitrate levels have been found in the waters of Wakulla Springs State Park south of Tallahassee, which is one of the largest and deepest freshwater springs in the world. This has apparently resulted in excessive growth of algae and exotic aquatic plant species, causing reduced clarity and changes in the spring’s ecosystem. Dye studies have confirmed that at least a portion of the nitrate comes from the Southeast Farm’s irrigated fields, although studies are on-going. As a result, in June 2006, the city of Tallahassee removed all cattle from Southeast Farm, eliminated regular use of nitrogen fertilizer on the farm, and implemented a comprehensive nutrient management plan for the farm. For more information, see www.talgov.com/you/water/pdf/sefarm.pdf or U.S. EPA (2004). Water Conserv II has been in existence since 1986, and is the first project permitted by the Florida Department of Environmental Protection for crops for human consumption. Over 3,000 acres of citrus groves are irrigated with reclaimed water, in addition to nurseries, residential landscaping, a sand mine, and the Orange County National Golf Center. No problems have resulted from the irrigation. The reclaimed water provides adequate boron and phosphorus and maintains soil at correct pH for citrus growth. The adequate supply of water permits citrus growers to maintain optimum moisture levels for high yields and ample water for freeze protection, which requires more than eight times as much water as normal irrigation. Although Water Conserv II had historically provided reclaimed water to citrus growers for no charge, the project recently began charging for water. It’s unclear if citrus growers will continue to irrigate with reclaimed water, or whether Water Conserv II’s emphasis will change to providing reclaimed water for residential, industrial, and landscape customers. For more information, see www.waterconservii.com/ or U.S. EPA (2004). This publication was reviewed by Adria Bordas, Bobby Clark, Erik Ervin, and Gary Felton. A draft version was reviewed by Bob Angelotti, Marcia Degen, Karen Harr, George Kennedy, Valerie Rourke, and Terry Wagner. Any opinions, conclusions, or recommendations expressed in this publication are those of the authors. www.watereuse.org/: WateReuse Association. “The WateReuse Association is a non-profit organization whose mission is to advance the beneficial and efficient use of water resources through education, sound science, and technology using reclamation, recycling, reuse, and desalination for the benefit of our members, the public, and the environment.” Page contains links to water reuse projects (mostly in the western U.S.), and other useful links. www.cvco.org/science/vwea/navbuttons/Glossary-11-01.pdf: Virginia Water Environment Association’s Virginia Water Reuse Glossary. www.hrsd.com/waterreuse.htm: Hampton Roads (Virginia) Sanitation District water reuse page. Description of industrial water reuse project, research reports, FAQ’s, and glossary of water reuse jargon. www.floridadep.org/water/reuse/index.htm: Florida Department of Environmental Protection water re-use page. Links to many water reuse-related resources on site, including general education/information materials, and Florida-specific links on water reuse policy, regulations, and projects. www.gaepd.org/Files_PDF/techguide/wpb/reuse.pdf: Georgia Department of Natural Resources Environmental Protection Division’s “Guidelines for Water Reclamation and Urban Water Re-Use (2002). www.mass.gov/dep/water/wastewater/wrfaqs.htm: Massachusetts Department of Environmental Protection FAQ on water reuse. www.bcua.org/WPC_VT_WasteWaterReUse.htm: Bergen County (New Jersey) Utilities Authority. Describes reuse of wastewater effluent re-use in cooling towers and for sewer cleaning. www.owasa.org/pages/WaterReuse/questionsandanswers.html: FAQ about Orange Water and SewerAuthority’s (Carrboro, NC) water reuse project for the University of North Carolina at Chapel Hill. Carrow, R.N. and R.R. Duncan. 1998. Salt-affected turfgrass sites: Assessment and management. John Wiley & Sons, Inc., New York, N.Y. Crook, James. 2005. St. Petersburg, Florida, dual water system: A case study. Water conservation, reuse, and recycling: Proceedings of an Iranian-American workshop. The National Academies Press, Washington, D.C. Florida Department of Environmental Protection. 2006. 2005 reuse inventory. FDEP, Tallahassee, FL. Available on-line at www.floridadep.org/water/reuse/inventory.htm. Harivandi, M. Ali. 1999. Interpreting turfgrass irrigation water test results. Publication 8009. University of California Division of Agriculture and Natural Resources, Oakland, Calif. Available on-line at anrcatalog.ucdavis.edu/pdf/8009.pdf. Harivandi, M. Ali. 2004. Evaluating recycled waters for golf course irrigation. U.S. Golf Association Green Section Record 42(6): 25-29. Available on-line at turf.lib.msu.edu/2000s/2004/041125.pdf. Landschoot, Peter. 2007. Irrigation water quality guidelines for turfgrass sites. Department of Crop and Soil Sciences, Cooperative Extension. Penn State University, State College, Pa. Available on-line at turfgrassmanagement.psu.edu/irrigation_water_quality_for_turfgrass_sites.cfm. Lazarova, Valentina and Takashi Asano. 2004. Challenges of sustainable irrigation with recycled water. p. 1-30. In Valentina Lazarova and Bahri (ed.). Water reuse for irrigation: agriculture, landscapes, and turf grass. CRC Press, Boca Raton, Fla. Lazarova, Valentina, Herman Bouwer, and Akica Bahri. 2004a. Water quality considerations. p. 31-60. In Valentina Lazarova and Akica Bahri (ed.). Water reuse for irrigation: agriculture, landscapes, and turf grass. CRC Press, Boca Raton, Fla. Lazarova, Valentina, Ioannis Papadopoulous, and Akica Bahri. 2004b. Code of successful agronomic practices. p. 103-150. In Valentina Lazarova and Akica Bahri (ed.). Water reuse for irrigation: agriculture, landscapes, and turf grass. CRC Press, Boca Raton, Fla. Maas, E.V. 1987. Salt tolerance of plants. p. 57–75. In B.R. Christie (ed.) CRC handbook of plant science in agriculture, Vol. II. CRC Press, Boca Raton, Fla. Metropolitan Area Planning Council. 2005. Once is not enough: A guide to water reuse in Massachusetts. MAPC, Boston, Mass. Available on-line at www.mapc.org/regional_planning/MAPC_Water_Reuse_Report_2005.pdf. Reed, Sherwood C., Ronald W. Crites, and E. Joe Middlebrooks. 1995. Natural systems for waste management and treatment. 2nd edition. McGraw-Hill, Inc. New York, N.Y. Sheikh, Bahman. 2004. Code of practices for landscape and golf course irrigation. In Valentina Lazarova and Akica Bahri (ed.). Water reuse for irrigation: agriculture, landscapes, and turf grass. CRC Press, Boca Raton, Fla. USDA-NRCS. 2003. Irrigation water requirements. Section 15, Chapter 2. p. 2-i-2-284. In Part 623 National Engineering Handbook. U.S. Dept. of Agriculture Natural Resources Conservation Service, Washington, D.C. Available on-line at www.info.usda.gov/CED/ftp/CED/neh-15.htm. U.S. EPA. 2003. National primary drinking water standards. EPA 816-F-03-016. U.S. Environmental Protection Agency, Washington, D.C. U.S. EPA. 2004. Guidelines for water reuse. EPA 645-R-04-108. U.S. Environmental Protection Agency, Washington, D.C. Available on-line at www.epa.gov/ORD/NRMRL/pubs/625r04108/625r04108.pdf. United States Golf Association. 1994. Wastewater reuse for golf course irrigation. Lewis Publishers, Chelsea, Mich. 294 p. VAAWW-VWEA. 2000. A Virginia water reuse glossary. Virginia Section, American Water Works Association and Virginia Water Environment Federation. Available on-line at www.cvco.org/science/vwea/navbuttons/Glossary-11-01.pdf. Wu, Lin, and Linda Dodge. 2005. Landscape plant salt tolerance guide for recycled water irrigation. Slosson Research Endowment for Ornamental Horticulture, Department of Plant Sciences, University of California, Davis, Calif. Available on-line at ucce.ucdavis.edu/files/filelibrary/5505/20091.pdf. Reviewed by Greg Evanylo, Extension Specialist, Crop and Soil Environmental Sciences Virginia Cooperative Extension materials are available for public use, re-print, or citation without further permission, provided the use includes credit to the author and to Virginia Cooperative Extension, Virginia Tech, and Virginia State University. Issued in furtherance of Cooperative Extension work, Virginia Polytechnic Institute and State University, Virginia State University, and the U.S. Department of Agriculture cooperating. Alan L. Grant, Dean, College of Agriculture and Life Sciences; Edwin J. Jones, Director, Virginia Cooperative Extension, Virginia Tech, Blacksburg; Jewel E. Hairston, Administrator, 1890 Extension Program, Virginia State, Petersburg. May 1, 2009
<urn:uuid:bf39c2b1-ba73-4b48-8a84-7796ac1b235e>
CC-MAIN-2013-20
http://www.pubs.ext.vt.edu/452/452-014/452-014.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.901107
5,860
3.890625
4
Galaxy Cluster Abell 520's mass (HST WFPC2) This image shows the location of most of the mass in merging galaxy cluster Abell 520's core, which is dominated by dark matter. Dark matter is an invisible substance that makes up most of the Universe's mass. The dark-matter map was derived from Hubble Wide Field Planetary Camera 2 observations, by detecting how light from distant objects is distorted by the cluster galaxies, an effect called gravitational lensing. Abell 520 resides 2.4 billion light-years away. About the Image About the Object |Type:||• Early Universe : Galaxy : Grouping : Cluster| • Early Universe : Cosmology : Phenomenon : Dark Matter • Cosmology Images/Videos |Distance:||2 billion light years|
<urn:uuid:0c30eadd-dedb-4eae-9ce0-6e931fea204f>
CC-MAIN-2013-20
http://www.spacetelescope.org/images/opo1210g/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.847586
167
3.09375
3
What took Mother Nature thousands of years to create, recent humanity has taken for granted and has selfishly destroyed. In the last 75 years, modern U.S. economic agricultural practices have nearly eradicated all of the naturally occurring organically (carbon containing) complexed trace minerals, poly-electrolytes and metalo-enzymes from our diet. Two-time Nobel prize winner and renowned scientist Linus Pauling categorically stated to the 74th Congress of the U.S. that, "Every ailment, every sickness and every disease can be traced back to an organic trace mineral deficiency." Without a doubt, organic trace minerals are "the gift of life" and cellular function becomes impossible without them. The 74th Congress, 2nd session, of the United States declared that 99 percent of Americans are deficient in 100 percent organically complexed trace minerals. Why? Because our foods no longer contain adequate amounts of critical, essential, and life sustaining organic trace minerals, poly-electrolytes and metalo-enzymes! Dr. Charles Northern in Senate Document 264 indicated that, when an organic soil-based bed is destroyed, plants and crops harvested in that soil lack virtually all of the critical organic trace minerals and more. There is enormous scientific evidence proving organic trace minerals and fulvic acid are both critically necessary to maintain health, promote healing and prevent illnesses and disease. Further, they may be the solution to the world’s health problems and may even be the key to preservation of life on earth for many centuries to come. Lastly, they are proving indispensable to every organ, gland and muscle in the body. Without them, life cannot exist because they are both the stimulus (neuro-electrical catalyst) and the "spark" that single-handedly produces all life functions. What is fulvic acid? Where does it come from and how does it work? Fulvic acid is a humic substance or extract. It is the end product of nature’s humification process, which is the ultimate breakdown and recycling of once-living plant matter. Fulvic acid contains all the phytochemical protective substances, amino acid peptides, nucleic acids, poly-saccharides and muco-polysaccharides from the original living and organic (carbon containing) plant matter. Thus, fulvic acid is highly concentrated, refined, transformed, and enhanced over hundreds of years by the actions of innumerable and microscopic organically complexed plants. This humification process does not break down the original phytochemical protective components and prevents them from turning back into their basic mineral elements and micro-structures. Even the smallest strands of RNA, DNA, and organic plant photosynthetic materials still remain intact. Over time, the original components become organically complexed and enriched with organic and carbonaceous materials. In addition, because fulvic acid is so highly refined and so naturally chelated (i.e., ultra tiny and low molecular weight) by nature itself, it consists of 100 percent organically complexed and ultra tiny molecules which can easily penetrate human tissue and cells. It is highly bio-active on the cellular level, providing innumerable bio-chemical and metabolic detoxification functions. The short term health benefits and long term clinical results are scientifically phenomenal and medically outstanding. Fulvic acid is one of nature’s most precious forms of protection and defense for plants, animals and, possibly, man. Unquestionably, it is tied very closely with immune system functions and has exceptionally powerful antioxidant qualities. What scientific facts do we know about fulvic acid and its vast applications? Because fulvic acid is naturally chelated and organically complexed by nature itself, it has been entirely and perhaps wrongly misunderstood and overlooked by most of medicine and science. We believe nearly every pharmaceutical drug, herbal extract, health supplement and therapeutic substance from nature can, somehow, be traced to the functions and the actual chemical makeup of fulvic acid. We also believe the DNA of every living and extinct species of organism on Earth—be it plant, animal or microbe—has eventually become a component of fulvic acid. The original life-giving, protective, and healing components from plants (phytochemicals) do not disintegrate during nature’s fulvic acid production process; rather, they become highly concentrated. Many species of plants, particularly microscopic plants, seem to be involved in the fulvic acid production process. Fulvic acid production appears to be the end result of nature’s perfect recycling process, and may provide a steady increase in health to subsequent generations of living organisms. Modern agricultural practices appear to have completely broken nature’s recycling process, resulting in progressively deteriorating crops yielding hollow foods and, subsequently, affecting our health. In fact, the use and consumption of a homeostatic balanced amount of fulvic acid on a regular basis could possibly reverse the steady chronic cycle of deteriorating health. Based on medical research, what are the known health benefits of fulvic acid? There are many beneficial therapeutic uses of fulvic acid. Below are findings from some of the latest medical research. 1. Anti-inflammatory agent: Fulvic acid seems to inhibit an enzyme secreted from an infected area, and regulates the level of the trace elements zinc and copper, activating a super-oxide called dismutates. Free radicals generated in the infected area are dismutated, utilized, and eliminated by this agent. 2. Stimulates blood circulation and enhances blood coagulation: Many diseases are caused by circulation malfunction in the capillary blood system. A therapeutic effect of fulvic acid seems to be is its ability to restore and improve blood circulation in the capillary system. Fulvic acid also appears to serve as a blood coagulant when there is bleeding or blood seeping from the vascular bed. 3. Digestive tract ulcers: Another healing effect of fulvic acid is its ability to stimulate blood circulation in the stomach wall and inhibit excessive secretion of acid. It also seems to stimulate the secretion of the glands in the stomach that have the ability to protect the stomach inner wall, thereby potentially preventing and healing stomach ulcers. 4. Immunology: There are indications that, with injection of fulvic acid into the abdominal region, the size of thymus in experimental animals increased, together with indications of macrophage activation. A dosage of 5 mg/kg of fulvic acid when injected into the abdominal cavity appears to be beneficial. 5. Endocrinology: Fulvic acid appears to regulate abnormal thyroid hormone secretion because it is able to regulate cyclic nucleotides at the cellular level. 6. Anti-cancer: In general, fulvic acid does not seem to kill cancer cells directly. However, it serves as a regulating agent in the immune system and can be used therapeutically in conjunction with other anti-cancer medicines. Further research may show that humic acids can also be used to resuscitate some of our soils, and possibly our food sources. Until this can be accomplished, good quality nutritional supplements containing fulvic acid remain our best defense against food devoid of life-sustaining organically complexed minerals and nutrients. Dr. Drucker has a Master’s of Science in Natural Health and a Doctorate in Naturopathy. He is a highly respected doctor in the field of natural health and the CEO of Drucker Labs, which manufactures and distributes health, wellness and nutritional products. These products use a breakthrough technology called intraCELL™ V, which yields unique carbon-bond organic microcomplexed structures that are highly bio-available and extremely effective.
<urn:uuid:21dc661e-839c-4fe4-879c-7c4eb2dba94e>
CC-MAIN-2013-20
http://www.theamericanchiropractor.com/articles-nutrition/4752-the-gift-of-naturally-prolonged-healthy-and-sustained-life.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.929932
1,555
3
3
SAN PEDRO, Calif. If the tide is high, the weather is warm, the clock is approaching midnight and the beach you're standing on is in Southern California, it's a given that romance is in the air – or the water. In these parts, it's a time for grunion love. The California grunion does something no other fish on the planet is known to do. It surfs a wave right out of its world and into ours. Then it plops itself down on the sand to lay and fertilize its eggs before waiting patiently for another big wave to carry it home. Sometimes, before it hitches a ride back to the ocean, someone like 13-year-old Judy Feng will catch it – or at least try to (they're slippery). "At first I was trying to get it, but it was all slippery and I dropped it," a delighted Feng said during a recent midnight run at Cabrillo Beach in San Pedro, where more than 2,000 people lined the shore for a good half-mile in search of the legendary but elusive aquatic wonder, known scientifically as leuresthes tenuis. "I was going to let it go, but then my friend said, 'It's at my feet, don't let it get away!' And I grabbed it again," she squealed, still bouncing up and down with excitement as she held her catch in a water-filled plastic bag. Feng had officially become a grunion hunter, a distinction as uniquely Californian as the fish she was now holding at eye level. The California grunion is found nowhere else but along a thousand-mile stretch of coastline extending from central California to the southern half of Baja California, Mexico. It's along that region's scores of flat, sandy beaches that people line up every spring and summer by the tens of thousands to watch the female grunion burrow tail first into the sand as the males wrap themselves around it. "When they flop up against your feet, it sounds sort of like this," says veteran grunion watcher Mimi DiMatteo, popping her cheeks with her fingers and making a noise that sounds something like "whuppa, whuppa, whuppa." Although many come just to watch the grunion spawn, in these hard economic times some people are trekking to the beach to bring home fish to eat. Veteran grunion watcher Chris Lindeman says he counted more than 200 people on a recent night who said that was their plan. Like DiMatteo, Lindeman is a "Grunion Greeter," one of some 550 volunteers who go out and count the fish during their runs from March through August to help marine biologists make sure their numbers remain strong, and that people aren't taking them without fishing licenses or breaking state law by using anything other than their hands to catch them. For those who plan on taking the fish home to eat, veteran grunion hunter Matt Christopherson warns that one taste of the skinny, crunchy little silver fish is often enough to last a lifetime. "If you want to eat fish, go to the store and get a salmon steak – it's so much better" says Christopherson, who puts on grunion lectures from time to time at the Cabrillo Marine Aquarium on nights when runs take place on the beach outside.
<urn:uuid:9ae4820f-a86f-4722-ad1f-7b66b7bd3437>
CC-MAIN-2013-20
http://www.utsandiego.com/news/2009/aug/20/us-watching-fish-run-082009/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.97424
693
2.53125
3
Basindra Village, Ratlam District, Madhya Pradesh, is watered by the perennial Jhamand River. The river flows about 100 ft below the village and was once its only source of water. The 2-3 handpumps installed here in the 1970’s by the Public Health and Engineering Department (PHED) used to run dry in the summer. When the river would shrink in the summer, people would dig holes in the riverbed to procure water for their daily needs. Today the village has 13 handpumps and 2 tubewells. Groundwater therefore, is heavily relied upon for meeting daily water needs. This however, is not a daunting prospect for this village at least in the current scenario, for recharge measure have been taken in the form of an earthen dam, a solid weir and a check dam, all built on the Jhamand. “These structures don’t just recharge groundwater in this area, but also ensure that the river never dries up” informs Bhandari, a PHED engineer. PHED believes that the river has been made perennial by the dam even as they see that the river has actually shrunk because of it. The Dholavar earthen dam is over a kilometre long and manages to amass a large amount of water in the reservoir it creates. The dam has a live storage capacity of about 50 million cubic meters, submerging 600 ha of land. The riverbed on the other side of the dam is totally dry for a distance, till it is revived again by groundwater accrual. The multi-purpose dam has been supplying water to Ratlam city since 1984 (about 5 mld) and canal irrigation to neighbouring villages. While currently seeming like a solution, one wonders what the adverse effects of this structure could be in the long-run, on the riverine ecology. The impact canal irrigation and all-year-round agriculture is having on the soil is already visible, “we never needed artificial fertilisers before, now they are a necessity, the soil seems tired” observed Juvan Singh, an elderly citizen of the village. Other than impacts in the immediate environment, dams of this size typically have severe downstream impacts as well. Basindra has piped water supply today, brought to the village from 2 tubewells. The dugwell this village possessed, has typically fallen out of use, with the introduction of handpumps and tubewells. With an electricity-run motor now providing water in people’s houses, consumption has obviously increased. This, they say is not a problem as the reservoir has enough water. Since only about 35 families have opted for individual tap connections, the panchayat is not able to collect enough funds for the operation and maintenance of the scheme. In fact, the funds collected through community contribution (Rs.30/month/family) are not enough even to pay the electricity charges of the motor. “With the river and the handpumps close by, people don’t feel the need to spend money on piped water supply” says Anankuar, an elderly woman of the village. “Due to constant power cuts, we never have water in our taps anyway” she mournfully adds. Rowty village, close by, is plagued with similar problems. Although electricity problems persist, here 80% of the people have individual tap connections. The only ones who don’t are the tribals, typically living in hamlets in the outskirts of the village, where pipes don’t reach. The charges for this facility are Rs.40/month/family, with an initial contribution of Rs.500. Till the 1980’s Rowty was sufficiently watered by 3 dugwells, 1 baori (stepwell) and a seasonal stream. Gradually with climate change, deforestation etc, rainfall began to reduce, but the population pressure kept mounting. “We used to have a good monsoon every year back in the day, the dugwell and baori used to last us the entire year” reminisced a group of villagers. “In 1978 PHED started a pipeline system from the baori, connected to public standposts, but in 1984-85 water scarcity became acute, that’s when we built a stop-dam on the seasonal stream and an overhead tank along with individual tap connections” explained Bhandari. While these measures took care of the problem temporarily, in the 1990’s scarcity rose its ugly head once again as the sources kept drying up. The PHED then decided to build dykes and check dams in the watershed of the village, for recharging groundwater. Tubewells were also drilled, but the water they yielded suffered from bad quality. The handpumps wouldn’t just have bad quality water, but also run dry in the lean season. It was in the late 1990’s when the panchayat demanded water from the Jhamand, through long distance pipes. And only in 2006 was this scheme launched. It is managed by the panchayat, which apart from community contributions, also uses its own funds from other sources to run the system. With the introduction of tap water, the baori and dugwells have been rendered useless and fallen into a dilapidated condition. While the PHED boasts of these two villages as success stories, it is essential to identify the loopholes. It is commendable that piped water supply has been taken seriously here, as opposed to most other villages, where water isn’t available even in public standposts and handpumps. However, the engineers themselves acknowledge that they do not factor in the electricity problem and therefore end up designing unrealistic schemes which are successful only on paper. They assert that designing schemes with lower consumption of electricity is possible, but they never consider it. Technically therefore, they have 24 x 7 water supply, but the ground reality speaks a different language. The other point to ponder over is the environmental short-sightedness displayed by their schemes. PHED needs a drastic shift in its approach while designing drinking water schemes. The focus continues to remain on groundwater extraction by way of tubewells, ignoring dugwells and throwing traditional systems into disuse. The sustainability of the source or of the system is almost never considered. Even though they have now begun to make recharge structures to insure against falling water tables, simpler and less energy-intensive techniques are not considered. Many a times, their recharge structures involve dams, solid-weirs etc, which are ecologically myopic in nature and prove to be disastrous for the environment in the long-run. It is an imperative that the PHED tries to revive traditional and indigenous wisdom that is culture and geography specific, and apply technology to that, so as to make it relevant to modern times. Moreover, PHED has not bothered to involve the community in either of these cases. Villagers are hardly ever consulted before designing a scheme. Their opinions or needs simply don’t matter. If at all any interaction takes place between the village community and the PHED prior to the installation of a water supply scheme, it is one-sided, in the form of information-education-communication (IEC). In these two villages, no IEC activity was undertaken, neither were the locals consulted at any stage of planning or implementation. For this reason, the Village Water and Sanitation Committee (VWSC) continues to lie defunct and it is the panchayat that runs the show. In the former example of Basindra, the scheme was not demand-driven and is running into losses. “PHED schemes are not demand driven but politics driven” the engineers themselves claim. “Where water will be supplied and where it wont is not a matter of need at all, it is based on politics between panchayats, the PHED and MLAs” they discuss amongst each other. When villagers refuse to pay for the installed scheme, PHED engineers lash out at them with hostility, failing to draw a connection between their own observation that they don’t need it and their refusal to pay. The point that squarely drives home is the compelling need for structural and systemic change within the governmental edifice, and more specifically, within the PHED. Under the new guidelines for provision of drinking water in villages, issued by the Department of Drinking Water and Sanitation (DDWS), PHED engineers are expected to involve communities in their schemes right from the planning stage. While very much in line with the participatory governance rhetoric, this idea has few takers as it is designed by those sitting in Delhi, far removed from ground-realities. Policies such as these are issued in a top-down manner by central departments, much in contradiction to what they ask of the engineers at the village level. Community participation is a bottom-up process that involves consistent investment of time and effort by the engineers. It is not a one-day event or a one-visit job. Involving villagers in the planning, implementation and operation processes of a drinking water scheme entails gaining ground within all sections of the community, winning their trust, dealing with caste and gender issues, local politics and becoming aware of all the minute details of the problems they face. Mobilising the people and sustaining their confidence is a long drawn process that does not terminate once the scheme has been installed, for that is just the beginning, and running the scheme successfully henceforth requires much cooperation from the villagers. However, this is not something the PHED engineers feel they are equipped to do. “We are expected to do IEC activities among other aspects of community participation, but neither do we have the skill nor the time for such activities” they claim. “I have over 600 villages under me, how can I undertake a two-year process of community mobilisation for each one of them?” exclaims a sub-engineer from Ratlam district. Another sub-engineer threw light on the loopholes in their planning process, “we are asked to design schemes overnight, how can we ensure people’s involvement in this manner? Our schemes therefore do not factor in local issues – geographical or social, and are unrealistic, thereby causing their own failure.” What these statements elucidate is a desperate need to bring in structural changes so as to enable PHED to respond to these local problems, which the central department often turns a blind eye to. It is clear that many of the engineers are well aware of their limitations and weaknesses, one of them being their inability to involve communities for the reasons stated above. Lack of skill, time and manpower, as well as bureaucratic procedures are only some the barriers. The need of the hour isn’t just to convey the importance of bottom-up planning and participatory governance, but to bring about systemic changes within government bodies to make them more responsive towards current realities. Government bodies, whether the PHED, the municipalities, the development authorities or any other, cannot consist solely of engineers, but will have to have a wing of social scientists who have the skill that engineers lack to work at the village level, with the people. Whether socially blind or ecologically short-sighted, the PHED interventions require a massive shift from being top-down, technocratic schemes to demand-driven, sustainable and people-managed system. What is needed is not a just a scheme but an entire system.
<urn:uuid:3276f3bc-879a-469b-91eb-5c403333123c>
CC-MAIN-2013-20
http://cseindia.org/node/4016
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966815
2,351
3.1875
3
Teenage vegetarians may be at greater risk of eating disorders and suicide than their meat-eating peers, according to US researchers. A study from the University of Minnesota found that adolescent vegetarians were more weight- and body-conscious, and more likely to have been diagnosed with an eating disorder, and to have tried a variety of healthy and unhealthy weight-control practices such as diet pills, laxatives and vomiting. They were also more likely than their peers to have contemplated or attempted suicide. The findings also indicated that adolescents were more likely than adults to be vegetarians for weight-control rather than for health or moral reasons. Although the authors acknowledge that a vegetarian diet can be more healthy than one that contains red meat, they also note that, in some teens, being a vegetarian may be taken as a red flag for eating and other disorders related to self-image (J Adolesc Health, 2001; 29: 406-16).
<urn:uuid:d8c1dff9-c01c-4814-a608-8cf3019b4811>
CC-MAIN-2013-20
http://www.healthy.net/Health/Article/Vegetarian_diet_may_mask_eating_disorder_in_teens/7656
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.985852
190
2.578125
3
Products marketed for infants or billed as "microwave safe" release toxic doses of the chemical bisphenol A when heated, an analysis by the Journal Sentinel has found. The newspaper had the containers of 10 items tested in a lab - products that were heated in a microwave or conventional oven. Bisphenol A, or BPA, was found to be leaching from all of them. The amounts detected were at levels that scientists have found cause neurological and developmental damage in laboratory animals. The problems include genital defects, behavioral changes and abnormal development of mammary glands. The changes to the mammary glands were identical to those observed in women at higher risk for breast cancer. The newspaper's test results raise new questions about the chemical and the safety of an entire inventory of plastic products labeled as "microwave safe." BPA is a key ingredient in common household plastics, including baby bottles and storage containers. It has been found in 93% of Americans tested. The newspaper tests also revealed that BPA, commonly thought to be found only in hard, clear plastic and in the lining of metal food cans, is present in frozen food trays, microwaveable soup containers and plastic baby food packaging. Food companies advise parents worried about BPA to avoid microwaving food in plastic containers, especially those with the recycling No. 7 stamped on the bottom. But the Journal Sentinel's testing found BPA leaching from containers with different recycling numbers, including Nos. 1, 2 and 5. "There is no such thing as safe microwaveable plastic," said Frederick vom Saal, a University of Missouri researcher who oversaw the newspaper's testing. The American Chemistry Council disputed the findings, saying publishing the results amounts to a "serious disservice by drawing a conclusion about product safety that simply cannot be drawn from either this study or the overall body of scientific research." Food company officials say the doses detected in the tests are so low that they are insignificant to human health. "These levels are EXTREMELY low," wrote John Faulkner, director of brand communications for Campbell Soup Co. Tests of the company's Just Heat & Enjoy tomato soup showed its container leached some of the lowest levels of BPA found. "In fact, you might just be able to find similar levels in plain old tap water due to 'background' levels. We are talking 40 to 60 parts per trillion (ppt). What is 40 to 60 ppt? 40 to 60 seconds in 32,000 years!" But the Journal Sentinel identified several peer-reviewed studies that found harm to animals at levels similar to those detected in the newspaper's tests - in some cases, as low as 25 parts per trillion. Scientists with an expertise in BPA say the findings are cause for concern, especially considering how vulnerable a baby's development is and how even tiny amounts of BPA can trigger cell damage. Harm done during this critical window of development is irreparable and can be devastating, they say. "This is stuff that shouldn't be in our babies' and infants' bodies," said Patricia Hunt, a professor at Washington State University who pioneered studies linking BPA to cancer. Scientists say BPA and other chemicals that disrupt the endocrine system do not act like other toxins that become more potent as their doses increase. BPA behaves like a hormone. It mimics estrogen with effects that are ultra-potent. Even tiny amounts can trigger cell change. Nira Ben-Jonathan, a professor at the University of Cincinnati whose studies found that BPA interferes with chemotherapy, said the chemical's effects might not be immediately obvious, but can be devastating over time. "They used to say DDT was safe, too," Ben-Jonathan said. The Journal Sentinel's tests were done to determine the prevalence of BPA in a typical modern diet for babies and small children. Based on the test results, the newspaper then estimated the amount of BPA a child might consume and compared it with low-dose amounts of BPA used by researchers in animal studies. In what is believed to be the first analysis of its kind by a newspaper, the Journal Sentinel found that an average 1-month-old girl is exposed to the same amount of BPA that caused mammary gland changes in mice. Those same changes in humans can lead to breast cancer. The label "microwave safe" is stamped on thousands of products sold across the country. But that is not an official designation regulated by the government. Companies are able to place it on their products without any official testing by the Food and Drug Administration. BPA makes its way into food from plastic packaging when those containers are heated. In the Journal Sentinel's tests, the highest amounts of leaching were found in two items: a can of Enfamil liquid infant formula and a Rubbermaid plastic food-storage container. The lowest levels, trace amounts, were found to be leaching from disposable frozen-food containers. Hunt, the Washington State University scientist, called the levels found leaching from the plastic food-storage containers "real doozies." It is likely that the newspaper's tests underestimated the amounts of BPA that normally would be leaching from reusable products, BPA experts say. All products the newspaper had tested were new. Studies show that as products age and are repeatedly heated and washed, they are more likely to leach higher amounts of BPA. "You can't see this happening," vom Saal said. "You can't taste it, you can't smell it, but you are getting dosed at a higher and higher amount." Also, testers did not examine the food in those containers for BPA levels. They replaced food with a mixture of water and alcohol, a standard laboratory practice that makes measuring easier and more accurate. But that also eliminates other variables that are in the food, such as fats and acids that are more likely to encourage BPA to leach. BPA's effects also can be magnified by other chemicals in the plastic. This has been proved in one experiment after another, said vom Saal, who has become a vocal critic of the chemical industry. While BPA is potentially dangerous to all humans, scientists are especially concerned about how the chemical affects fetuses and newborns, whose systems are not developed. Babies up to age 12 months or so can't metabolize BPA as efficiently as adults. But no one is more exposed to BPA than a newborn. A newborn's small size means that he or she gets a more concentrated dose of the chemical. Many products that contain BPA - such as baby bottles, infant formula, some pacifiers and toys - are marketed for mothers and newborns. Exposure for babies can be exaggerated by the fact that many have diets exclusively made up of liquid baby formula from cans lined with BPA. Babies who drink liquid formula from bottles made with BPA are effectively getting a super-dose of the chemical, said Hunt, the Washington State University scientist. The U.S. surgeon general has advised that breast milk is the healthiest food for newborns, though BPA has been found in breast milk, too. Less than one-third of babies are breastfed until they are 3 months old, and just one in 10 is exclusively breastfed to 6 months, a 2004 study by the U.S. Centers for Disease Control and Prevention found. Gail McCarver, a physician at the Medical College of Wisconsin who led the National Toxicology Program's investigation of BPA earlier this year, declined to be interviewed for this article. But McCarver said at an FDA hearing in September that she is particularly concerned about premature babies who are exposed to plastic tubing in hospitals. The government should be protecting the smallest, most vulnerable baby, not just the average child, she said. Four million babies are born in the United States each year, and roughly 500,000 are born prematurely. Christina Deppoleto, 36, of Hartland says she does her best to protect her 18-month-old son, Carson. Deppoleto, interviewed recently at the Milwaukee County Zoo, said she was troubled to hear about the newspaper's test results - especially findings that showed BPA to be leaching from "microwave safe" containers. "I try to be a good consumer and a good parent," she said. "But you have to be able to trust the labels." Reviewing scientific studies The newspaper examined all the published literature on BPA spanning two decades. A total of 21 studies have looked at effects on mammals at doses that were similar to the amounts found leaching from the products. All but four concluded that BPA caused damage to animals. In one 2006 study, pregnant mice were exposed to BPA from the eighth day of pregnancy to the 16th, a period critical for the development of neurons that regulate sexual behavior. Scientists found the female offspring had fewer such neurons than usual. Their activity levels dropped and mirrored that of their brothers. In another experiment, newborn mice were fed BPA at doses common in human diets. They were found to have changes in the patterns of their mammary glands at the time of puberty. They had more ducts and duct extensions, more developed fat areas and additional cell changes associated with a more mature gland. The consequences of this early alteration in breast tissue development are likely to increase vulnerability to breast cancer later in life, the scientists found. Animals tested were fed BPA through pumps under the skin that regularly administered the chemical. Some critics say that method exaggerates the chemical's effects. But others say it is an acceptable method because newborns are constantly feeding. Scientists also add that the Journal Sentinel analysis of how much BPA a baby might ingest is just a small window into a child's typical day of exposure. Studies have shown the chemical can be absorbed through the skin. And babies also put items other than food in their mouths, including pacifiers and toys that might contain the chemical. The findings have disturbed and angered parents and consumer advocates who say the government needs to do a better job of protecting people from potentially harmful chemicals. "The safety of this compound is in major question, and our government is not taking steps to address this," said Urvashi Rangan, senior analyst for Consumers Union, a watchdog group that regularly tests products. "Consumers shouldn't have to be the guinea pigs here." Canada has declared BPA a toxin and is moving to ban it from baby bottles, infant formula and other children's products. But U.S. regulators have been conflicted. The National Toxicology Program has expressed concern about the chemical for fetuses, newborns and young children. But the FDA has declared it to be safe. That assessment, however, was found to be flawed, and the FDA since has reopened its examination. The conflict has further heightened consumer anxiety about how much BPA, if any, is safe. Bradley Kirschner, a pediatrician at Children's Hospital of Wisconsin and the father of three young girls, said his patients are increasingly concerned about the chemical. "If an entire country is banning it, that makes it hard to ignore," he said. Parents are confused, he said. And he is not certain how to advise them. "If you ask, 'Should a baby sleep on his back?' I can tell you what to do," he said. "But this is muddy." Kirschner said he would like a more definitive answer from U.S. regulators about whether BPA is safe. Increasingly, consumer groups are calling for BPA to be banned. Last month, the consumer watchdog Environmental Working Group sent letters to infant formula makers, asking them to stop packaging their products in containers made with BPA. The attorneys general in New Jersey, Connecticut and Delaware sent letters to 11 companies that make baby bottles and baby formula containers, asking that they voluntarily stop using BPA. Six U.S. senators have called for a federal ban on the chemical, and more than 35 lawsuits have been filed in recent years against companies using BPA, claiming the chemical has caused physical harm. Companies are beginning to proactively back away from BPA. In April, after Canada's announcement of a ban, several corporations said they would stop producing and selling certain products made with BPA. The companies and retailers include Nalgene, Wal-Mart, Toys "R" Us, Playtex and CVS pharmacies. But plenty of products designed for heating food still contain BPA. Many companies that use BPA now include safety information about the chemical on their Web sites. But those sites maintain that BPA is safe at low doses. Their claims are based largely on studies that were paid for by the chemical industry. Jackie Chesney, a grandmother from Spring Grove, Ill., said she assumes a certain level of safety in products that are allowed to be sold. "You should think the things you are using would be safe," Chesney said as she strolled through the Milwaukee County Zoo with her daughter and grandchildren. Chesney said things have changed a lot since her children were small. There is so much more plastic these days, and food is more likely to be individually wrapped, she said. The proliferation of plastic worries her. "Yes, it's handy and convenient," she said. "But at what cost?"
<urn:uuid:4dc6877a-3fa8-4e77-baef-50ce82f6d979>
CC-MAIN-2013-20
http://www.jsonline.com/watchdog/watchdogreports/34532034.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.970414
2,755
2.84375
3
- About Us - Global Vision - News and Events - Ways to Help Genetic Causes of Mental Retardation What is genetics? Genetics is "the science that studies the principles and mechanics of heredity, or the means by which traits are passed from parents to offspring" (Glanze, 1996). Through genetics a number of specific disorders have been identified as being genetically caused. One example is fragile X syndrome, a common genetic cause of mental retardation, which is caused by the presence of a single non-working gene (called the FMR-1 gene) on a child's X chromosome. Genetics originated in the mid-19th century when Gregor Mendel discovered over a ten year period of experimenting with pea plants that certain traits are inherited. His discoveries provided the foundation for the science of genetics. Mendel's findings continue to spur the work and hopes of scientists to uncover the mystery behind how our genes work and what they can reveal to us about the possibility of having certain diseases and conditions. The scientific field of genetics can help families affected by genetic disorders to have a better understanding about heredity, what causes various genetic disorders to occur, and what possible prevention strategies can be used to decrease the incidence of genetic disorders. Can a person's genes cause mental retardation? Some genetic disorders are associated with mental retardation, chronic health problems and developmental delay. Because of the complexity of the human body, there are no easy answers to the question of what causes mental retardation. Mental retardation is attributable to any condition that impairs development of the brain before birth, during birth or in the childhood years (The Arc, 1993). As many as 50 percent of people with mental retardation have been found to possess more than one causal factor (AAMR, 1992). Some research has determined that in 75 percent of children with mild mental retardation the cause is unknown (Kozma & Stock, 1993). The field of genetics has important implications for people with mental retardation. Over 350 inborn errors of metabolism have been identified, most of which lead to mental retardation (Scriver, 1995). Yet, the possibility of being born with mental retardation or developing the condition later in life can be caused by multiple factors unrelated to our genetic make-up. It is caused not only by the genotype (or genetic make-up) of the individual, but also by the possible influences of environmental factors. Those factors can range from drug use or nutritional deficiencies to poverty and cultural deprivation. How often is mental retardation inherited? Since the brain is such a complex organ, there are a number of genes involved in its development. Consequently, there are a number of genetic causes of mental retardation. Most identifiable causes of severe mental retardation (defined as an IQ of 50 or less) originate from genetic disorders. Up to 60 percent of severe mental retardation can be attributed to genetic causes making it the most common cause in cases of severe mental retardation (Moser, 1995). People with mild mental retardation (defined as an IQ between 50 and 70-75) are not as likely to inherit mental retardation due to their genetic make-up as are people with severe mental retardation. People with mild mental retardation are more likely to have the condition due to environmental factors, such as nutritional state, personal health habits, socioeconomic level, access to health care and exposure to pollutants and chemicals, rather than acquiring the condition genetically (Nelson-Anderson & Waters, 1995). Two of the most common genetically transmitted forms of mental retardation include Down syndrome (a chromosomal disorder) and fragile X syndrome (a single-gene disorder). What causes genetic disorders? Over 7,000 genetic disorders have been identified and catalogued, with up to five new disorders being discovered every year (McKusick, 1994). Genetic disorders are typically broken down into three types: Chromosomal, single-gene and multifactorial. Chromosomal disorders affect approximately 7 out of every 1,000 infants. The disorder results when a person has too many or too few chromosomes, or when there is a change in the structure of a chromosome. Half of all first-trimester miscarriages or spontaneous abortions occur as a result of a chromosome abnormality. If the child is born, he or she usually has multiple birth defects and mental retardation. Most chromosomal disorders happen sporadically. They are not necessarily inherited (even though they are considered to be genetic disorders). In order for a genetic condition to be inherited, the disease-causing gene must be present within one of the parent's genetic code. In most chromosomal disorders, each of the parent's genes are normal. However, during cell division an error in separation, recombination or distribution of chromosomes occurs. Examples of chromosomal disorders include Down syndrome, Trisomy 13, Trisomy 18 and Cri du chat. Single-gene disorders (sometimes called inborn errors of metabolism or Mendelian disorders) are caused by non-working genes. Disorders of metabolism occur when cells are unable to produce proteins or enzymes needed to change certain chemicals into others, or to carry substances from one place to another. The cell's inability to carry out these vital internal functions often results in mental retardation. Approximately 1 in 5,000 children are born with defective enzymes resulting in inborn errors of metabolism (Batshaw, 1992). Although many conditions are generally referred to as "genetic disorders," single-gene disorders are the most easy to identify as true genetic disorders since they are caused by a mutation (or a change) within a single gene or gene pair. Combinations of multiple gene and environmental factors leading to mental retardation are called multifactorial disorders. They are inherited but do not share the same inheritance patterns typically found in single-gene disorders. It is unclear exactly why they occur. Their inheritance patterns are usually much more complex than those of single gene disorders because their existence depends on the simultaneous presence of heredity and environmental factors. For example, weight and intelligence are traits inherited in this way (Batshaw, 1992). Other common disorders, including cancer and hypertension, are examples of health problems caused by the environment and heredity. Multifactorial disorders are very common and cause a majority of birth defects. Examples of multifactorial disorders include heart disease, diabetes, spina bifida, anencephaly, cleft lip and cleft palate, clubfoot and congenital heart defects. How are genetic disorders inherited? Genetic disorders can be inherited in much the same way a person can inherit other characteristics such as eye and hair color, height and intelligence. Children inherit genetic or hereditary information by obtaining genes from each parent. There are three common types or modes of inheritance: dominant, recessive and X-linked (or sex-linked). Dominant inheritance occurs when one parent has a dominant, disease-causing gene which causes abnormalities even if coupled with a healthy gene from the other parent. Dominant inheritance means that each child has a 50 percent chance of inheriting the disease-causing gene. An example of dominant inheritance associated with mental retardation is tuberous sclerosis. Recessive inheritance occurs when both parents carry a disease-causing gene but outwardly show no signs of disease. Parents of children with recessive conditions are called "carriers" since each parent carries one copy of a disease gene. They show no symptoms of having a disease gene and remain unaware of having the gene until having an affected child. When parents who are carriers give birth, each child has a 25 percent chance of inheriting both disease genes and being affected. Each child also has a 25 percent chance of inheriting two healthy genes and not being affected, and a 50 percent chance of being a carrier of the disorder, like their parents. Examples of disorders which are inherited recessively and are also associated with mental retardation include phenylketonuria (PKU) and galactosemia. X-linked or sex-linked inheritance affects those genes located on the X chromosome and can be either X-linked recessive or X-linked dominant. The X-linked recessive disorder, which is much more common compared to X-linked dominant inheritance, is referred to as a sex-linked disorder since it involves genes located on the X chromosome. It occurs when an unaffected mother carries a disease-causing gene on at least one of her X chromosomes. Since females have two X chromosomes, they are usually unaffected carriers because the X chromosome that does not have the disease-causing gene compensates for the X chromosome that does. Therefore, they are less likely than males to show any symptoms of the disorder unless both X chromosomes have the disease-causing gene. If a mother has a female child, the child has a 50 percent chance to inherit the disease gene and be a carrier and pass the disease gene on to her sons (March of Dimes, 1995). On the other hand, if a mother has a male child, he has a 50 percent chance of inheriting the disease-causing gene since he has only one X chromosome. Consequently, males cannot be carriers of X-linked recessive disorders. If a male inherits an X-linked recessive disorder, he is affected. Some examples of X-linked inheritance associated with mental retardation include fragile X syndrome, Hunter syndrome, Lesch Nyhan syndrome and Duchenne muscular dystrophy. Can genetic disorders which cause mental retardation be fixed? In the past, only a few genetic disorders could be detected and treated early enough to prevent disease. However, the Human Genome Project, an international project among scientists to identify all the 60,000 to 100,000 genes within the human body, is significantly increasing our ability to discover more effective therapies and prevent inherited disease (National Center for Human Genome Research, 1995). As more disease-causing genes are identified, scientists can begin developing genetic therapies to alter or replace a defective gene. However, the development of gene therapies is still in the infancy stage. Gene therapy (also called somatic-cell gene therapy) is a procedure in which "healthy genes" are inserted into individuals to cure or treat an inherited disease or illness. Although there is a role for gene therapy in the prevention of mental retardation, it will most likely benefit only those people who have single-gene disorders, such as Lesch-Nyhan disease, Gaucher disease and phenylketonuria (PKU) that cause severe mental retardation (Moser, 1995). Gene therapy is far less likely to provide treatment of mild mental retardation which accounts for 87 percent of all cases of mental retardation (The Arc, 1993). - AAMR (1992). Mental retardation: Definition, classification, and systems of supports, 9th edition. - Batshaw, M.L. & Perret, Y.M. (1992). Children with disabilities: A medical primer (3rd ed.). Baltimore: Paul H. Brookes Publishing Co. - Glanze, W. (Ed.). (1996). The signet Mosby medical encyclopedia (revised edition). New York: Penguin Books Ltd. - Kozma, C. & Stock, J. (1992). "What is mental retardation." In Smith, R.S. Children with Mental Retardation: A Parent's Guide. Maryland: Woodbine House. - March of Dimes (1995). Birth defects. (Publication No. 09-026-00). White Plains, New York: Author. - McKusick, V.A. (1994). Mendelian Inheritance in Man. Catalogs of Human Genes and Genetic Disorders. (Eleventh edition). Baltimore: Johns Hopkins University Press. - Moser, H. G. (1995) A role for gene therapy in mental retardation. Mental Retardation and Developmental Disabilities Research Reviews: Gene Therapy, 1, 4-6. - National Center for Human Genome Research, National Institutes of Health. (1995). The Human Genome Project: From Maps to Medicine (NIH Publication No. 95-3897). Bethesda, MD. - Scriver, C. R. (1995). The metabolic and molecular bases of inherited disease. (Seventh edition). New York: McGraw-Hill.
<urn:uuid:8869a816-904c-4616-bbd6-b5e0b2e48be7>
CC-MAIN-2013-20
http://www.keystonehumanservices.org/genetic-causes-of-mental-retardation.php
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931985
2,543
3.640625
4
John F. Kennedy said in both his inaugural address and at the time the stamp was issued, "Together let us explore the stars, conquer the deserts, and eradicate disease." Kennedy's invitation and challenge reflected the significance the United States and other nations, eighty of which issued similar stamps, attached to the objective of "A World United Against Malaria." Its name derived from the Italian for "bad air," malaria has cursed human history for more than 4,000 years. Civilizations had struggled to control it long before 1632, when quinine (cinchona bark) was found to be the first successful treatment. Malaria has been virtually eradicated in most of North America and Europe thanks to the use of insecticides and environmental management, the very things which have hampered similar efforts in Africa, Asia, Latin and South America. The 4-cent commemorative Malaria Eradication Stamp was issued March 30, 1962, in Washington, DC. The design depicts the Great Seal of the US and an adaptation of the World Health Organization (WHO) emblem. Charles R. Chickering designed the blue and ocher stamp for the Bureau of Engraving and Printing. All lettering is in sans-serif type, L-type perforations. The stamp measures 0.84 x 1.44 inches in dimension, arranged horizontally, printed on the Giori presses, issued in panes of fifty with an initial printing of 100 million. World Health Organization. A Global Strategy for Malaria Control. Geneva: World Health Organization, 1993. Malaria. (2008). In Encyclopedia Britannica. Retrieved June 30, 2008, from Encyclopedia Britannica Online. Postal Bulletin (February 15, 1962).
<urn:uuid:5a5d9815-2812-439f-b291-59eec21785b4>
CC-MAIN-2013-20
http://arago.si.edu/index.asp?con=1&cmd=1&mode=2&tid=2034091
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934118
352
3.34375
3
Radiation Exposure from CT Scans GIST Support International asked questions about radiation exposure from CT exams to Donald P. Frush, MD of Duke University Medical Center. At Duke Dr. Frush is a Professor of Radiology, a Pediatrics Faculty member, as well as Chief of the Medical Physics Graduate Program, Division of Pediatric Radiology. Dr. Frush’s research interests are predominantly involved with pediatric body multidetector CT, including techniques, assessment of image quality, and radiation dosimetry. Dr. Frush has published very widely and has served as a guest editor and invited reviewer for numerous medical journals. He is currently the associate editor (North American) of the journal Pediatric Radiology. Here are Dr. Frush's responses to our questions. 1. What is the radiation dose from CT scans of the chest/abdomen/pelvis? How does this compare to chest X-rays and to normal daily-life annual background radiation? Radiation dose from CT scans of the chest, abdomen, or pelvis varies depending on the individual patient, and the technique used. In general, in adults, most abdomen CTs are performed at approximately 10 mSv. The dose is less for a chest CT. The doses should be the same-to-less, if size adjusted, for children. Head CT doses are generally less than about 2-4 mSv. As a rough approximation, one abdomen pelvis CT in an adult is equal to 100-250 chest x-rays. The average background radiation (just from living…) that individuals get is about 3-3.5 mSv per year. 2.What is the risk to health from quarterly CT scans of the chest/abdomen/pelvis over many years? The risk of low-level radiation, such as that used in CT, is unknown. There are established scientific data that show that doses > 100-200 mSv have a significant association with the risk of developing cancer but we do not know for sure about doses, such as from infrequent CT, which are under that amount. The risk is either zero or very small. In general, radiologists and healthcare providers should assume that any unnecessary amount of radiation should be avoided and that the benefit of the CT examination (for example the probability of detection of recurrent tumor, or the satisfaction of knowing that there is no recurrence) outweigh the risks (small, at most with CT examinations). In general, the dose of CT (or any medical radiation, such as an x-ray) is cumulative. That is, 4 examinations at 5.0 mSv each done over three years is the equivalent of 20 mSv of dose (4x5=20). This is accumulated over the lifetime. Similarly, if quarterly abdominal CT scans (at 10 mSv each) were given for 5 years, the cumulative dose would be 4x10x5 mSv = 200 mSv. 3. What is the latency period for development of radiation-induced cancers? The latency for development of solid tumors can be more than a decade. 4. Is sensitivity to radiation exposure greater in growing children and adolescents in whom tissues are still developing? When discussing a potential risk of CT scan, the radiation dose risk is higher for children than young adults. While there is no agreed upon age cut-off where the potential risk is zero, generally the younger the patient (i.e. under 30 years of age) the risk is higher due to potential accumulation of multiple CT examinations over a longer lifetime and increase radiation sensitivity of developing tissues, particularly in children. Sensitivity for children is 2-10 times that for adults; most favor the lower part of the range. 5. What strategies exist for minimizing the radiation dose in CT scans by altering the technique or settings? There are multiple strategies for minimizing radiation risks. First, when imaging is necessary, imaging evaluation considerations should be those that have the least amount of risk, including radiation. For example, if MR imaging (or sonography) will answer the question, then these should be performed. MR may not be the best evaluation, for example if lung parenchyma needs to be assessed. The decision about the type of frequency of examinations needs to come from a discussion between the individual patient and the healthcare team caring for the patient. When a CT is indicated, only as much radiation as is necessary for evaluation should be used. For example, protocols should based on patient size in the pediatric population. Only the necessary region should be scanned, and repeat scans through the area during the CT examination (multiphase examinations) should be minimized in frequency. Additional technical considerations for certain regions need to be considered, such as slightly lower dose to the chest as opposed to the abdomen during CT examinations. 6. Can MRI yield equally useful images to monitor growth or shrinkage or density change of existing tumors, and to detect new metastases in the liver and peritoneum? Are there any disadvantages of MRI? MR examination of the abdomen and pelvis is often an excellent modality for detecting solid organ tumors, as well as tumors elsewhere in the abdomen. The decision about whether this type of study needs to be performed versus a CT, again, needs to be arrived at through discussions between the healthcare team and the patient.
<urn:uuid:79baa8cb-78b2-465a-abca-69f7e26deabb>
CC-MAIN-2013-20
http://www.gistsupport.org/ask-the-professional/radiation-exposure-from-ct-scans.php
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943181
1,090
2.640625
3
Introduction to basic grammatical concepts and terminology. Specifically intended for students planning to take a foreign language or linguistics. Does not count toward the linguistics major or minor. This course introduces fundamentals of grammar, with primary emphasis on description of English. We will discuss parts of speech, the structure of phrases (modifiers and complements), the components of clauses (subject and predicate), types of clauses, types of subordination, and constructions which vary the order or relations of words, such as passive voice. The course also discusses common differences between languages with respect to basic phenomena such as word order, how questions are formed, agreement, tenses, and so forth. The primary goals of the course are: first, to bring to the studentís awareness the fact that many properties of all languages can be described in terms of a limited set of basic concepts, and second, to develop the studentís skill in analyzing sentences, using English as an example. Requirements include timely completion of homeworks, at least one midterm and a final exam. Student learning goals General method of instruction Class assignments and grading
<urn:uuid:77d3ad49-4961-4de1-9032-634589c3fe54>
CC-MAIN-2013-20
http://www.washington.edu/students/icd/S/ling/100jklaus.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.904788
224
3.46875
3
Extremely obese women may not need to gain as much weight during pregnancy as current guidelines suggest, according to a new study presented today at the Society for Maternal-Fetal Medicine annual meeting. Severely obese women who gained less than the recommended amount of weight during the second and third trimester of pregnancy suffered no ill effects, nor did their babies. In contrast, obese and non-obese women who gained less weight in the second and third trimester had undesirable outcomes, including a higher likelihood of delivering a baby that is small for gestational age – smaller than the usual weight for the number of weeks of pregnancy. "The study suggests that even the recommended amounts of weight gain might be more than is needed for the most obese women," said Eva Pressman, M.D., director of Maternal Fetal Medicine at the University of Rochester Medical Center. In 2009, the Institute of Medicine released new guidelines for how much weight a woman should gain during pregnancy, taking into account changes in the population, particularly the increase in the number of women of childbearing age who are overweight and obese. "At some point, there may be even more tailored guidelines than what exists right now for women with different levels of obesity," said Danielle Durie, M.D., M.P.H, lead study author from the Department of Obstetrics and Gynecology at the Medical Center. The study sought to determine the impact of weight gain outside recommended ranges during the second and third trimester of pregnancy on women and their babies. Women were grouped according to pre-pregnancy body mass index (BMI) as underweight, normal weight, overweight, and obese classes I, II, and III. Obese classes II and III include women considered severely and morbidly obese. Gaining less weight than recommended in the second and third trimester was associated with increased likelihood of having a baby that is small for gestational age in all BMI groups except obese class II and III. Gaining more weight than recommended in the second and third trimester was associated with increased likelihood of having a baby that is large for gestational age in all BMI groups. Newborns that are very large or very small may experience problems during delivery and afterwards. Small babies may have decreased oxygen levels, low blood sugar and difficulty maintaining a normal body temperature. Large babies often make delivery more difficult and may result in the need for a cesarean delivery, which increases the risk of infection, respiratory complications, the need for additional surgeries and results in longer recovery times for the mother. In addition to weight gain rates outside the recommended ranges, increasing BMI alone was associated with negative outcomes for mothers and newborns as well. For all BMI groups above normal weight, the likelihood of cesarean delivery, induction of labor and gestational diabetes increased. The study included 73,977 women who gave birth to a single child in the Finger Lakes Region of New York between January 2004 and December 2008. Of the study participants, 4 percent were underweight, 48 percent normal weight, 24 percent overweight and 24 percent obese (13 percent class I, 6 percent class II and 5 percent class III). Researchers from Rochester also reported that overweight and obese women undergoing labor induction may benefit from higher doses of oxytocin, a medication used to induce labor by causing contractions. They tested the effectiveness of two oxytocin protocols – one including a lower dose every 45 minutes and another using a slightly higher dose every half hour – in women based on BMI. Overweight and obese women administered the lower, less frequent dose were less likely to deliver vaginally – the preferred method of delivery – than overweight and obese women administered the higher, more frequent dose. "If you give more oxytocin to overweight and obese patients they may be more likely to delivery vaginally, which is what we want, as opposed to having a cesarean section, which can introduce more complications," according to Pressman, an author of the study. "The study is important because the effect of BMI on induction has not been well described before." The oxytocin protocols tested in the study are relatively standard and were used to induce labor in nearly 500 women who delivered at the University of Rochester Medical Center between October 2007 and September 2008. Study participants were induced for a variety of reasons, including going a week or more past the estimated due date, when there is no longer any benefit to the fetus from remaining inside the womb. In addition to Pressman and Durie, David Hackney, M.D., and Nigel Campbell, M.D., also participated in the oxytocin research. Christopher Glantz, M.D., M.P.H, and Loralei Thornburg, M.D., contributed to the research on weight gain during the second and third trimester of pregnancy. Both studies were funded by the University of Rochester Medical Center. AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system.
<urn:uuid:2b59cfad-ca0b-4080-82bd-761fceea804f>
CC-MAIN-2013-20
http://www.eurekalert.org/pub_releases/2011-02/uorm-sow021011.php
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967896
1,045
2.75
3
- Special Sections - Public Notices CAPE CANAVERAL, Fla. (AP) — The world's biggest extraterrestrial explorer, NASA's Curiosity rover, rocketed toward Mars on Saturday on a search for evidence that the red planet might once have been home to itsy-bitsy life. It will take 8½ months for Curiosity to reach Mars following a journey of 354 million miles. An unmanned Atlas V rocket hoisted the rover, officially known as Mars Science Laboratory, into a cloudy late morning sky. A Mars frenzy gripped the launch site, with more than 13,000 guests jamming the space center for NASA's first launch to Earth's next-door neighbor in four years, and the first send-off of a Martian rover in eight years. NASA astrobiologist Pan Conrad, whose carbon compound-seeking instrument is on the rover, had a shirt custom made for the occasion. Her bright blue, short-sleeve blouse was emblazoned with rockets, planets and the words, "Next stop Mars!" Conrad jumped and cheered as the rocket blasted off a few miles away. "It's amazing," she said, "and it's a huge relief to see it all going up in the same direction." The 1-ton Curiosity — as large as a car — is a mobile, nuclear-powered laboratory holding 10 science instruments that will sample Martian soil and rocks, and analyze them right on the spot. There's a drill as well as a stone-zapping laser machine. It's "really a rover on steroids," said NASA's Colleen Hartman, assistant associate administrator for science. "It's an order of magnitude more capable than anything we have ever launched to any planet in the solar system." The primary goal of the $2.5 billion mission is to see whether cold, dry, barren Mars might have been hospitable for microbial life once upon a time — or might even still be conducive to life now. No actual life detectors are on board; rather, the instruments will hunt for organic compounds. Curiosity's 7-foot arm has a jackhammer on the end to drill into the Martian red rock, and the 7-foot mast on the rover is topped with high-definition and laser cameras. No previous Martian rover has been so sophisticated or capable. With Mars the ultimate goal for astronauts, NASA also will use Curiosity to measure radiation at the red planet. The rover also has a weather station on board that will provide temperature, wind and humidity readings; a computer software app with daily weather updates is planned. The world has launched more than three dozen missions to the ever-alluring Mars, which is more like Earth than the other solar-system planets. Yet fewer than half those quests have succeeded. Just two weeks ago, a Russian spacecraft ended up stuck in orbit around Earth, rather than en route to the Martian moon Phobos. "Mars really is the Bermuda Triangle of the solar system," Hartman said. "It's the death planet, and the United States of America is the only nation in the world that has ever landed and driven robotic explorers on the surface of Mars, and now we're set to do it again." Curiosity's arrival next August will be particularly hair-raising. In a spacecraft first, the rover will be lowered onto the Martian surface via a jet pack and tether system similar to the sky cranes used to lower heavy equipment into remote areas on Earth. Curiosity is too heavy to use air bags like its much smaller predecessors, Spirit and Opportunity, did in 2004. Besides, this new way should provide for a more accurate landing. Astronauts will need to make similarly precise landings on Mars one day. Curiosity will spend a minimum of two years roaming around Gale Crater, chosen as the landing site because it's rich in minerals. Scientists said if there is any place on Mars that might have been ripe for life, it would be there. "I like to say it's extraterrestrial real estate appraisal," Conrad said with a chuckle earlier in the week. The rover — 10 feet long and 9 feet wide — should be able to go farther and work harder than any previous Mars explorer because of its power source: 10.6 pounds of radioactive plutonium. The nuclear generator was encased in several protective layers in case of a launch accident. NASA expects to put at least 12 miles on the odometer, once the rover sets down on the Martian surface. This is the third astronomical mission to be launched from Cape Canaveral by NASA since the retirement of the venerable space shuttle fleet this summer. The Juno probe is en route to Jupiter, and twin spacecraft named Grail will arrive at Earth's moon on New Year's Eve and Day. NASA hails this as the year of the solar system.
<urn:uuid:f96fe8ae-2ec0-4122-9205-d49a8d96f05d>
CC-MAIN-2013-20
http://www.lamonitor.com/content/nasa-launches-super-size-mars-rover-red-planet-see-videos?quicktabs_2=0&mini=calendar-date%2F2012-12
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941842
988
3.5
4
Depending on a building’s size, location, amount of thermal insulation, and energy costs, a reflective coating can provide energy savings by reducing a building’s cooling loads. While each building obviously is different, a typical building located in a southern climate with large enough roof area and minimal amounts of insulation will generate energy savings. in many cases, these savings can be sufficient to pay back the coating installation cost in five to seven years. Weighing the Options Elastomeric coatings seem to encompass an even wider variety of base materials than bituminous coatings. Base materials include latex, acrylic, Hypalon, neoprene, silicone, urethane, and hybrid materials. Manufacturers regularly introduce new types of coatings. Elastomeric coatings are compatible with most types of roof systems, but managers use them most on single-ply systems, spray-applied polyurethane foam, and metal roofs. Managers also can specify elastomeric coatings for use on most built-up and modified bitumen systems. Matching Coatings and Needs Selecting the appropriate coating product requires research into available products offered in the area, their advantages and disadvantages, potential energy savings, their compatibility with the roofing system to be coated. Nothing is more discouraging than watching a coating – and energy savings – flake off after a few short months because the coating was not compatible with the roof. Adhesion is paramount. Coatings that do not adhere to the roof will not perform. Reflective is the method by which the coating provides energy savings. If the reflectivity fades, so goes the energy savings. The ability to withstand anticipated events is important to coating longevity. One final thing managers need to consider in selecting a roof coating is compliance with regulations governing volatile organic compounds(VOC). Most coatings are manufactured with solvents. Depending on the building’s location and the applicable VOC-compliant coating might be required. As energy costs rise, more managers will look to coatings as one way to decrease cooling loads and reduce energy costs. Proven performance is the key to a good coating, and the only way to truly verify performance is to visit similar local projects. Coating manufacturers whose products comply with the provisions of the Energy Star program are good starting points in selecting a coating that can reduce peak cooling demand by 10-15 percent. *Curtis L. Liscum, RRC – is a Registered Roof Consultant and senior consultant with Benchmark Inc. – a nationwide roof and pavement consulting firm based in Cedar Rapids, Iowa. He has more than 20 years of experience as a roof consultant.
<urn:uuid:5febc2fe-a4c0-4c5f-9339-dba4a5ded5db>
CC-MAIN-2013-20
http://www.turtlecoat.com/roof-coatings:-strategies-for-savings
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943656
547
2.6875
3
- Digital Programs Hidden in the Walls: The Time Capsule from San Francisco's Lost Sanctuary On view:Wednesday, January 1, 2003 - Thursday, January 1, 2004 In 2003, the Judah L. Magnes Museum and the Jewish Museum San Francisco worked together to create an exhibition around a recently uncovered time capsule found in an old temple on Bush Street. The Magnes Museum exhibit highlighted the building's changing role in San Francisco. It was built in 1895 as a synagogue for Ohabai Shalome, a Reform congregation. The temple closed in 1934, and the building became home to an African-American Baptist church, a Zen Buddhist mission, and now is an assisted-living facility for Japanese-American seniors.
<urn:uuid:bb27c877-e378-4f1f-b9f6-32bd4d74801f>
CC-MAIN-2013-20
http://magnes.org/visit/exhibitions-programs/exhibitions/hidden-walls-time-capsule-san-franciscos-lost-sanctuary
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941103
147
2.796875
3
In January I wrote an essay for Cyborgology on the subject of technological autonomy and its implications for the environment. There’s no more important dynamic when it comes to understanding our relationship with machines and where they’re taking us. Technological autonomy is shorthand for the idea that, once advanced technologies pass a certain stage of development, we lose our ability to control them. I generally use the phrase “de facto technological autonomy” to underscore that what’s being talked about is a loss of practical rather than literal control. Loss of practical control occurs for a number of reasons, among them the fact that the economies of modern societies have come to depend, completely, on various technologies. Remove those technologies and the economies collapse. A striking example of this is the dilemma facing Japan as it contemplates whether to resume its dependence on nuclear energy in the wake of the post-tsunami meltdowns at the Fukushima Daiichi reactors last year. Since the meltdowns, operations at all the nation’s 54 nuclear reactors have been gradually suspended. Public concern has kept the plants offline despite increasingly strident warnings from officials there that without them the nation faces (as one publication put it) an “energy death spiral.” The threat is that without power sufficient to supply its manufacturing needs, Japan’s largest employers will be forced to abandon domestic production, initiating a process of “deindustrialization” that would cripple the economy. These concerns are exacerbated by uncertainties regarding international oil supplies and the prognosis that this coming summer may be unusually hot, prompting a spike in energy demands. The dilemma is an excruciating one. The nation’s citizens are essentially being told that they must welcome back into their midst an industry that’s made whole towns uninhabitable and that’s undermined confidence in their food supply, not to mention their officials. The alternative is widespread unemployment and poverty. In other words, while it’s literally possible to shut down the reactors permanently, practically speaking Japan may have no choice but to turn them back on. That’s de facto technological autonomy. Global warming doubles the bind. Without the reactors, Japan will make up some of its energy deficit with fossil fuels, thereby increasing its emissions of greenhouse gases. Japan’s distinction is that the tsunami has forced it to confront the issue of technological autonomy sooner than other industrialized countries. Their time (our time) will come. This post is also available on Doug Hills personal blog: The Question Concerning Technology.
<urn:uuid:585e360e-555d-4ca8-90a4-806e55ca903f>
CC-MAIN-2013-20
http://thesocietypages.org/cyborgology/2012/05/14/no-exit-technological-autonomy-in-japan/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937547
518
2.53125
3
Minimizing Exposure at Work All pesticides have some level of risk. If you work with pesticides like herbicides, insecticides, or cleaning products containing antimicrobials on a regular basis, your risk is greater because your exposure is greater. Even though some pesticides may not make you sick on a daily basis, long-term (chronic) exposure to certain pesticides may increase your risk of chronic health effects. Therefore, it is important to take steps to reduce your risk by minimizing your exposure to all the pesticides you work with. Tips for reducing pesticide risks at work (and at home): - Always follow the label. The directions on a pesticide label are there to protect you and others from becoming over-exposed. - Consider using the least toxic methods of pest control to get the job done. Integrated Pest Management options can be less costly and more effective than traditional chemical control. - Personal Protective Equipment (PPE) can be uncomfortable to wear, but it can protect you from being over-exposed. Read the label to find the appropriate PPE or contact your state lead pesticide agency for help. - Inspect your PPE to make sure it will do its job effectively. Check for signs of cracking, swelling, stiffness, sponginess, or any change of color in the rubber or plastic of your PPE. - Frequently washing the pesticide off your chemical resistant clothes, like aprons, boots and gloves, can minimize your exposure and help your equipment last longer. Cleaning this kind of PPE at work may reduce your family's exposure. - Wear your PPE when you are mixing, loading, or when you are cleaning the equipment. Consider bringing an extra set of clothes to change into at work in case you get pesticides on your clothes. - Remove your shoes before entering your house, take a shower immediately and clean work clothes properly to reduce your family's exposure. - If you want to report a workplace concern, call NPIC or contact your local regulatory agency or OSHA contacts. If you have questions about this, or any pesticide-related topic, please call NPIC at 1-800-858-7378 (7:30am-3:30pm PST), or email us at email@example.com. - Washing Pesticide Work Clothing - California Department of Pesticide Regulation - Chemical Resistant Clothing Guide - Oregon OSHA - Personal Protective Equipment: EPA Chemical Resistance Category Chart - Environmental Protection Agency (EPA) - Coveralls, Gloves and Other Skin Protection Guide - Environmental Protection Agency (EPA) - Eye Protection Guide - Environmental Protection Agency (EPA) - What You Need to Know about Protecting Yourself From Pesticides - Pennsylvania State University Cooperative Extension - Personal Protective Equipment for Working With Pesticides - University of Missouri Extension - Respiratory Protective Devices for Pesticides - Pennsylvania State University Cooperative Extension - Occupational Chemical Database - Occupational Safety and Health Administration (OSHA) - Preventing worker illness from indoor pesticide exposure - California Department of Public Health
<urn:uuid:64734f2f-e85e-4bea-a419-12f482d82f46>
CC-MAIN-2013-20
http://www.npic.orst.edu/health/minwork.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.875693
631
2.515625
3
Many people do not eat after exercise because they might not be hungry or don’t have time, but nourishing the body with nutrients post-exercise is a good habit to practice. Studies have shown that the 60 minutes following exercise is the optimal time to eat carbohydrate-rich foods and drinks. It is the “Golden Hour” when the muscles absorb the most nutrients and when glycogen, an energy reserve in your muscles, is replaced most efficiently. The carbs replenish the used-up energy that is normally stored as glycogen in muscle and in the liver. Protein is also important in post-exercise nutrition because it aids in recovery to build back the muscles that were fatigued during exercise. Most experts suggest a post exercise meal consisting of protein and carbohydrates.
<urn:uuid:764de07c-b2d7-43e8-97b7-f8d6d13fbf6f>
CC-MAIN-2013-20
http://www.wolx.com/The-Golden-Hour-/15077624?archive=1&pid=230055
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959922
160
2.796875
3
By Grace Rubenstein We think of domestic violence as something that happens among adults. But as some young survivors from Contra Costa County recently told me, abuse is also alarmingly common between teen boyfriends and girlfriends. Evidence has been growing that it starts even younger than we previously imagined. A study published in March surveyed more than 1,400 seventh graders of diverse races. More than one in three reported being psychologically abused by a boyfriend or girlfriend within the past six months. Nearly one in three reported experiencing physical dating violence within the same timeframe. Their average age? 12. The study was commissioned by the Robert Wood Johnson Foundation and Blue Shield of California. Revelations like that are slowly making the long-hidden problem of teen dating abuse more visible – especially at schools, ground zero for teen romance. Shocking events like the murder of 17-year-old Cindi Santana, stabbed by her ex-boyfriend on campus at her Los Angeles high school last September, have put the issue more on the radar. “The difference between domestic violence and teen relationship violence is that school is often the platform for teen relationship violence, so there’s an even greater urgency for schools and policymakers to address it,” Los Angeles Unified School District board member Steve Zimmer told me in a recent phone interview. Research also shows that people’s relationship habits are laid down early, and the effects of childhood abuse can last a lifetime. The federal Centers for Disease Control and Prevention report [PDF] that among adult women who suffer rape, physical violence or stalking, 22 percent experienced their first incident of abuse between ages 11 and 17. Schools are a pivotal place for prevention. Yet so far, only two California school districts – Oakland and Los Angeles – have explicit policies on dating violence, according to the California Partnership to End Domestic Violence. (Oakland’s passed in 2006. Los Angeles adopted its policy last October, but it was in the works before Cindi Santana’s murder.) Contra Costa County is mounting a countywide effort and hopes some of its school districts will be next. A bill by California Assembly Member Ricardo Lara (D – Bell Gardens) that would have required middle and high schools to spell out teen dating abuse policies in their safety plans recently stalled in a state Senate committee. At the same time, recent research [PDF] shows that the classic prevention tactic – classroom lessons – by itself is not enough. To really change teens’ behavior, you have to change their environment. The CPEDV is leading a statewide campaign to get more schools to do this. “If you look a the ecology of things, not one single intervention is going to do it. It’s got to be all these multitudes,” said Sharon Turner, regional director of the nonprofit STAND! for Families Free of Violence in Contra Costa County. “You need policy. You need conversations happening in the community about these things. You need visual reminders. You need the jargon and slogans about it. You need the social media piece.” The LAUSD policy calls for a designated coordinator at each middle and high school to arrange for education on healthy relationships, awareness efforts, and intervention when abuse does occur. These coordinators would also reach out to parents and arrange ongoing training for school staff. Zimmer, a key architect of the policy, said the training is critical. Without it, schools may mistakenly treat abuse as simply bad behavior that needs punishment, when in fact it’s a complex pattern that needs a more holistic intervention. Plus, untrained staff members may not know how to respond to reports of abuse. “Without a protocol to deal with it, you do have moments of confusion that can become very dangerous for a child,” Zimmer said. Yet even the communities most committed to preventing teen dating violence are running up against a hard wall: money. LAUSD’s policy, for the moment, exists only on paper. To fully implement it would cost about $2 million a year. There’s no spare money in the district budget, and Zimmer has struggled to get outside donations. Compared to academics, he said, “I’ve had a really hard time convincing folks that this is as important as any performance metric or evaluation tool.”
<urn:uuid:202ae83c-0d76-4656-bdb6-5fe09f26deca>
CC-MAIN-2013-20
http://blogs.kqed.org/stateofhealth/2012/06/20/county-effort-fights-teen-dating-violence/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960406
885
2.515625
3
Submitted by brad on Fri, 2009-06-12 13:49. Our world has not rid itself of atrocity and genocide. What can modern high-tech do to help? In Bosnia, we used bombs. In Rwanda, we did next to nothing. In Darfur, very little. Here’s a proposal that seems expensive at first, but is in fact vastly cheaper than the military solutions people have either tried or been afraid to try. It’s the sunlight principle. First, we would mass-produce a special video recording “phone” using the standard parts and tools of the cell phone industry. It would be small, light, and rechargeable from a car lighter plug, or possibly more slowly through a small solar cell on the back. It would cost a few hundred dollars to make, so that relief forces could airdrop tens or even hundreds of thousands of them over an area where atrocity is taking place. (If they are $400/pop, even 100,000 of them is 40 million dollars, a drop in the bucket compared to the cost of military operations.) They could also be smuggled in by relief workers on a smaller scale, or launched over borders in a pinch. Enough of them so that there are so many that anybody performing an atrocity will have to worry that there is a good chance that somebody hiding in bushes or in a house is recording it, and recording their face. This fear alone would reduce what took place. Once the devices had recorded a video, they would need to upload it. It seems likely that in these situations the domestic cell system would not be available, or would be shut down to stop video uploads. However, that might not be true, and a version that uses existing cell systems might make sense, and be cheaper because the hardware is off the shelf. It is more likely that some other independent system would be used, based on the same technology but with slightly different protocols. The anti-atrocity team would send aircraft over the area. These might be manned aircraft (presuming air superiority) or they might be very light, autonomous UAVs of the sort that already are getting cheap in price. These UAVs can be small, and not that high-powered, because they don’t need to do that much transmitting — just a beacon and a few commands and ACKs. The cameras on the ground will do the transmitting. In fact, the UAVs could quite possibly be balloons, again within the budget of aid organizations, not just nations. read more » Submitted by brad on Sat, 2009-04-18 19:37. My prior post about USB charging hubs in hotel rooms brought up the issue of security, as was the case for my hope for a world with bluetooth keyboards scattered around. Is it possible to design our computers to let them connect to untrusted devices? Clearly to a degree, in that an ethernet connection is generally always untrusted. But USB was designed to be fully trusted, and that limits it. Perhaps in the future, an OS can be designed to understand the difference between trusted and untrusted devices connected (wired or wirelessly) to a computer or phone. This might involve a different physical interface, or using the same physical interface, but a secure protocol by which devices can be identified (and then recognized when plugged in again) and tagged once as trusted the first time they are plugged in. For example, an unknown keyboard is a risky thing to plug in. It could watch you type and remember passwords, or it could simply send fake keys to your computer to get it to install trojan software completely taking it over. But we might allow an untrusted keyboard to type plain text into our word processors or E-mail applications. However, we would have to switch to the trusted keyboard (which might just be a touch-screen keyboard on a phone or tablet) for anything dangerous, including of course entry of passwords, URLs and commands that go beyond text entry. Would this be tolerable, constantly switching like this, or would we just get used to it? We would want to mount the inferior keyboard very close to our comfy but untrusted one. A mouse has the same issues. We might allow an untrusted mouse to move the pointer within a text entry window and to go to a set of menus that can’t do anything harmful on the machine, but would it drive us crazy to have to move to a different pointer to move out of the application? Alas, an untrusted mouse can (particularly if it waits until you are not looking) run applications, even bring up the on-screen keyboard most OSs have for the disabled, and then do anything with your computer. It’s easier to trust output devices, like a printer. In fact, the main danger with plugging in an unknown USB printer is that a really nasty one might pretend to be a keyboard or CD-Rom to infect you. A peripheral bus that allows a device to only be an output device would be safer. Of course an untrusted printer could still record what you print. An untrusted screen is a challenge. While mostly safe, one can imagine attacks. An untrusted screen might somehow get you to go to a special web-site. There, it might display something else, perhaps logins for a bank or other site so that it might capture the keys. Attacks here are difficult but not impossible, if I can control what you see. It might be important to have the trusted screen nearby somehow helping you to be sure the untrusted screen is being good. This is a much more involved attack than the simple attacks one can do by pretending to be a keyboard. An untrusted disk (including a USB thumb drive) is actually today’s biggest risk. People pass around thumb drives all the time, and they can pretend to be auto-run CD-roms. In addition, we often copy files from them, and double click on files on them, which is risky. The OS should never allow code to auto-run from an untrusted disk, and should warn if files are double-clicked from them. Of course, even then you are not safe from traps inside the files themselves, even if the disk is just being a disk. Many companies try to establish very tight firewalls but it’s all for naught if they allow people to plug external drives and thumbsticks into the computers. Certain types of files (such as photos) are going to be safer than others (like executables and word processor files with macros or scripts.) Digital cameras, which often look like drives, are a must, and can probably be trusted to hand over jpegs and other image and video files. A network connection is one of the things you can safely plug in. After all, a network connection should always be viewed as hostile, even one behind a firewall. There is a risk in declaring a device trusted, for example, such as your home keyboard. It might be compromised later, and there is not much you can do about that. A common trick today is to install a key-logger in somebody’s keyboard to snoop on them. This is done not just by police but by suspicious spouses and corporate spies. Short of tamper-proof hardware and encryption, this is a difficult problem. For now, that’s too much cost to add to consumer devices. Still, it sure would be nice to be able to go to a hotel and use their keyboard, mouse and monitor. It might be worth putting up with having to constantly switch back to get full sized input devices on computers that are trying to get smaller and smaller. But it would also require rewriting of a lot of software, since no program could be allowed to take input from an untrusted device unless it has been modified to understand such a protocol. For example, your e-mail program would need to be modified to declare that a text input box allows untrusted input. This gets harder in web browsing — each web page would need to have to declare, in its input boxes, whether untrusted input was allowed. As a starter, however, the computer could come with a simple “clipboard editor” which brings up a box in which one can type and edit with untrusted input devices. Then, one could copy the edited text to the OS clipboard and, using the trusted mouse or keyboard, paste it into any application of choice. You could always get back to the special editing windows using the untrusted keyboard and mouse, you would have to use the trusted ones to leave that window. Cumbersome, but not as cumbersome as typing a long e-mail on an iPhone screen. Submitted by brad on Thu, 2009-03-05 00:35. I’m looking at you Ubuntu. For some time now, the standard form for distributing a free OS (ie. Linux, *BSD) has been as a CD-ROM or DVD ISO file. You burn it to a CD, and you can boot and install from that, and also use the disk as a live CD. There are a variety of pages with instructions on how to convert such an ISO into a bootable flash drive, and scripts and programs for linux and even for windows — for those installing linux on a windows box. And these are great and I used one to make a bootable Ubuntu stick on my last install. And wow! It’s such a much nicer, faster experience compared to using CD that it’s silly to use CD on any system that can boot from a USB drive, and that’s most modern systems. With a zero seek time, it is much nicer. So I now advocate going the other way. Give me a flash image I can dd to my flash drive, and a tool to turn that into an ISO if I need an ISO. This has a number of useful advantages: - I always want to try the live CD before installing, to make sure the hardware works in the new release. In fact, I even do that before upgrading most of the time. - Of course, you don’t have old obsolete CDs lying around. - Jumping to 1 gigabyte allows putting more on the distribution, including some important things that are missing these days, such as drivers and mdadm (the RAID control program.) - Because flash is a dynamic medium, the install can be set up so that the user can, after copying the base distro, add files to the flash drive, such as important drivers — whatever they choose. An automatic script could even examine a machine and pull down new stuff that’s needed. - You get a much faster and easier to use “rescue stick.” - It’s easier to carry around. - No need for an “alternate install” and perhaps easier as well to have the upgrader use the USB stick as a cache of packages during upgrades. - At this point these things are really cheap. People give them away. You could sell them. This technique would also work for general external USB drives, or even plain old internal hard drives temporarily connected to a new machine being built if boot from USB is not practical. Great and really fast for eSata. - Using filesystems designed not to wear out flash, the live stick can have a writable partition for /tmp, installed packages and modifications (with some security risk if you run untrusted code.) Submitted by brad on Sat, 2009-02-14 19:34. Product recalls have been around for a while. You get a notice in the mail. You either go into a dealer at some point, any point, for service, or you swap the product via the mail. Nicer recalls mail you a new product first and then you send in the old one, or sign a form saying you destroyed it. All well and good. Some recalls are done as “hidden warranties.” They are never announced, but if you go into the dealer with a problem they just fix it for free, long after the regular warranty, or fix it while working on something else. These usually are for items that don’t involve safety or high liability. Today I had my first run-in with a recall of a connected electronic product. I purchased an “EyeFi” card for my sweetie for valentines day. This is an SD memory card with an wifi transmitter in it. You take pictures, and it stores them until it encounters a wifi network it knows. It then uploads the photos to your computer or to photo sharing sites. All sounds very nice. When she put in the card and tried to initialize it, up popped a screen. “This card has a defect. Please give us your address and we’ll mail you a new one, and you can mail back the old one, and we’ll give you a credit in our store for your trouble.” All fine, but the product refused to let her register and use the product. We can’t even use the product for a few days to try it out (knowing it may lose photos.) What if I wanted to try it out to see if I was going to return it to the store. No luck. I could return it to the store as-is, but that’s work and may just get another one on the recall list. This shows us the new dimension of the electronic recall. The product was remotely disabled to avoid liability for the company. We had no option to say, “Let us use the card until the new one arrives, we agree that it might fail or lose pictures.” For people who already had the card, I don’t know if it shut them down (possibly leaving them with no card) or let them continue with it. You have to agree on the form that you will not use the card any more. This can really put a damper on a gift, when it refuses to even let you do a test the day you get it. With electronic recall, all instances of a product can be shut down. This is similar to problems that people have had with automatic “upgrades” that actually remove features (like adding more DRM) or which fix you jailbreaking your iPhone. You don’t own the product any more. Companies are very worried about liability. They will “do the safe thing” which is shut their product down rather than let you take a risk. With other recalls, things happened on your schedule. You were even able to just decide not to do the recall. The company showed it had tried its best to convince you to do it, and could feel satisfied for having tried. This is one of the risks I list in my essays on robocars. If a software flaw is found in a robocar (or any other product with physical risk) there will be pressure to “recall” the software and shut down people’s cars. Perhaps in extreme cases while they are driving on the street! The liability of being able to shut down the cars and not doing so once you are aware of a risk could result in huge punitive damages under the current legal system. So you play it safe. But if people find their car shutting down because of some very slight risk, they will start wondering if they even want a car that can do that. Or even a memory card. Only with public pressure will we get the right to say, “I will take my own responsibility. You’ve informed me, I will decide when to take the product offline to get it fixed.” Submitted by brad on Mon, 2008-09-29 22:40. Most of us have had to stand in a long will-call line to pick up tickets. We probably even paid a ticket “service fee” for the privilege. Some places are helping by having online printable tickets with a bar code. However, that requires that they have networked bar code readers at the gate which can detect things like duplicate bar codes, and people seem to rather have giant lines and many staff rather than get such machines. Can we do it better? Well, for starters, it would be nice if tickets could be sent not as a printable bar code, but as a message to my cell phone. Perhaps a text message with coded string, which I could then display to a camera which does OCR of it. Same as a bar code, but I can actually get it while I am on the road and don’t have a printer. And I’m less likely to forget it. Or let’s go a bit further and have a downloadable ticket application on the phone. The ticket application would use bluetooth and a deliberately short range reader. I would go up to the reader, and push a button on the cell phone, and it would talk over bluetooth with the ticket scanner and authenticate the use of my ticket. The scanner would then show a symbol or colour and my phone would show that symbol/colour to confirm to the gate staff that it was my phone that synced. (Otherwise it might have been the guy in line behind me.) The scanner would be just an ordinary laptop with bluetooth. You might be able to get away with just one (saving the need for networking) because it would be very fast. People would just walk by holding up their phones, and the gatekeeper would look at the screen of the laptop (hidden) and the screen of the phone, and as long as they matched wave through the number of people it shows on the laptop screen. Alternately you could put the bluetooth antenna in a little faraday box to be sure it doesn’t talk to any other phone but the one in the box. Put phone in box, light goes on, take phone out and proceed. One reason many will-calls are slow is they ask you to show ID, often your photo-ID or the credit card used to purchase the item. But here’s an interesting idea. When I purchase the ticket online, let me offer an image file with a photo. It could be my photo, or it could be the photo of the person I am buying the tickets for. It could be 3 photos if any one of those 3 people can pick up the ticket. You do not need to provide your real name, just the photo. The will call system would then inkjet print the photos on the outside of the envelope containing your tickets. You do need some form of name or code, so the agent can find the envelope, or type the name in the computer to see the records. When the agent gets the envelope, identification will be easy. Look at the photo on the envelope, and see if it’s the person at the ticket window. If so, hand it over, and you’re done! No need to get out cards or hand them back and forth. A great company to implement this would be paypal. I could pay with paypal, not revealing my name (just an E-mail address) and paypal could have a photo stored, and forward it on to the ticket seller if I check the box to do this. The ticket seller never knows my name, just my picture. You may think it’s scary for people to get your picture, but in fact it’s scarier to give them your name. They can collect and share data with you under your name. Your picture is not very useful for this, at least not yet, and if you like you can use one of many different pictures each time — you can’t keep using different names if you need to show ID. This could still be done with credit cards. Many credit cards offer a “virtual credit card number” system which will generate one-time card numbers for online transactions. They could set these up so you don’t have to offer a real name or address, just the photo. When picking up the item, all you need is your face. This doesn’t work if it’s an over-21 venue, alas. They still want photo ID, but they only need to look at it, they don’t have to record the name. It would be more interesting if one could design a system so that people can find their own ticket envelopes. The guard would let you into the room with the ticket envelopes, and let you find yours, and then you can leave by showing your face is on the envelope. The problem is, what if you also palmed somebody else’s envelope and then claimed yours, or said you couldn’t find yours? That needs a pretty watchful guard which doesn’t really save on staff as we’re hoping. It might be possible to have the tickets in a series of closed boxes. You know your box number (it was given to you, or you selected it in advance) so you get your box and bring it to the gate person, who opens it and pulls out your ticket for you, confirming your face. Then the box is closed and returned. Make opening the boxes very noisy. I also thought that for Burning Man, which apparently had a will-call problem this year, you could just require all people fetching their ticket be naked. For those not willing, they could do regular will-call where the ticket agent finds the envelope. :-) I’ve noted before that, absent the need of the TSA to know all our names, this is how boarding passes should work. You buy a ticket, provide a photo of the person who is to fly, and the gate agent just looks to see if the face on the screen is the person flying, no need to get out ID, or tell the airline your name. Submitted by brad on Tue, 2008-05-27 20:49. Hard disks fail. If you prepared properly, you have a backup, or you swap out disks when they first start reporting problems. If you prepare really well you have offsite backup (which is getting easier and easier to do over the internet.) One way to protect yourself from disk failures is RAID, especially RAID-5. With RAID, several disks act together as one. The simplest protecting RAID, RAID-1, just has 2 disks which work in parallel, known as mirroring. Everything you write is copied to both. If one fails, you still have the other, with all your data. It’s good, but twice as expensive. RAID-5 is cleverer. It uses 3 or more disks, and uses error correction techniques so that you can store, for example, 2 disks worth of data on 3 disks. So it’s only 50% more expensive. RAID-5 can be done with many more disks — for example with 5 disks you get 4 disks worth of data, and it’s only 25% more expensive. However, having 5 disks is beyond most systems and has its own secret risk — if 2 of the 5 disks fail at once — and this does happen — you lose all 4 disks worth of data, not just 2 disks worth. (RAID-6 for really large arrays of disks, survives 2 failures but not 3.) Now most people who put in RAID do it for more than data protection. After all, good sysadmins are doing regular backups. They do it because with RAID, the computer doesn’t even stop when a disk fails. You connect up a new disk live to the computer (which you can do with some systems) and it is recreated from the working disks, and you never miss a beat. This is pretty important with a major server. But RAID has value to those who are not in the 99.99% uptime community. Those who are not good at doing manual backups, but who want to be protected from the inevitable disk failures. Today it is hard to set up, or expensive, or both. There are some external boxes like the “readynas” that make it reasonably easy for external disks, but they don’t have the bandwidth to be your full time disks. RAID-5 on old IDE systems was hard, they usually could truly talk to only 2 disks at a time. The new SATA bus is much better, as many motherboards have 4 connectors, though soon one will be required by blu-ray drives. read more » Submitted by brad on Thu, 2008-05-15 13:56. Recently we at the EFF have been trying to fight new rulings about the power of U.S. customs. Right now, it’s been ruled they can search your laptop, taking a complete copy of your drive, even if they don’t have the normally required reasons to suspect you of a crime. The simple fact that you’re crossing the border gives them extraordinary power. We would like to see that changed, but until then what can be done? You can use various software to encrypt your hard drive — there are free packages like truecrypt, and many laptops come with this as an option — but most people find having to enter a password every time you boot to be a pain. And customs can threaten to detain you until you give them the password. There are some tricks you can pull, like having a special inner-drive with a second password that they don’t even know to ask about. You can put your most private data there. But again, people don’t use systems with complex UIs unless they feel really motivated. What we need is a system that is effectively transparent most of the time. However, you could take special actions when going through customs or otherwise having your laptop be out of your control. read more » Submitted by brad on Sat, 2008-05-10 18:46. It seems that half the programs I try and install under Windows want to have a “daemon” process with them, which is to say a portion of the program that is always running and which gets a little task-tray icon from which it can be controlled. Usually they want to also be run at boot time. In Windows parlance this is called a service. There are too many of them, and they don’t all need to be there. Microsoft noticed this, and started having Windows detect if task tray icons were too static. If they are it hides them. This doesn’t work very well — they even hide their own icon for removing hardware, which of course is going to be static most of the time. And of course some programs now play games to make their icons appear non-static so they will stay visible. A pointless arms race. All these daemons eat up memory, and some of them eat up CPU. They tend to slow the boot of the machine too. And usually not to do very much — mostly to wait for some event, like being clicked, or hardware being plugged in, or an OS/internet event. And the worst of them on their menu don’t even have a way to shut them down. I would like to see the creation of a master deaemon/service program. This program would be running all the time, and it would provide a basic scripting language to perform daemon functions. Programs that just need a simple daemon, with a menu or waiting for events, would be strongly encouraged to prepare it in this scripting language, and install it through the master daemon. That way they take up a few kilobytes, not megabytes, and don’t take long to load. The scripting language should be able to react at least in a basic way to all the OS hooks, events and callbacks. It need not do much with them — mainly it would run a real module of the program that would have had a daemon. If the events are fast and furious and don’t pause, this program could stay resident and become a real daemon. But having a stand alone program would be discouraged, certainly for boring purposes like checking for updates, overseeing other programs and waiting for events. The master program itself could get regular updates, as features are added to it as needed by would-be daemons. Unix started with this philosophy. Most internet servers are started up by inetd, which listens on all the server ports you tell it, and fires up a server if somebody tries to connect. Only programs with very frequent requests, like E-mail and web serving, are supposed to keep something constantly running. The problem is, every software package is convinced it’s the most important program on the system, and that the user mostly runs nothing but that program. So they act like they own the place. We need a way to only let them do that if they truly need it. Submitted by brad on Fri, 2008-05-09 00:14. I’m scanning my documents on an ADF document scanner now, and it’s largely pretty impressive, but I’m surprised at some things the system won’t do. Double page feeding is the bane of document scanning. To prevent it, many scanners offer methods of double feed detection, including ultrasonic detection of double thickness and detection when one page is suddenly longer than all the others (because it’s really two.) There are a number of other tricks they could do, I think. I think a paper feeder that used air suction or gecko-foot van-der-waals force pluckers on both sides of a page to try to pull the sides in two different directions could help not just detect, but eliminate such feeds. However, the most the double feed detectors do is signal an exception to stop the scan. Which means work re-feeding and a need to stand by. However, many documents have page numbers. And we’re going to OCR them and the OCR engine is pretty good at detecting page numbers (mostly out of desire to remove them.) However, it seems to me a good approach would be to look for gaps in the page numbers, especially combined with the other results of a double feed. Then don’t stop the scan, just keep going, and report to the operator which pages need to be scanned again. Those would be scanned, their number extracted, and they would be inserted in the right place in the final document. Of course, it’s not perfect. Sometimes page numbers are not put on blank pages, and some documents number only within chapters. So you might not catch everything, but you could catch a lot of stuff. Operators could quickly discern the page numbering scheme (though I think the OCR could do this too) to guide the effort. I’m seeking a maximum convenience workflow. I think to do that the best plan is to have several scanners going, and the OCR after the fact in the background. That way there’s always something for the operator to do — fixing bad feeds, loading new documents, naming them — for maximum throughput. Though I also would hope the OCR software could do better at naming the documents for you, or at least suggesting names. Perhaps it can, the manual for Omnipage is pretty sparse. While some higher end scanners do have the scanner figure out the size of the page (at least the length) I am not sure why it isn’t a trivial feature for all ADF scanners to do this. My $100 Strobe sheetfed scanner does it. That my $6,000 (retail) FI-5650 needs extra software seems odd to me. Submitted by brad on Tue, 2008-05-06 16:25. PCs can go into standby mode (just enough power to preserve the RAM and do wake-on-lan) and into hibernate mode (where they write out the RAM to disk, shut down entirely and restore from disk later) as well as fully shut down. Standby mode comes back up very fast, and should be routinely used on desktops. In fact, non-server PCs should consider doing it as a sort of screen saver since the restart can be so quick. It’s also popular on laptops but does drain the battery in a few days keeping the RAM alive. Many laptops will wake up briefly to hibernate if left in standby so long that the battery gets low, which is good. How about this option: Write the ram contents out to disk, but also keep the ram alive. When the user wants to restart, they can restart instantly, unless something happened to the ram. If there was a power flicker or other trouble, notice the ram is bad and restart from disk. Usually you don’t care too much about the extra time needed to write out to disk when suspending, other than for psychological reasons where you want to be really sure the computer is off before leaving it. It’s when you come back to the computer that you want instant-on. In fact, since RAM doesn’t actually fail all that quickly, you might even find you can restore from RAM after a brief power flicker. In that case, you would want to store a checksum for all blocks of RAM, and restore any from disk that don’t match the checksum. To go further, one could also hibernate to newer generations of fast flash memory. Flash memory is getting quite cheap, and while older generations aren’t that quick, they seek instantaneously. This allows you to reboot a machine with its memory “paged out” to flash, and swap in pages at random as they are needed. This would allow a special sort of hybrid restore: - Predict in advance which pages are highly used, and which are enough to get the most basic functions of the OS up. Write them out to a special contiguous block of hibernation disk. Then write out the rest, to disk and flash. - When turning on again, read this block of contiguous disk and go “live.” Any pages needed can then be paged in from the flash memory as needed, or if the flash wasn’t big enough, unlikely pages can come from disk. - In the background, restore the rest of the pages from the faster disk. Eventually you are fully back to ram. This would allow users to get a fairly fast restore, even from full-off hibernation. If they click on a rarely used program that was in ram, it might be slow as stuff pages in, but still not as bad as waiting for the whole restore. Submitted by brad on Thu, 2008-02-21 12:44. A big trend in systems operation these days is the use of virtual machines — software systems which emulate a standalone machine so you can run a guest operating system as a program on top of another (host) OS. This has become particularly popular for companies selling web hosting. They take one fast machine and run many VMs on it, so that each customer has the illusion of a standalone machine, on which they can do anything. It’s also used for security testing and honeypots. The virtual hosting is great. Typical web activity is “bursty.” You would like to run at a low level most of the time, but occasionally burst to higher capacity. A good VM environment will do that well. A dedicated machine has you pay for full capacity all the time when you only need it rarely. Cloud computing goes beyond this. However, the main limit to a virtual machine’s capacity is memory. Virtual host vendors price their machines mostly on how much RAM they get. And a virtual host with twice the RAM often costs twice as much. This is all based on the machine’s physical ram. A typical vendor might take a machine with 4gb, keep 256mb for the host and then sell 15 virtual machines with 256mb of ram. They will also let you “burst” your ram, either into spare capacity or into what the other customers are not using at the time, but if you do this for too long they will just randomly kill processes on your machine, so you don’t want to depend on this. The problem is when they give you 256MB of ram, that’s what you get. A dedicated linux server with 256mb of ram will actually run fairly well, because it uses paging to disk. The server loads many programs, but a lot of the memory used for these programs (particularly the code) is used rarely, if ever, and swaps out to disk. So your 256mb holds the most important pages of ram. If you have more than 256mb of important, regularly used ram, you’ll thrash (but not die) and know you need to buy more. The virtual machines, however, don’t give you swap space. Everything stays in ram. And the host doesn’t swap it either, because that would not be fair. If one VM were regularly swapping to disk, this would slow the whole system down for everybody. One could build a fair allocation for that but I have not heard of it. In addition, another big memory saving is lost — shared memory. In a typical system, when two processes use the same shared library or same program, this is loaded into memory only once. It’s read-only so you don’t need to have two copies. But on a big virtual machine, we have 15 copies of all the standard stuff — 15 kernels, 15 MYSQL servers, 15 web servers, 15 of just about everything. It’s very wasteful. So I wonder if it might be possible to do one of the following: - Design the VM so that all binaries and shared libraries can be mounted from a special read-only filesystem which is actually on the host. This would be an overlay filesystem so that individual virtual machines could change it if need be. The guest kernel, however, would be able to load pages from these files, and they would be shared with any other virtual machine loading the same file. - Write a daemon that regularly uses spare CPU to scan the pages of each virtual machine, hashing them. When two pages turn out to be identical, release one and have both VMs use the common copy. Mark it so that if one writes to it, a duplicate is created again. When new programs start it would take extra RAM, but within a few minutes the memory would be shared. These techniques require either a very clever virtualizer or modified guests, but their savings are so worthwhile that everybody would want to do it this way on any highly loaded virtual machine. Of course, that goes against the concept of “run anything you like” and makes it “run what you like, but certain standard systems are much cheaper.” This, and allowing some form of fair swapping, could cause a serious increase in the performance and cost of VMs. Submitted by brad on Tue, 2008-02-19 21:11. If you have read my articles on power you know I yearn for the days when we get smart power so we have have universal supplies that power everything. This hit home when we got a new Thinkpad Z61 model, which uses a new power adapter which provides 20 volts at 4.5 amps and uses a new, quite rare power tip which is 8mm in diameter. For almost a decade, thinkpads used 16.5 volts and used a fairly standard 5.5mm plug. It go so that some companies standardized on Thinkpads and put cheap 16 volt TP power supplies in all the conference rooms, allowing employees to just bring their laptops in with no hassle. Lenovo pissed off their customers with this move. I have perhaps 5 older power supplies, including one each at two desks, one that stays in the laptop bag for travel, one downstairs and one running an older ThinkPad. They are no good to me on the new computer. Lenovo says they knew this would annoy people, and did it because they needed more power in their laptops, but could not increase the current in the older plug. I’m not quite sure why they need more power — the newer processors are actually lower wattage — but they did. Here’s something they could have done to make it better. read more » Submitted by brad on Sat, 2008-01-12 16:33. I’ve written before about both the desire for universal dc power and more simply universal laptop power at meeting room desks. Today I want to report we’re getting a lot closer. A new generation of cheap “buck and boost” ICs which can handle more serious wattages with good efficiency has come to the market. This means cheap DC to DC conversion, both increasing and decreasing voltages. More and more equipment is now able to take a serious range of input voltages, and also to generate them. Being able to use any voltage is important for battery powered devices, since batteries start out with a high voltage (higher than the one they are rated for) and drop over their time to around 2/3s of that before they are viewed as depleted. (With some batteries, heavy depletion can really hurt their life. Some are more able to handle it.) With a simple buck converter chip, at a cost of about 10-15% of the energy, you get a constant voltage out to matter what the battery is putting out. This means more reliable power and also the ability to use the full capacity of the battery, if you need it and it won’t cause too much damage. These same chips are in universal laptop supplies. Most of these supplies use special magic tips which fit the device they are powering and also tell the supply what voltage and current it needs. read more » Submitted by brad on Tue, 2007-11-13 13:20. Ok, I haven't had a new laptop in a while so perhaps this already happens, but I'm now carrying more devices that can charge off the USB power, including my cell phone. It's only 2.5 watts, but it's good enough for many purposes. However, my laptops, and desktops, do not provide USB power when in standby or off. So how about a physical or soft switch to enable that? Or even a smart mode in the US that lets you list what devices you want to keep powered and which ones you don't? (This would probably keep all devices powered if any one such device is connected, unless you had individual power control for each plug.) This would only be when on AC power of course, not on battery unless explicitly asked for as an emergency need. To get really smart a protocol could be developed where the computer can ask the USB device if it needs power. A fully charged device that plans to sleep would say no. A device needing charge could say yes. Of course, you only want to do this if the power supply can efficiently generate 5 volts. Some PC power supplies are not efficient at low loads and so may not be a good choice for this, and smaller power supplies should be used. Submitted by brad on Tue, 2007-07-10 00:42. For much of history, we’ve used removable media for backup. We’ve used tapes of various types, floppy disks, disk cartridges, and burnable optical disks. We take the removable media and keep a copy offsite if we’re good, but otherwise they sit for a few decades until they can’t be read, either because they degraded or we can’t find a reader for the medium any more. But I now declare this era over. Disk drives are so cheap — 25 cents/gb and falling, that it no longer makes sense to do backups to anything but hard disks. We may use external USB drives that are removable, but at this point our backups are not offline, they are online. Thanks to the internet, I even do offsite backup to live storage. I sync up over the internet at night, and if I get too many changes (like after an OS install, or a new crop of photos) I write the changes to a removable hard disk and carry it over to the offsite hard disk. Of course, these hard drives will fail, perhaps even faster than CD-roms or floppies. But the key factor is that the storage is online rather than offline, and each new disk is 2 to 3 times larger than the one it replaced. What this means is that as we change out our disks, we just copy our old online archives to our new online disk. By constantly moving the data to newer and newer media — and storing it redundantly with online, offsite backup, the data are protected from the death that removable media eventually suffer. So long as disks keep getting bigger and cheaper, we won’t lose anything, except by beng lazy. And soon, our systems will get more automated at this, so it’s hard to set up a computer that isn’t backed up online and remotely. We may still lose things because we lose encryption keys, but it won’t be for media. Thus, oddly, the period of the latter part of the 20th century will be a sort of “dark ages” to future data archaeologists. Those disks will be lost. The media may be around, but you will have to do a lot of work to recover them — manual work. However, data from the early 21st onward will be there unless it was actively deleted or encrypted. Of course this has good and bad consequences. Good for historians. Perhaps not so good for privacy. Submitted by brad on Tue, 2007-07-03 15:15. Hotels are now commonly sporting flat widescreen TVs, usually LCD HDTVs at the 720p resolution, which is 1280 x 720 or similar. Some of these TVs have VGA ports or HDMI (DVI) ports, or they have HDTV analog component video (which is found on some laptops but not too many.) While 720p resolution is not as good as the screens on many laptops, it makes a world of difference on a PDA. As our phone/PDA devices become more like the iPhone, it would be very interesting to see hotels guarantee that their room offers the combination of: - A bluetooth keyboard (with USB and mini-USB as a backup) - A similar optical mouse - A means to get video into the HDTV - Of course, wireless internet - Our dreamed of universal DC power jack (or possibly inductive charging.) Tiny devices like the iPhone won’t sport VGA or even component video out 7 pin connectors, though they might do HDMI. It’s also not out of the question to go a step further and do a remote screen protocol like VNC over the wireless ethernet or bluetooth. This would engender a world where you carry a tiny device like the iPhone, which is all touchscreen for when you are using it in the mobile environment. However, when you sit down in your hotel room (or a few other places) you could use it like a full computer with a full screen and keyboard. (There are also quite compact real-key bluetooth keyboards and mice which travelers could also bring. Indeed, since the iPhone depends on a multitouch interface, an ordinary mouse might not be enough for it, but you could always use its screen for such pointing, effectively using the device as the touchpad.) Such stations need not simply be in hotels. Smaller displays (which are now quite cheap) could also be present at workstations on conference tables or meeting rooms, or even for rent in public. Of course rental PCs in public are very common at internet cafes and airport kiosks, but using our own device is more tuned to our needs and more secure (though using a rented keyboard presents security risks.) One could even imagine stations like these randomly scattered around cities behind walls. Many retailers today are putting HDTV flat panels in their windows instead of signs, and this will become a more popular trend. Imagine being able to borrow (for free or for a rental fee) such screens for a short time to do a serious round of web surfing on your portable device with high resolution, and local wifi bandwidth. Such a screen could not provide you with a keyboard or mouse easily, but the surfing experience would be much better than the typical mobile device surfing experience, even the iPhone model of seeing a blurry, full-size web page and using multitouch to zoom in on the relevant parts. Using a protocol like vnc could provide a good surfing experience for pedestrians. Cars are also more commonly becoming equipped with screens, and they are another place we like to do mobile surfing. While the car’s computer should let you surf directly, there is merit in being able to use that screen as a temporary large screen for one’s mobile device. Until we either get really good VR glasses or bright tiny projectors, screen size is going to be an issue in mobile devices. A world full of larger screens that can be grabbed for a few minutes use may be a good answer. Submitted by brad on Fri, 2007-06-08 14:43. For many of us, E-mail has become our most fundamental tool. It is not just the way we communicate with friends and colleagues, it is the way that a large chunk of the tasks on our “to do” lists and calendars arrive. Of course, many E-mail programs like Outlook come integrated with a calendar program and a to-do list, but the integration is marginal at best. (Integration with the contact manager/address book is usually the top priority.) If you’re like me you have a nasty habit. You leave messages in your inbox that you need to deal with if you can’t resolve them with a quick reply when you read them. And then those messages often drift down in the box, off the first screen. As a result, they are dealt with much later or not at all. With luck the person mails you again to remind you of the pending task. There are many time management systems and philosophies out there, of course. A common theme is to manage your to-do list and calendar well, and to understand what you will do and not do, and when you will do it if not right away. I think it’s time to integrate our time management concepts with our E-mail. To realize that a large number of emails or threads are also a task, and should be bound together with the time manager’s concept of a task. For example, one way to “file” an E-mail would be to the calendar or a day oriented to-do list. You might take an E-mail and say, “I need 20 minutes to do this by Friday” or “I’ll do this after my meeting with the boss tomorrow.” The task would be tied to the E-mail. Most often, the tasks would not be tied to a specific time the way calendar entries are, but would just be given a rough block of time within a rough window of hours or days. It would be useful to add these “when to do it” attributes to E-mails, because now delegating a task to somebody else can be as simple as forwarding the E-mail-message-as-task to them. In fact, because, as I have noted, I like calendars with free-form input (ie. saying “Lunch with Peter 1pm tomorrow” and having the calender understand exactly what to do with it) it makes sense to consider the E-mail window as a primary means of input to the calendar. For example, one might add calendar entries by emailing them to a special address that is processed by the calendar. (That’s a useful idea for any calendar, even one not tied at all to the E-mail program.) One should also be able to assign tasks to places (a concept from the “Getting Things Done” book I have had recommended to me.) In this case, items that will be done when one is shopping, or going out to a specific meeting, could be synced or sent appropriately to one’s mobile device, but all with the E-mail metaphor. Because there are different philosophies of time management, all with their fans, one monolithic e-mail/time/calendar/todo program may not be the perfect answer. A plug-in architecture that lets time managers integrate nicely with E-mail could be a better way to do it. Some of these concepts apply to the shared calendar concepts I wrote about last month. Submitted by brad on Mon, 2007-06-04 11:01. Here’s a new approach to linux adoption. Create a linux distro which converts a Windows machine to linux, marketed as a way to solve many of your virus/malware/phishing woes. Yes, for a long time linux distros have installed themselves on top of a windows machine dual-boot. And there are distros that can run in a VM on windows, or look windows like, but here’s a set of steps to go much further, thanks to how cheap disk space is today. read more » - Yes, the distro keeps the Windows install around dual boot, but it also builds a virtual machine so it can be run under linux. Of course hardware drivers differ when running under a VM, so this is non-trivial, and Windows XP and later will claim they are stolen if they wake up in different hardware. You may have to call Microsoft, which they may eventually try to stop. - Look through the Windows copy and see what apps are installed. For apps that migrate well to linux, either because they have equivalents or run at silver or gold level under Wine, move them into linux. Extract their settings and files and move those into the linux environment. Of course this is easiest to do when you have something like Firefox as the browser, but IE settings and bookmarks can also be imported. - Examine the windows registry for other OS settings, desktop behaviours etc. Import them into a windows-like linux desktop. Ideally when it boots up, the user will see it looking and feeling a lot like their windows environment. - Using remote window protocols, it’s possible to run windows programs in a virtual machine with their window on the X desktop. Try this for some apps, though understand some things like inter-program communication may not do as well. - Next, offer programs directly in the virtual machine as another desktop. Put the windows programs on the windows-like “start” menu, but have them fire up the program in the virtual machine, or possibly even fire up the VM as needed. Again, memory is getting very cheap. - Strongly encourage the Windows VM be operated in a checkpointing manner, where it is regularly reverted to a base state, if this is possible. - The linux box, sitting outside the windows VM, can examine its TCP traffic to check for possible infections or strange traffic to unusual sites. A database like the siteadvisor one can help spot these unusual things, and encourage restoring the windows box back to a safe checkpoint. Submitted by brad on Sun, 2007-04-15 16:45. The use of virtual machines is getting very popular in the web hosting world. Particularly exciting to many people is Amazon.com’s EC2 — which means Elastic Compute Cloud. It’s a large pool of virtual machines that you can rent by the hour. I know people planning on basing whole companies on this system, because they can build an application that scales up by adding more virtual machines on demand. It’s decently priced and a lot cheaper than building it yourself in most cases. In many ways, something like EC2 would be great for all those web sites which deal with the “slashdot” effect. I hope to see web hosters, servers and web applications just naturally allow scaling through the addition of extra machines. This typically means either some round-robin-DNS, or a master server that does redirects to a pool of servers, or a master cache that processes the data from a pool of servers, or a few other methods. Dealing with persistent state that can’t be kept in cookies requires a shared database among all the servers, which may make the database the limiting factor. Rumours suggest Amazon will release an SQL interface to their internal storage system which presumably is highly scalable, solving that problem. As noted, this would be great for small to medium web sites. They can mostly run on a single server, but if they ever see a giant burst of traffic, for example by being linked to from a highly popular site, they can in minutes bring up extra servers to share the load. I’ve suggested this approach for the Battlestar Galactica Wiki I’ve been using — normally their load is modest, but while the show is on, each week, predictably, they get such a huge load of traffic when the show actually airs that they have to lock the wiki down. They have tried to solve this the old fashioned way — buying bigger servers — but that’s a waste when they really just need one day a week, 22 weeks a year, of high capacity. However, I digress. What I really want to talk about is using such systems to get access to all sorts of platforms. As I’ve noted before, linux is a huge mishmash of platforms. There are many revisions of Ubuntu, Fedora, SuSE, Debian, Gentoo and many others out there. Not just the current release, but all the past releases, in both stable, testing and unstable branches. On top of that there are many versions of the BSD variants. read more » Submitted by brad on Sun, 2007-03-04 19:50. Most of us, when we travel, put appointments we will have while on the road into our calendars. And we usually enter them in local time. ie. if I have a 1pm appointment in New York, I set it for 1pm not 10am in my Pacific home time zone. While some calendar programs let you specify the time zone for an event, most people don't, and many people also don't change the time zone when they cross a border, at least not right away. (I presume that some cell phone PDAs pick up the new time from the cell network and import it into the PDA, if the network provides that.) Many PDAs don't really even let you set the time zone, just the time. Here's an idea that's simple for the user. Most people put their flights into their calendars. In fact, most of the airline web sites now let you download your flight details right into your calendar. Those flight details include flight times and the airport codes. So the calendar software should notice the flight, look up the destination airport code, and trigger a time zone change during the flight. This would also let the flight duration look correct in the calendar view window, though it would mean some "days" would be longer than others, and hours would repeat or be missing in the display. You could also manually enter magic entries like "TZ to PST" or similar which the calendar could understand as a command to change the zone at that time. Of course, I could go on many long rants about the things lacking from current calendar software, and perhaps at some point I will, but this one struck me as interesting because, in the downloaded case, the UI for the user is close to invisible, and I always like that. It becomes important when we start importing our "presence" from our calendar, or get alerts from our devices about events, we don't want these things to trigger in the wrong time zone.
<urn:uuid:b5ec4d5b-457e-4138-8f6f-1ec7393313b6>
CC-MAIN-2013-20
http://ideas.4brad.com/archives/cat_technology.html?page=1
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00008-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959134
12,523
2.890625
3