Dataset columns: text (string, 185 to 4.03k characters), id (string, 47 characters), token_count (int64, 410 to 512), preceding_token_count (int64, 0 to 23)
Calcium helps to reduce risk of osteoporosis, high blood pressure as well as colon and rectal cancers. Treated water devoid of calcium and natural minerals acts as an active absorber. It will increase body acidity, the risk of osteoarthritis, osteoporosis, hypothyroidism, coronary artery disease, high blood pressure, premature aging and cardiovascular disease. Hard vs. Soft Water The hardness of water relates to the amount of calcium, magnesium and sometimes iron in the water. The more minerals present, the harder the water. Soft water may contain sodium and other minerals or chemicals; however, it contains very little calcium, magnesium or iron. Many people prefer soft water because it makes soap lather better, gets clothes cleaner and leaves less of a ring around the tub. Some municipalities and individuals remove calcium and magnesium, both essential nutrients, and add sodium in an ion-exchange process to soften their water. The harder the water, the more sodium that must be added in exchange for calcium and magnesium ions to soften the water. This process has drawbacks from a nutritional standpoint. First, soft water is more likely to dissolve certain metals from pipes than hard water. These metals include cadmium and lead, which are potentially toxic. Second, soft water may be a significant source of sodium for those who need to restrict their sodium intake for health reasons. Approximately 75 milligrams of sodium is added to each quart of water per 10 g.p.g. (grains per gallon) hardness. Finally, there is epidemiological evidence to suggest a lower incidence of heart disease in communities with hard water. The Environmental Protection Agency (EPA) doesn't set a mandatory upper limit for sodium in water, but suggests an upper limit of 20 milligrams per liter (quart) to protect individuals on sodium-restricted diets. If you use a water softener, two ways to avoid excess sodium in drinking water are: 1) use low sodium bottled water, and 2) install a separate faucet in the kitchen with a purifier for un
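The passage gives a rule of thumb: roughly 75 mg of sodium per quart of softened water for every 10 grains per gallon (g.p.g.) of hardness exchanged. A minimal sketch, assuming that rule scales linearly with hardness, estimates the added sodium and compares it with the EPA's suggested 20 mg per liter guideline quoted above; the function name and the example hardness value are illustrative, not from the source.

```python
def softener_sodium_mg_per_quart(hardness_gpg, mg_per_10_gpg=75.0):
    """Estimate sodium (mg) added per quart of softened water.

    Uses the rule of thumb quoted in the text: ~75 mg sodium per quart
    for every 10 g.p.g. of hardness exchanged. Assumes linear scaling.
    """
    return hardness_gpg / 10.0 * mg_per_10_gpg

# Example: fairly hard water at 15 g.p.g. (illustrative value only).
hardness = 15.0
added_na = softener_sodium_mg_per_quart(hardness)
# A quart is roughly a liter, so compare against the EPA's suggested
# 20 mg per liter guideline for sodium-restricted diets.
print(f"~{added_na:.0f} mg sodium per quart at {hardness} g.p.g. "
      f"(EPA suggested limit: 20 mg per quart/liter)")
```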
<urn:uuid:379209bf-1a87-4af4-a16b-f5d56e4d5039>
512
0
CRISPR, a new genome editing tool, could transform the field of biology—and a recent study on genetically-engineered human embryos has converted this promise into media hype. But scientists have been tinkering with genomes for decades. Why is CRISPR suddenly such a big deal? The short answer is that CRISPR allows scientists to edit genomes with unprecedented precision, efficiency, and flexibility. The past few years have seen a flurry of “firsts” with CRISPR, from creating monkeys with targeted mutations to preventing HIV infection in human cells. Earlier this month, Chinese scientists announced they applied the technique to nonviable human embryos, hinting at CRISPR’s potential to cure any genetic disease. And yes, it might even lead to designer babies. (Though, as the results of that study show, it’s still far from ready for the doctor’s office.) In short, CRISPR is far better than older techniques for gene splicing and editing. And you know what? Scientists didn’t invent it. CRISPR/Cas9 comes from strep bacteria... CRISPR is actually a naturally-occurring, ancient defense mechanism found in a wide range of bacteria. As far back as the 1980s, scientists observed a strange pattern in some bacterial genomes. One DNA sequence would be repeated over and over again, with unique sequences in between the repeats. They called this odd configuration “clustered regularly interspaced short palindromic repeats,” or CRISPR. This was all puzzling until scientists realized the unique sequences in between the repeats matched the DNA of viruses—specifically viruses that prey on bacteria. It turns out CRISPR is one part of the bacteria’s immune system, which keeps bits of dangerous viruses around so it can recognize and defend against those viruses next time they attack. The second part of the defense mechanism is a set of enzymes called Cas (CRISPR-associated proteins), which can precisely snip DNA and slice the hell out of invading viruses. Conveniently, the genes that encode for Cas are always sitting somewhere near the CRISPR sequences. Here is how they work together to disable viruses, as Carl Zimmer elegantly explains in Quanta: As the CRISPR region
<urn:uuid:4720e95c-0fa2-4c59-9f2e-130429e87432>
512
0
CRISPR, a new genome editing tool, could transform the field of biology—and [...] fills with virus DNA, it becomes a molecular most-wanted gallery, representing the enemies the microbe has encountered. The microbe can then use this viral DNA to turn Cas enzymes into precision-guided weapons. The microbe copies the genetic material in each spacer into an RNA molecule. Cas enzymes then take up one of the RNA molecules and cradle it. Together, the viral RNA and the Cas enzymes drift through the cell. If they encounter genetic material from a virus that matches the CRISPR RNA, the RNA latches on tightly. The Cas enzymes then chop the DNA in two, preventing the virus from replicating. There are a number of Cas enzymes, but the best known is called Cas9. It comes from Streptococcus pyogenes, better known as the bacteria that causes strep throat. Together, they form the CRISPR/Cas9 system, though it’s often shortened to just CRISPR. Top image: Screenshot from this MIT video explaining CRISPR It is a more precise way of editing the genome... At this point, you can start connecting the dots: Cas9 is an enzyme that snips DNA, and CRISPR is a collection of DNA sequences that tells Cas9 exactly where to snip. All biologists have to do is feed Cas9 the right sequence, called a guide RNA, and boom, you can cut and paste bits of DNA sequence into the genome wherever you want. DNA is a very long string of four different bases: A, T, C, and G. Other enzymes used in molecular biology might make a cut every time they see, say, a TCGA sequence, going wild and dicing up the entire genome. The CRISPR/Cas9 system doesn’t do that. Cas9 can recognize a sequence about 20 bases long, so it can be better tailored to a specific gene. All you have to do is design a target sequence using an online tool and order the guide RNA to match. It takes no longer than a few days for the guide sequence to arrive by mail. You can even repair a faulty gene by cutting it out with CRISPR/C
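The paragraph says the first step is simply to pick a roughly 20-base target sequence and order a matching guide RNA. A minimal sketch of that step is shown below: scanning a DNA string for 20-nucleotide candidate targets. It assumes the common requirement that a Streptococcus pyogenes Cas9 target be immediately followed by an "NGG" PAM motif, a detail not stated in the passage; the example sequence is invented for illustration and the reverse strand is ignored to keep the sketch short.

```python
def find_candidate_targets(dna, target_len=20):
    """Return (position, target, pam) tuples for candidate Cas9 sites.

    Assumes the SpCas9 convention that the 20-nt protospacer must be
    immediately followed by an 'NGG' PAM on the given strand.
    """
    dna = dna.upper()
    hits = []
    for i in range(len(dna) - target_len - 2):
        target = dna[i:i + target_len]
        pam = dna[i + target_len:i + target_len + 3]
        if pam[1:] == "GG":          # 'NGG' PAM: any base, then GG
            hits.append((i, target, pam))
    return hits

# Illustrative (made-up) sequence, not a real gene.
sequence = "ATGCTTACCGGATACGTTAGCCTAGGGCTTAACCGGTTACGATCGATCGTAGCTAGCGG"
for pos, target, pam in find_candidate_targets(sequence):
    print(f"pos {pos:2d}  target {target}  PAM {pam}")
```

In practice the online design tools the passage mentions also score candidates for off-target matches elsewhere in the genome; this sketch only finds the sites.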
<urn:uuid:4720e95c-0fa2-4c59-9f2e-130429e87432>
512
23
CRISPR, a new genome editing tool, could transform the field of biology—and [...] as9 and injecting a normal copy of it into a cell. Occasionally, though, the enzyme still cuts in the wrong place, which is one of the stumbling blocks for wider use, especially in the clinic. ...and way more efficient... Mice whose genes have been altered or “knocked out” (disabled) are the workhorses for biomedical research. It can take over a year to establish new lines of genetically-altered mice with traditional techniques. But it takes just a few months with CRISPR/Cas9, sparing the lives of many mice and saving time. Traditionally, a knockout mouse is made using embryonic stem (ES) cells. Researchers inject the altered DNA sequence into mouse embryos, and hope it is incorporated through a rare process called homologous recombination. Some of the first-generation mice will be chimeras, their bodies a mixture of cells with and without the mutated sequence. Only some of the chimeras will have reproductive organs that make sperm with the mutated sequence. Researchers breed those chimeras with normal mice to get a second generation, and hope that some of them are heterozygous, aka carrying one normal copy of the gene and one mutated copy of the gene in every cell. If you breed two of those heterozygous mice together, you’ll be lucky to get a third-generation mouse with two copies of the mutant gene. So it takes at least three generations of mice to get your experimental mutant for research. Here it is summarized in a timeline: But here’s how a knockout mouse is made with CRISPR. Researchers inject the CRISPR/Cas9 sequences into mouse embryos. The system edits both copies of a gene at the same time, and you get the mouse in one generation. With CRISPR/Cas9, you can also alter, say, five genes at once, whereas before you would have had to go through that same laborious, multi-generational process five times. CRISPR is also more efficient than two other genome engineering techniques called zinc finger nuclease (ZFN) and transcription activator-like effector nucleases (T
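The traditional-breeding paragraph rests on familiar Mendelian arithmetic: crossing two heterozygous mice gives each pup a 1-in-4 chance of carrying two mutant copies. A short sketch under that standard assumption (independent inheritance, no viability effects, both of which are simplifications) shows why litters often need to be large before a homozygous knockout turns up, which is exactly the bottleneck CRISPR avoids by editing both copies in one generation.

```python
# Probability that at least one pup in a litter is homozygous mutant,
# assuming a het x het cross (1/4 chance per pup, pups independent).
p_homozygous = 0.25

for litter_size in (4, 6, 8, 10):
    p_at_least_one = 1 - (1 - p_homozygous) ** litter_size
    print(f"litter of {litter_size:2d}: "
          f"{p_at_least_one:.0%} chance of >=1 homozygous pup")
```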
<urn:uuid:4720e95c-0fa2-4c59-9f2e-130429e87432>
512
23
CRISPR, a new genome editing tool, could transform the field of biology—and [...] ALENs). ZFN and TALENs can recognize longer DNA sequences and they theoretically have better specificity than CRISPR/Cas9, but they also have a major downside. Scientists have to create a custom-designed ZFN or TALEN protein each time, and they often have to create several variations before finding one that works. It’s far easier to create an RNA guide sequence for CRISPR/Cas9, and it’s far more likely to work. ...and can be used in any organism Most science experiments are done on a limited set of model organisms: mice, rats, zebrafish, fruit flies, and a nematode called C. elegans. That’s mostly because these are the organisms scientists have studied most closely and know how to manipulate genetically. But with CRISPR/Cas9, it’s theoretically possible to modify the genomes of any animal under the sun. That includes humans. CRISPR could one day hold the cure to any number of genetic diseases, but of course human genetic manipulation is ethically fraught and still far from becoming routine. Closer to reality are other genetically modified creatures—and not just the ones in labs. CRISPR could become a major force in ecology and conservation, especially when paired with other molecular biology tools. It could, for example, be used to introduce genes that slowly kill off the mosquitos spreading malaria. Or genes that put the brakes on invasive species like weeds. It could be the next great leap in conserving or enhancing our environment—opening up a whole new box of risks and rewards. With the recent human embryo editing news, CRISPR has been getting a lot of coverage as a future medical treatment. But focusing on medicine alone is narrow-minded. Precise genome engineering has the potential to alter not just us, but the entire world and all its ecosystems. This post has been updated to clarify that the number of basepairs in guide RNA for CRISPR/Cas9 is different from the number of basepairs it recognizes in a target sequence. Contact the author at email@example.com.
<urn:uuid:4720e95c-0fa2-4c59-9f2e-130429e87432>
505
23
- the use of words to convey a meaning that is the opposite of its literal meaning: the irony of her reply, “How nice!” when I said I had to work all weekend. - a technique of indicating, as through character or plot development, an intention or attitude opposite to that which is actually or ostensibly stated. - (especially in contemporary writing) a manner of organizing a work so as to give full expression to contradictory or complementary impulses, attitudes, etc., especially as a means of indicating detachment from a subject, theme, or emotion. - Socratic irony. - dramatic irony. - an outcome of events contrary to what was, or might have been, expected. - the incongruity of this. - an objectively sardonic style of speech or writing. - an objectively or humorously sardonic utterance, disposition, quality, etc. Origin of irony1. - consisting of, containing, or resembling the metal iron: an irony color. Origin of irony2. Examples from the Web for irony: It may be fun and it may get them paid, until oversaturation ruins our sense for irony and destroys the market for it. (Trolls and Martyrdom: Je Ne Suis Pas Charlie, January 9, 2015) The irony did not escape one local, Laith Hathim, as he stood and watched the newly minted refugees make their way into Mosul. (Has the Kurdish Victory at Sinjar Turned the Tide of ISIS War?, December 27, 2014) The irony has thinned with the economy, perhaps: Who can really afford just to pretend to DIY today? (Glenn Beck Is Now Selling Hipster Clothes. Really., Ana Marie Cox, December 20, 2014) Lacking any sense of irony, Eldridge made campaign-finance reform a signature plank. (The Rise and Fall of Chris Hughes and Sean Eldridge, America’s Worst Gay Power Couple, December 9, 2014) The irony is that communities are protesting stereotyping—as cops respond in stereotypical
<urn:uuid:a06e9bca-d791-4b70-b675-88e9889eb487>
512
0
Neuroscience is important for treating patients with mental disorders, but it may also be important for the learning process of students. The volume of grey matter increases in specific parts of our brain when something is practiced to the point of successful work performance. This can be observed from a clinical perspective in neuroscience by using advanced technology. This is an advance for neuroscience, but it has done nothing in the school system. Instance guided object learning (IGOL) is very powerful in knowledge transfer because an instance has a strong projection in the posterior cingulate cortex to reflect the reactance of learning emotions. Teaching is the system of knowledge transfer in education, a system about 5,000 years old. We know that the learning mechanism of the brain is important for knowledge transfer, and neuroscience can provide a smart system of knowledge transfer for our children in the classroom. - Default mode network (DMN) of brain circuits - Attention management system in the working brain - Permutations of fear factor in the emotional processing of perception - Fear of homework and teachers Fear is the strongest factor of human emotions, and other emotions such as love, hate, anger, pleasure, reward, failure, sadness and anxiety are its permutations. Disorders of the posterior cingulate cortex may act as attention eaters, in which the learning focus of the brainpage-making process is not maintained for the convergence of knowledge transfer. Sensory areas and roots are found in the posterior part of the central nervous system, while motor areas and roots are localized in the anterior region of neural structures. This is the fundamental anatomy of brain circuits. Emotion is modulated to maintain and fix attention in a particular pathway of a brain circuit for the performance of the working mechanism. Depression is a disorder of fear factors caused in the cingulate cortex of the human brain. What is the basic function of emotion? Fear is everywhere in our surroundings. Some people achieve great things in spite of their fear while other people are paralyzed into inaction by those very same fears. We establish a relationship with every one of those fears and anxieties. Depression, anger and anxiety come from a sense of disconnection. Anxiety makes us afraid of what we are doing and thinking. It may happen that we could lose something, miss an opportunity or be inadequate. The posterior cingulate cortex forms a central node in the default mode network (DMN) of the brain. The DMN is crucial to the
<urn:uuid:3c00a839-2342-4fda-8a0f-0a3b31cb5e6a>
512
0
Neuroscience is important for treating patients with mental disorders, but it may also be important for the [...] working mechanism of the brain and plays a significant role in the learning mechanism of knowledge transfer. It has been shown to communicate with several brain networks simultaneously and is involved in various functions related to attention management systems. The posterior cingulate cortex of the brain is the generator of emotionalized virtual reality to transfer the knowledge spectrum in the learning process. The posterior cingulate cortex is the caudal part of the cingulate cortex, located posterior to the anterior cingulate cortex. This is the upper part of the limbic lobe. The cingulate cortex is made up of an area around the midline of the brain above the corpus callosum. Areas surrounding the posterior cingulate cortex include the retrosplenial cortex and the precuneus. The fear factor is the root of the emotional generator that is conducted in the posterior cingulate cortex of the brain. Along with the precuneus, the posterior cingulate cortex has been implicated as a neural substrate for awareness in numerous studies of both anesthetized and vegetative (coma) states. Imaging studies indicate a prominent role for the posterior cingulate cortex in pain and episodic memory retrieval. Increased size of the posterior ventral cingulate cortex is related to declines in working memory performance. The posterior cingulate cortex of the human brain has been strongly implicated as a key part of several intrinsic control networks. Structural and functional abnormalities in the posterior cingulate cortex result in a range of neurological and psychiatric disorders. The posterior cingulate cortex likely integrates and mediates information in the working networks of the brain. Therefore, its functional and anatomical abnormalities might be an accumulation of remote and widespread damage in the brain circuits. - Alzheimer’s disease - Autism spectrum disorder (ASD) - Attention deficit hyperactivity disorder (ADHD) - Traumatic brain injury - Anxiety disorders The default mode network is most commonly shown to be active when a person is not focused on the outside world and the brain is at wakeful rest, such as during day-dreaming and mind-wandering. But it is also active when an individual is thinking about others, thinking about themselves
<urn:uuid:3c00a839-2342-4fda-8a0f-0a3b31cb5e6a>
512
23
Neuroscience is important for treating patients with mental disorders, but it may also be important for the [...] , remembering the past and planning for the future. Knowledge was transferred from one generation to another by telling about objects, facts and events in the form of stories, poems and essays. When people watch a movie, listen to a story or read a story, their DMNs are highly correlated with each other. DMNs are not correlated if the stories are scrambled or are in a language the person does not understand. This suggests that the network is highly involved in the comprehension and subsequent memory formation of the story. The DMN has even been shown to be correlated when the same story is presented to different people in different languages, further suggesting that the DMN is truly involved in the comprehension aspect of knowledge and not the auditory or language aspect. The default mode network may be deactivated during external goal-oriented tasks such as visual attention or cognitive working memory tasks, so some researchers label this network the task-negative network. If tasks are external goal-oriented for social working memory or an autobiographical task, the DMN is positively activated with the task and correlates with other brain networks such as the network involved in executive function. The posterior cingulate cortex connects our brain and body to the space, object, time, instance and module of the external surroundings to generate proper emotions in processing, like the cinema, TV serials, music albums, news telecasts or live broadcasts. The fear factor is the default emotion of the limbic circuits, and it does not require neurotransmitters to activate the channels of emotion in work performance. It is generally considered that the emotions of fear and reward are generated in the amygdala of the brain. In fact, the posterior cingulate cortex is the generator of emotions because this is the central node of the default mode network of the brain. We know that the medial prefrontal cortex, posterior cingulate cortex and angular gyrus are the main nodes of the default mode network of the brain. An instance produces a specific emotion in the posterior cingulate cortex of the brain; it is then passed to the anterior cingulate cortex for the modulation of zeid factors. Finally, these zeid factors are projected to the amygdala to set emotional markers for the processing of knowledge transfer. Pictures: Microscope study image from Pexels and brain image showing posterior cingulate
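The claim that listeners' default mode networks are "highly correlated with each other" rests on inter-subject correlation analysis: the same region's activity time course is extracted from each listener and correlated across people. The sketch below illustrates that computation on toy data; it is not the cited studies' analysis pipeline, and the signal shapes, noise levels, and leave-one-out formulation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "DMN activity" time courses: a shared story-driven signal plus
# subject-specific noise. Shapes and noise levels are illustrative only.
n_subjects, n_timepoints = 5, 300
story_signal = np.sin(np.linspace(0, 12 * np.pi, n_timepoints))
subjects = story_signal + 0.8 * rng.standard_normal((n_subjects, n_timepoints))

# Inter-subject correlation: correlate each subject with the mean of
# the others (a common leave-one-out formulation).
iscs = []
for s in range(n_subjects):
    others = np.delete(subjects, s, axis=0).mean(axis=0)
    iscs.append(np.corrcoef(subjects[s], others)[0, 1])

print("leave-one-out ISC per subject:", np.round(iscs, 2))
# Scrambling the story for one listener removes the shared signal,
# so that listener's ISC drops toward zero.
```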
<urn:uuid:3c00a839-2342-4fda-8a0f-0a3b31cb5e6a>
512
23
Salvia hispanica (Chia) is a plant of the genus Salvia in the family Lamiaceae native to Mexico. The word chia is derived from the Aztec word chian, meaning oily. The present Mexican state of Chiapas received its name from the Nahua "chia water or river." Chia was cultivated by the Aztecs in pre-Columbian times. Jesuit chroniclers referred to chia as the third most important crop to the Aztecs behind only corn and beans, and ahead of amaranth. Tribute and taxes to the Aztec priesthood and nobility were often paid in chia seed. A great benefit of the chia seed is its durability. Unlike flax-seeds, chia can be stored for long periods without becoming rancid, and does not require grinding. Chia also contains higher levels of omega 3 fatty acids than flax-seeds, yielding 25-30% extractable oil, mostly α-linolenic acid (ALA). Chia seeds typically contain 20% protein, 34% oil, 25% dietary fiber (mostly soluble with high molecular weight), and significant levels of antioxidants (chlorogenic and caffeic acids, myricetin, quercetin, and kaempferol flavonols). The oil from chia seeds contains a very high concentration of omega-3 fatty acid — approximately 64%. Chia seeds contain no gluten and only trace levels of sodium. Chia is also a source of antioxidants and a variety of amino acids. For all these health-related benefits, chia is in the process of application before the EU authorities to be considered a novel food. Known as the running food, its use as a high-energy endurance food has been recorded as far back as the ancient Aztecs. It was said the Aztec warriors subsisted on chia seed during the conquests. The Indians of the Southwest would eat as little as a teaspoonful when going on a 24-hour forced march. Indians running from the Colorado River to the California coast to trade turquoise for seashells would bring only the Chia seed for their nourishment. Chia is grown commercially in its native
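The composition figures in the paragraph (roughly 20% protein, 34% oil, 25% dietary fiber by weight) translate directly into grams per serving. The sketch below applies them to an assumed 28 g (about one ounce) serving; the serving size is an illustrative assumption, not a figure from the text.

```python
# Approximate chia seed composition from the text (fractions by weight).
composition = {"protein": 0.20, "oil": 0.34, "dietary fiber": 0.25}

serving_g = 28.0  # assumed ~1 oz serving, for illustration only

for nutrient, fraction in composition.items():
    grams = fraction * serving_g
    print(f"{nutrient:>13}: {grams:4.1f} g per {serving_g:.0f} g serving")
```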
<urn:uuid:28976744-5afb-48ff-907d-2f03256c0219>
512
0
Salvia hispanica (Chia) is a plant of the genus Salvia in the family [...] Mexico, and in Bolivia, Argentina, Ecuador and Guatemala. In 2008, Australia was the world's largest producer of chia. A similar species, golden chia, is used in the same way but not widely grown commercially. Salvia hispanica seed is marketed most often under its common name "Chia," but also under several trademarks, including "Sachia," "Anutra," "Chia Sage," "Salba," and "Tresalbio." Chia is an annual herb growing to 1 m tall, with opposite leaves 4 – 8 cm long and 3 – 5 cm broad. Its flowers are purple or white and are produced in numerous clusters in a spike at the end of each stem. Chia seeds are typically small ovals with a diameter of about one millimeter. They are mottled brown, gray, black and white. How to Use Chia Chia seed may be eaten raw as a dietary fiber and omega-3 supplement. Grinding chia seeds produces a meal called pinole, which can be made into porridge or cakes. Chia seeds soaked in water or fruit juice are also often consumed; the drink is known in Mexico as chia fresca. The soaked seeds are gelatinous in texture and are used in gruels, porridges and puddings. Ground chia seed is used in baked goods including breads, cakes and biscuits. Chia sprouts are used in a similar manner as alfalfa sprouts in salads, sandwiches and other dishes. If you try placing a spoonful of chia in a glass of water and leaving it for approximately 30 minutes or so, when you return the glass will appear to contain not seeds or water, but an almost solid gelatin. This gel-forming reaction is due to the soluble fiber in the chia. Researchers believe this same gel-forming phenomenon takes place in the stomach when foods containing these gummy fibers, known as mucilages, are eaten. The gel that is formed in the stomach creates a physical barrier between carbohydrates and the digestive enzymes that break them down, thus slowing the conversion of carb
<urn:uuid:28976744-5afb-48ff-907d-2f03256c0219>
512
23
Salvia hispanica (Chia) is a plant of the genus Salvia in the family [...] ohydrates into sugar. In addition to the obvious benefits for diabetics, this slowing in the conversion of carbohydrates into sugar offers the ability to create endurance. Carbohydrates are the fuel for energy in our bodies. Prolonging their conversion into sugar stabilises metabolic changes, diminishing the surges of highs and lows and creating a longer duration in their fueling effects. One of the exceptional qualities of the Chia seed is its hydrophilic property: it can absorb more than 12 times its weight in water. Its ability to hold on to water helps prolong hydration. Fluids and electrolytes provide the environment that supports the life of all the body’s cells. Their concentration and composition are regulated to remain as constant as possible. With chia seeds, you retain moisture and regulate more efficiently the body’s absorption of nutrients and body fluids. Because there is a greater efficiency in the utilization of body fluids, the electrolyte balance is maintained. Fluid and electrolyte imbalances occur when large amounts of fluids are lost as a result of vomiting, diarrhea, high fever, or, more commonly, sweating. The loss of extracellular fluid occurs in these conditions. Intracellular fluid then shifts out of cells to compensate, causing abnormal distribution of electrolytes across cell membranes and resulting in cellular malfunction. Retaining and efficiently utilising body fluids maintains the integrity of extracellular fluids, protecting intracellular fluid balance. The result is normal electrolyte dispersion across cell membranes (electrolyte balance) and maintained fluid balance, resulting in normal cellular function. Chia seeds are the definitive hydrophilic colloid for the 21st-century diet. Hydrophilic colloids (watery, gelatinous, glue-like substances) form the underlying elements of all living cells. They possess the property of readily taking up and giving off the substances essential to cell life. The precipitation of the hydrophilic colloids causes cell death. The food we eat, in the raw state, consists largely of hydrophilic colloids. When cooked
<urn:uuid:28976744-5afb-48ff-907d-2f03256c0219>
512
23
Salvia hispanica (Chia) is a plant of the genus Salvia in the family [...] on the other hand, its colloidal integrity precipitates. This change in the colloidal state alters the hydration capacity of our foods so as to interfere with their ability to absorb digestive juices. If we were to eat a raw diet we wouldn’t need to add any hydrophilic colloid to our diet. Uncooked foods contain sufficient hydrophilic colloid to keep the gastric mucosa in the proper condition. But even raw foods must first be partially broken down by the digestive juices, beginning in the mouth and continuing through the upper tract, to allow the gelatinous reaction to take place. Because of this upper-tract digestive process, those who suffer from slow digestion, gas formation, relaxed cardia, and heartburn in which the burning is due to organic acids instead of an excess of the normal hydrochloric acid (conditions which frequently accompany chronic inflammatory disease affecting such organs as the heart, lungs, gall bladder and appendix) are usually restricted from eating raw foods. A hydrophilic colloid incorporated with these foods may be used either in connection with the patient’s regular food or with whatever diet the physician feels is best suited for his patient. The patient with gastric atony or nervous indigestion who complains of heartburn and/or vomiting four to five hours after eating is often helped. There is a lessening of the emptying time of the stomach and an improvement in gastric tone. Chia seed may be used in conjunction with almost any diet your doctor or nutritionist feels is necessary for your condition. The Chia’s hydrophilic colloidal properties aid the digestion of any foods contributing to the patient’s suffering as a result of a sour stomach. Even if you have sensitivity to certain foods, they may be tolerated with slight discomfort or none at all if a hydrophilic colloid is made a part of your diet. The positive effects on digestion in the upper portion of the gastrointestinal tract mean that patients who must puree their foods may find benefits from hydrophilic colloids, which may eliminate the necessity for pureeing. Even raw vegetables, green salads and fruits, which are largely restricted, may often be given
<urn:uuid:28976744-5afb-48ff-907d-2f03256c0219>
512
23
Salvia hispanica (Chia) is a plant of the genus Salvia in the family [...] to these patients with little or no discomfort after a short time. There are several hydrophilic foods available that offer these natural benefits. Cactus juice, beet juice, agar, the edible seaweeds, and many proprietary preparations, which include the silica gels and mucilaginous substances of vegetable origin, are among the colloids that prove effective. Each one of the above-mentioned substances, however, has one or more drawbacks. They are either too expensive, potentially toxic in their side effects, bad tasting, not readily available, insufficient in hydration capability, or indigestible. Chia seed, a muscle and tissue builder and an energizer of endurance with extensive hydration properties, possesses none of the above disadvantages, and because of its physiochemical properties, supports effective treatment of immediate problems of digestion. Exactly why this should be true may be puzzling at first. However, if we consider the effect of unusual irritation upon the nerves of the gastrointestinal canal, it is reasonable to think that a less violent and more balanced digestion might quiet the activity of the otherwise hyperactive gut. Inasmuch as the same foods which formerly produced irritation may frequently be continued without harm when hydrophilic colloids are used, the relief of nerve irritation seems to offer a logical explanation. The change in the lower gastrointestinal tract is due to the effect of the hydrophilic colloid and to a more complete digestion taking place along the entire tract owing to physiochemical alterations. Both factors are important, as there is undoubtedly a better assimilation of food that supports enhanced nutritional absorption while significantly extending necessary hydration as well as encouraging proper elimination. As a source of protein, the Chia, after ingestion, is digested and absorbed very easily. This results in rapid transport to the tissue and utilization by the cells. This efficient assimilation makes the Chia very effective when rapid development of tissue takes place, primarily during the growth periods of children and adolescents. It is also valuable for the growth and regeneration of tissue during pregnancy and lactation, and this would also include regeneration of muscle tissue for conditioning, athletes, weight lifters
<urn:uuid:28976744-5afb-48ff-907d-2f03256c0219>
512
23
Salvia hispanica (Chia) is a plant of the genus Salvia in the family [...] , etc. Another unique quality of the Chia seed is its high oil content; it is the richest vegetable source of the essential omega-3 fatty acid. It has approximately three to ten times the oil concentration of most grains and one and a half to two times the protein concentration of other grains. These oils, unsaturated fatty acids, are the essential oils your body needs to help emulsify and absorb the fat-soluble vitamins A, D, E, and K. Chia seeds are rich in the unsaturated fatty acid linoleic acid, which the body cannot manufacture. When rich amounts of linoleic acid are supplied to the body through the diet, linolenic and arachidonic acids can be synthesized from it. Unsaturated fatty acids are important for respiration of vital organs and make it easier for oxygen to be transported by the blood stream to all cells, tissues, and organs. They also help maintain resilience and lubrication of all cells and combine with protein and cholesterol to form living membranes that hold the body cells together. Unsaturated fatty acids are essential for normal glandular activity, especially of the adrenal glands and the thyroid gland. They nourish the skin cells and are essential for healthy mucous membranes and nerves. The unsaturated fatty acids function in the body by cooperating with vitamin D in making calcium available to the tissues, assisting in the assimilation of phosphorus, and stimulating the conversion of carotene into vitamin A. Fatty acids are related to normal functioning of the reproductive system. Chia seeds contain beneficial long-chain triglycerides (LCT) in the right proportion to reduce cholesterol on arterial walls. The Chia seed is also a rich source of calcium, as it contains the important mineral boron, which acts as a catalyst for the absorption and utilization of calcium by the body. Chia, as an ingredient, is a dieter's dream food. There are limitless ways to incorporate the Chia seed into your diet. Chia must be
<urn:uuid:28976744-5afb-48ff-907d-2f03256c0219>
512
23
Salvia hispanica (Chia) is a plant of the genus Salvia in the family [...] prepared with pure water before using it in recipes. The seed will absorb 9 times its weight in water in less than 10 minutes and is very simple to prepare. Food Extender/Calorie Displacer: The optimum ratio of water to seed, for most recipes, is 9 parts water to 1 part seed. One pound of seed will make 10 pounds of Chia gel. This is the most unique structural quality of the Chia seed. The seed's hydrophilic (water-absorbing) saturated cells hold the water, so when it is mixed with foods, it displaces calories and fat without diluting flavor. In fact, I have found that because Chia gel displaces rather than dilutes, it creates more surface area and can actually enhance the flavor rather than dilute it. Chia gel also works as a fat replacer for many recipes. Making Chia Gel (9 to 1 ratio): Put water in a sealable container and slowly pour the seed into the water while whisking briskly. This process will avoid any clumping of the seed. Wait a couple of minutes, whisk again and let stand for 5 to 10 minutes. Whisk again before using or storing in the refrigerator (the gel will keep up to 2 weeks). You can add this mix to jams, jellies, hot or cold cereals, yogurts, mustard, catsup, tartar sauce, BBQ sauce, etc. Add the gel, between 50% and 75% by volume, to any of the non-baked foods mentioned, mix well and taste. You will notice a very smooth texture with the integrity of the flavour intact. In addition to adding up to 50% to 75% more volume to the foods used, you have displaced calories and fat by incorporating an ingredient that is 90% water. Use as a fat replacer, for energy and endurance, or for added great taste, by substituting the oil in your breads with Chia gel. Top your favorite bread dough before baking with Chia gel (for topping on baked goods, breads, cookies, piecrust, etc., reduce the water ratio to 8 parts water to 1 part Chia
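The gel recipe reduces to simple ratios: 9 parts water to 1 part seed for general use (so 1 lb of seed yields about 10 lb of gel), and 8 parts to 1 for toppings on baked goods. A small sketch, using only the figures quoted in the passage, computes how much water to measure out and how much gel to expect for a given amount of seed; the function name is illustrative.

```python
def chia_gel(seed_weight, ratio=9.0):
    """Return (water_weight, gel_weight) for a given seed weight.

    ratio is parts water per part seed: 9 for general use,
    8 for toppings on baked goods, per the passage.
    """
    water = seed_weight * ratio
    return water, seed_weight + water

for seed_lb, ratio in [(1.0, 9.0), (0.5, 9.0), (1.0, 8.0)]:
    water, gel = chia_gel(seed_lb, ratio)
    print(f"{seed_lb} lb seed at {ratio:.0f}:1 -> "
          f"{water:.1f} lb water, {gel:.1f} lb gel")
```

The 9:1 case also shows where the "90% water" figure in the passage comes from: 9 lb of water in 10 lb of gel.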
<urn:uuid:28976744-5afb-48ff-907d-2f03256c0219>
512
23
USGS Map Reveals Geologic History of Mauna Loa Volcano’s NE Flank. July 18, 2017, 8:39 AM HST (Updated July 18, 2017, 9:10 AM) The new “Geologic map of the northeast flank of Mauna Loa volcano, Island of Hawaiʻi,” the culmination of many years of work by Hawaiian Volcano Observatory (HVO) geologists, was recently published by the U.S. Geological Survey. The work was spearheaded by John P. Lockwood (affectionately known as “Mr. Mauna Loa”), who is now retired from USGS and HVO, and Frank Trusdell, HVO’s current Mauna Loa Project geologist. For the northeast region of Mauna Loa, this updated map supersedes the “Geologic Map of the Island of Hawai‘i” (1996) and the “Geologic Map of the State of Hawai‘i” (2007). Encompassing 440 square miles of the northeast flank of Mauna Loa, the new map comprises an area equivalent to the islands of Moloka‘i and Lāna‘i combined. The mapped area extends from an elevation of 10,880 feet to sea level, from Pu‘u‘ula‘ula (“Red Hill”) on the southwest to Hilo on the northeast. Mauna Loa, the largest active volcano on Earth, is known to have erupted 33 times since written descriptions became available in 1832. Some eruptions were preceded by only brief seismic unrest, while others followed several months to a year of increased seismicity. Since 1832, seven eruptions occurred within the area covered by the map: 1852, 1855–56, 1880–81, 1899, 1935–36, 1942 and 1984. The Northeast Rift Zone (NERZ) of Mauna Loa is about 25 miles long and 1.2 to 2.5 miles wide.
<urn:uuid:f8053981-ed07-4ac6-8506-d2cc0763d544>
512
0
USGS Map Reveals Geologic History of Mauna Loa Volcano’s [...] It narrows at Moku‘āweoweo, the volcano’s summit caldera, but becomes diffuse (3.4–4.3 miles wide) down rift near Pu‘umaka‘ala Cone, about 7.4 miles west of Mountain View. The rift zone is marked by low spatter ramparts and spatter cones as high as 197 feet. Eruptive fissures and ground cracks cut volcanic deposits and flows in and near the crest of the rift zone. Lava typically flows from the NERZ to the north, east, or south, depending on vent location relative to the rift crest. For instance, during the 1880-81 eruption of Mauna Loa, flows initially traveled south towards Kīlauea, but later, northeast towards Hilo. Although most of the NERZ source vents are more than 19 miles from Hilo, one branch of the 1880-81 flow nearly reached Hilo Bay. In fact, Hilo is built entirely on lava flows erupted from the NERZ, most of them older than 1852. The map shows the distribution of 105 eruptive units (flows)—separated into 15 age groups ranging from more than 30,000 years before present to 1984 CE—as well as the relations of volcanic and surficial sedimentary deposits. The color scheme adopted for the map is based on the age of the volcanic deposits. Warm colors (red, pink, and orange) represent deposits from recent epochs of time, while cool colors (blue and purple) represent older deposits. From the geologic record, we can deduce several facts about the geologic history of the NERZ. For example, in the past 4,000 years, the middle to uppermost sections of the rift zone were more active than the lower section, perhaps due to buttressing (compression) of the lower northeast rift zone by the adjacent Mauna Kea and Kīlauea volcanoes. Other interesting tidbits glean
<urn:uuid:f8053981-ed07-4ac6-8506-d2cc0763d544>
512
23
In today’s world, students have access to laptops, tablets, smart phones, mp3 players, video recorders and so on, allowing them to have a different learning environment. So how can you use these tools and technologies to your advantage? The answer is—Blended Learning! Blended learning, as its name suggests, is a new era of learning that blends the traditional classroom with modern digital technology. It is a formal education program in which a student learns at least in part through delivery of content and instruction via digital and online media, with some element of student control over time, place, path, or pace. Blended learning designates the range of possibilities presented by combining Internet and digital media with established classroom forms that require the physical co-presence of teachers and students. Blended learning is now almost two decades old, having been imported into K–12 education in the late 1990s from corporate education, business training firms and the post-secondary education sector. In the world of eLearning, blended learning refers to the complementary use of eLearning in the standard education model, due to the benefits it offers on a broad scale. Tridat has the capability and resources to bring blended learning solutions—part e-based, part classroom based—under one roof! Not only this, we have developed specific self-help guidebooks that keep the learning active and relevant for a long time. You can now choose e-modules for pre-learning and post-learning efficacy, along with off-the-shelf or custom designed FLPs. We assure you this not only creates better learning and retention, but is also cost effective. Apart from blended solutions, we at Trident also serve the learning requirements of your employees through their cell phones, tablets and other devices. The entire idea of mobile learning is to provide content on the go. We don't believe in simply providing e-learning on a mobile device; rather, we propose mobile learning as an integral part of blended learning. Right from quick animations to videos, podcasts, assessments and much more, our team has the expertise to couple creativity and technology to produce the best possible mobile learning courses in the market.
<urn:uuid:a50a0fb7-de98-4e59-b90c-b848d55fbb5f>
466
0
THE LARGEST KNOWN PLANETARY NEBULA ON THE SKY The vast majority of Planetary Nebulae in our own Galaxy have been identified via wide-field narrow-band Hα surveys or through wide-field low-resolution slitless spectroscopic surveys, with both techniques attempting to isolate objects showing the very high equivalent width emission lines that are characteristic of PN. Examining the results of an automated search of the Sloan Digital Sky Survey (SDSS) spectroscopic database for emission lines from putative high-redshift sources, one particular galaxy showed an unambiguous emission line detection with a somewhat weaker feature to the blue. The emission line pair was immediately identifiable as emission from [OIII] 4959, 5007. This was not an entirely unexpected occurrence, but the unusual feature of the detection was that the wavelength of the detection placed the emission at essentially zero radial velocity. Querying the output of the emission line search for similar detections produced more spectra showing a similar signature. All of the objects possessing [OIII] emission occurred in an approximately circular region with a diameter of ~1.5°, with not a single detection anywhere else on the sky. Investigation of SDSS spectra of stars, quasars and even sky fibres revealed further detections, all concentrated in the same region of sky. A series of checks fairly rapidly eliminated the majority of instrumental artifacts or transient phenomena as the cause of the emission. Combining spectra beyond the boundaries of the region where [OIII] emission was detected produced clear detections of [OIII] emission extending over a region more than 2° in diameter. A smaller number of individual spectra also showed the presence of emission from Hα and [NII] 6548, 6583. The spatial distribution of the individual emission line detections revealed clear trends, and composite spectra, made up from objects contiguous on the sky, confirmed the trends and even allowed the detection of [SII] 6718, 6732. Narrowband imaging of the central part of the region was carried out using the WFC. The results were unambiguous, with excellent agreement between
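The detection hinged on a simple check: the [OIII] 4959, 5007 pair appeared at essentially its rest wavelength, i.e. at roughly zero radial velocity, rather than at the high redshift the search was targeting. The sketch below illustrates that bookkeeping, converting an observed line wavelength into a redshift and an equivalent (non-relativistic) radial velocity; the observed wavelengths are invented for illustration and this is not the survey's actual pipeline.

```python
C_KM_S = 299_792.458          # speed of light in km/s
REST_OIII = (4958.9, 5006.8)  # [OIII] doublet rest wavelengths, Angstroms

def redshift_and_velocity(observed, rest):
    """Redshift z = (obs - rest) / rest and the equivalent radial velocity."""
    z = (observed - rest) / rest
    return z, z * C_KM_S

# Hypothetical observed wavelengths close to the rest values.
for obs, rest in zip((4959.4, 5007.3), REST_OIII):
    z, v = redshift_and_velocity(obs, rest)
    print(f"rest {rest:.1f} A, observed {obs:.1f} A -> "
          f"z = {z:.5f} (~{v:.0f} km/s)")
```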
<urn:uuid:adfdc01a-f46b-4558-ba3e-08ef9c60850f>
512
0
Although there are many ways you can begin an AP Studio Art program, you can make things easier for yourself and make sure your program runs as smoothly as it can. Read the following to get a grip on the basics and take advantage of training and resources. Let's get started. Understand the Basics The first step is to understand the basics of the program. AP Studio Art offers three "portfolios." Each of the following portfolios has its own focus and requirements: - Drawing: Students address drawing issues and mark-making concerns. They can submit not only work in traditional drawing media—such as pencils, ink, and pastels—but also painting, and printmaking, in both analog and digital formats, as long as mark-making, line quality, and surface manipulation are predominant. - 2-D Design: Students focus on the elements and principles of design. This portfolio can include photography and digital work. It can also contain drawings, paintings, prints, and any other two-dimensional art form that focuses on composition. - 3-D Design: Students explore form, depth, and space—that is to say, the issues of working in three dimensions, whether actual or virtual. Each portfolio has three equally weighted sections: - Selected Works (Quality) promotes the development of a sense of accomplishment. - Sustained Investigation (Concentration) shows the student's in-depth sustained study of an idea in art that is personally significant. - Range of Approaches (Breadth) shows a range of technical and/or conceptual approaches. A program may offer one of these three portfolios, two, or all three. Students can also generate the required work over two years. At the end of the school year, students' portfolios can be evaluated and graded by the College Board. Don't work alone. Collaborate with your AP coordinator, school administrator, and other AP teachers at your school and online via the AP Studio Art Teacher Community. Work with your middle school art educator colleagues to build the visual art program. Enroll in Course Audit The next step is to complete the AP Course Audit. You must participate in AP Course Audit in order to label a course "AP." If this is the first AP Studio Art program at your school, you should submit a specific form and course
<urn:uuid:7e712a14-403b-473f-b1c2-9a765eb6a83a>
512
0
To show the importance of time, Allah swears by time in the Qur’an, for example in Surah ‘Asr [wal ‘asr...] and many others. Time is like a vehicle without reverse – you can never turn back time. Time is limited by hours, days, weeks, months and years. Ibn ‘Abbas narrates that the Prophet [pbuh] said: “There are two blessings which many people do not make the most of: good health and free time.” [al-Bukhari] 1. Beating procrastination 2. Activity log 3. Action plans 4. Prioritized to-do lists 5. Scheduling skills 6. Personal goal setting Take the following example: in a pot you have a few large pebbles; these are the “core values” – your KEY priorities. Then pour sand into the pot until it is filled to the top. The sand represents the other activities in your life that you fit around your core values, that is, your important activities for the day. Had you filled the pot with sand BEFORE putting the pebbles in, you would not have managed to put enough pebbles into the pot. The importance of the core values or priorities is therefore highlighted by this: they are non-negotiable, and everything else comes in later! In other words, sometimes we worry about the sand more than the pebbles, when really it’s the pebbles that are our priorities. The productivity pyramid The above Productivity Pyramid begins at the base. First things first, you need to identify your values, clear and convicted values. For example, your deen, family, parents, work, friends, education etc. These are non-negotiable! Once these are in place, then you can set your goals. The Four Quadrants of Goal Setting: Three Main Types of Goals: - Short-term goals (0–1 year) - Mid-term goals (1–5 years) - Long-term goals (1–20 years) How to set your goals? - You have to know what you want! - You need SMART Goals [Specific, measurable, achievable, realistic and time-bound goals] - You have to have a strong desire - Visualise your
<urn:uuid:22ee63a8-d775-4332-ba83-0cbae527a752>
512
0
Q - Why study justification by faith? A - Luther wrote that attacks against the Christian faith fall into three categories: Q - What is justification? A - Justification is God's gracious declaration that an individual is forgiven all his sin through faith. Q - Is there another justification? A - No. The word justification and the verb to justify are used in the Scriptures only in the sense of justification by faith. Unless one is inventing new meanings, justification simply means justification by faith. Q - Why do some people talk about a General Justification, an Objective Justification, or a Universal Justification? A - All three terms come from Calvinistic influence in Lutheran Pietism. Following Knapp and others, Synodical Conference Lutherans began to speak of two justifications - one objective, the other subjective. Q - What does General, Objective, or Universal Objective Justification claim? A - The terms are used to claim falsely that God absolved the entire world, without the Word, without the Means of Grace, without faith - when Christ died on the cross, or alternately when He rose from the dead. Q - Where is this definition found? A - The Synodical Conference Brief Statement, 1932: "Scripture teaches that God has already declared the whole world to be righteous in Christ, Rom. 5:19; 2 Cor. 5:18-21; Rom. 4:25..." Q - What are the implications of Universal Objective Justification? A - Every single person is a guilt-free saint, including Hitler, Mao, Stalin, the Hottentots, Hindu, Muslim, and those who died in the Flood. Q - How is Universal Objective Justification different from Universalism, the opinion that everyone is saved? A - In spite of claims to the contrary, the Wisconsin Synod has already had a campaign where posters said to the public, "You are saved - just like me." That is Universalism, universal salvation. Q - What do the Scriptures and Confessions teach about the crucifixion of Christ? A - They speak with one voice that Christ died for the sins of the world, so that no one would be tempted to think that any work was required of man to satisfy God
<urn:uuid:f5afbf34-9473-4fb7-925c-20d78ab9b21b>
512
0
Q - Why study justification by faith? A - Luther wrote that attacks against the Christian faith [...] . Q - Why is faith denounced as a work among UOJ adherents? A - They take their argument back to Calvinism, where the strict Calvinists disagreed with Arminians, who viewed faith as a virtue in man and also as their part of the transaction, fulfilling what God started. Q - Why is this a false accusation among Lutherans? A - The Book of Concord teaches in harmony with the Scriptures that faith is trust in the Word, a trust caused by the Holy Spirit working in the Word. Christ Himself extolled faith in individuals and rebuked His disciples' lack of faith - "O ye of little faith!" Q - How is an individual justified by faith? A - He becomes aware of his sinful nature and his inability to save himself through the preaching of the Law. Jesus identified this as convicting the world of its sin, "because they believe not on Me." John 16:8. The Holy Spirit works through the Promises of God in the Word to create and sustain faith, which receives the Gospel message of forgiveness and all its blessings. KJV John 16:8 And when He is come, He will reprove the world of sin, and of righteousness, and of judgment: 9 Of sin, because they believe not on Me; 10 Of righteousness, because I go to My Father, and ye see Me no more; 11 Of judgment, because the prince of this world is judged. Q - How does this grace of God come to people? A - God has given us the Means of Grace to convey His forgiveness to us. Q - What are the Means of Grace? A - We use this term to designate the invisible Word of preaching and teaching, the visible Word of Holy Baptism and Holy Communion. Q - Does the Holy Spirit ever work apart from the Word of God? A - No, that is the essence of Enthusiasm, divorcing the Holy Spirit from the Word. Enthusiasm is the basis for all false doctrine, all false religion. Q - Where can this work of the Holy Spirit through the Word be found? A - Isaiah 55:8
<urn:uuid:f5afbf34-9473-4fb7-925c-20d78ab9b21b>
512
23
Q - Why study justification by faith? A - Luther wrote that attacks against the Christian faith [...] -10 KJV Isaiah 55:8 For my thoughts are not your thoughts, neither are your ways my ways, saith the LORD. 9 For as the heavens are higher than the earth, so are my ways higher than your ways, and my thoughts than your thoughts. 10 For as the rain cometh down, and the snow from heaven, and returneth not thither, but watereth the earth, and maketh it bring forth and bud, that it may give seed to the sower, and bread to the eater: 11 So shall my word be that goeth forth out of my mouth: it shall not return unto me void, but it shall accomplish that which I please, and it shall prosper in the thing whereto I sent it. Q - How can UOJ be defined as false doctrine? Q - Was not Jesus "raised for our justification," showing that His resurrection is the absolution of the world, as C. F. W. Walther preached? A - The Romans passage is about justification by faith, as the context shows: KJV Romans 4:22 And therefore it was imputed to him for righteousness. 23 Now it was not written for his sake alone, that it was imputed to him; 24 But for us also, to whom it shall be imputed, if we believe on him that raised up Jesus our Lord from the dead; 25 Who was delivered for our offences, and was raised again for our justification. Abraham is the father of faith in the entire chapter, which transitions from 4:25 to this declaration: KJV Romans 5:1 Therefore being justified by faith, we have peace with God through our Lord Jesus Christ: 2 By whom also we have access by faith into this grace wherein we stand, and rejoice in hope of the glory of God. Q - Why do UOJ advocates use reconciliation in 2 Corinthians 5 to prove the universal absolution? A - Calvin's rationalism has had a lasting, damaging effect on Lutherans via Pietism. One proof of that impact is the transition of Pietism to pure rationalism in one generation,
<urn:uuid:f5afbf34-9473-4fb7-925c-20d78ab9b21b>
512
23
The french revolution of 1789 had many long-range causes political, social, and economic conditions in france contributed to the discontent. Free essay on causes of the french revolution of 1789 available totally free at echeatcom, the largest free essay community. American, french, and latin american revolutions in the revolutions of america, france, and latin america there was a common thread that united these revolutions as. Free essays the mexican revolution from the american revolution to the french what was the reason and who played a critical role in the mexican revolution. Can anyone help me compare and contrast russian and mexican revolutions i have to write a paper on this and i have no idea any french, and american. Free revolutions papers, essays, and research papers my account search results free essays that is exactly what the french and the mexican revolutions were all. Texas papers on mexico university of texas at austin issn 1041-3715 interpreting the mexican revolution alan knight have a go at the french that will become. Get an answer for 'compare and contrast the french and haitian revolutions immediate and long range causes and impact, and any other relevant comparisons. One of the many differences between the american and french revolutions is sean busick is a senior contributor at the imaginative conservative recent essays. Run for the border comparison of the mexican and french revolutions essaysit is easier to run a revolution than a government (ferdinand e marcos (1917-81. The french revolution brought about great changes in the society and government of france the revolution, which lasted from 1789 to 1799, also had far-reaching. Compare and contrast the american and french cc essay 2/26/13 cc essay french and american revolution both the american and french revolutions were focused on. The mexican revolution essay the mexican revolution was a violent political and social upheaval that occurred in mexico in the early 20th century. History of the americas the economic, social, and political causes of the and political causes of the mexican revolution 1840-1910 the revolution dbq essay. What is the
<urn:uuid:bfb370d9-e740-4032-912f-6cd251fd7f04>
512
0
The french revolution of 1789 had many long-range causes political, social, [...] connection between the french revolution and the spain's involvement in napoleon's french revolution led mexico the mexican-born creoles and. Home forums musicians women in the mexican revolution essay – 860599 0 replies, 1 voice last updated by anonymous 5 months, 1 week ago viewing 1 post. Suggested essay topics and study questions for history sparknotes's the french revolution (1789–1799) perfect for students who have to write the french revolution. Compare and contrast essay on the french and american revolutions: this was an essay designed to explain the similarities and differences on the french and american. Second french intervention in mexico revolution la decena trágica monarchy in mexico would ensure europeans' access to latin american markets and french. French vs mexican by: fatima salas & kassy ramirez french revolution mexican revolution how it began french revolution timeline how it began mexican revolution. Open document below is an essay on women and the mexican revolution from anti essays, your source for research papers, essays, and term paper examples. The mexican and russian revolution essaystwo revolutions shaped the history of two countries: mexico and russia both revolutions drastically changed the life of. The atlantic revolutions had a big impact on the development of world history starting with the american revolution, where americans fought for their independence. French revolution essay questions aug 17, the french revolution mexican revolution 2 french revolution essay you think the compelling essay and action. The similarities between the mexican and american revolution pages 2 sign up to view the complete essay american revolution, mexican revolution. Free essay: what is a revolution by definition it means the overthrow of a government by those who are governed that is exactly what the french and the. The mexican revolution essay, buy custom the mexican revolution essay paper cheap, the mexican revolution essay paper sample, the mexican revolution essay sample. For teachers only the university of some suggestions you might wish to consider include the french revolution (1789), mexican • is a well-develop
<urn:uuid:bfb370d9-e740-4032-912f-6cd251fd7f04>
512
23
There are hundreds of surprising, perspective-shifting insights about the nature of reality that come from neuroscience. Every bizarre neurological syndrome, every visual illusion, and every clever psychological experiment reveals something entirely unexpected about our experience of the world that we take for granted. Here are a few to give a flavor: 1. Perceptual reality is entirely generated by our brain. We hear voices and meaning from air pressure waves. We see colors and objects, yet our brain only receives signals about reflected photons. The objects we perceive are a construct of the brain, which is why optical illusions can fool the brain. 2. We see the world in narrow disjointed fragments. We think we see the whole world, but we are looking through a narrow visual portal onto a small region of space. You have to move your eyes when you read because most of the page is blurry. We don’t see this, because as soon as we become curious about part of the world, our eyes move there to fill in the detail before we see it was missing. While our eyes are in motion, we should see a blank blur, but our brain edits this out. 3. Body image is dynamic and flexible. Our brain can be fooled into thinking a rubber arm or a virtual reality hand is actually a part of our body. In one syndrome, people believe one of their limbs does not belong to them. One man thought a cadaver limb had been sewn onto his body as a practical joke by doctors. 4. Our behavior is mostly automatic, even though we think we are controlling it. The fact that we can operate a vehicle at 60 mph on the highway while lost in thought shows just how much behavior the brain can take care of on its own. Addiction is possible because so much of what we do is already automatic, including directing our goals and desires. In utilization behavior, people might grab and start using a comb presented to them without having any idea why they are doing it. In impulsivity, people act even though they know they shouldn’t.
<urn:uuid:3e20ff85-1b85-40c0-88d1-69deaa19ab23>
434
0
Anxiety is a normal reaction to stress and affects all of us at one time or another: we are anxious about speaking in public, apprehensive about going to the doctor, and may worry obsessively while waiting for the results of a medical test. Some anxiety is healthy – it can keep us vigilant about things that are important for our well-being, compel us to move forward with our lives and inform us of a concern we need to address. However, anxiety that overwhelms one, making it difficult to function, may indicate an Anxiety Disorder. Specific anxiety disorders affect 11% of people over the age of 55, but only a small percentage receive evaluation and treatment. Also, an estimated 17-21% of people over 55 have symptoms of anxiety that do not meet the criteria of a specific anxiety disorder. “Due to the lack of evidence, doctors often think that [anxiety] is rare in the elderly or that it is a normal part of aging, so they don’t diagnose or treat anxiety in their older patients, when, in fact, anxiety is quite common in the elderly and can have a serious impact on quality of life,” says researcher Eric J. Lenze, M.D. Older adults are more likely to be facing enormous changes, loss, illness, or dementia that can cause or exacerbate anxiety. Conversely, when one is very anxious one may become forgetful or confused. Although it is usual for anxiety to increase with major life changes, anxiety that disrupts a person’s usual activities can and should be evaluated and treated. Anxiety disorders are among the most treatable of illnesses, and include panic disorders, post traumatic stress disorder, social anxiety, and generalized anxiety disorder. Treatments vary and include medication, cognitive behavioral therapy, desensitization and relaxation techniques, yoga and exercise, and natural remedies. “Facing the future, even with a sure faith, is not easy. I am cautious at every step forward, taking time and believing I shall be told where to go and what to do. Waiting patiently and creatively is at times unbearably difficult, but I know it must be so.” Jennifer Morris, 1980, PYM Faith and Practice 2002 Symptoms of Generalized Anxiety Disorder
<urn:uuid:9fb41056-1f9f-4761-a4eb-e58367b1da34>
512
0
Anxiety is a normal reaction to stress and affects all of us at one time or another: [...] : - Excessive, ongoing worry and tension - An unrealistic view of problems - Restlessness or a feeling of being “edgy” - Muscle tension - Difficulty concentrating - Nausea or other stomach problems - The need to go to the bathroom frequently - Tiredness and being easily fatigued - Trouble falling or staying asleep - Trembling or tingling feelings in limbs - Being easily startled As this list shows, the symptoms of anxiety often mimic symptoms of physical illness and vice versa. An evaluation by a doctor or mental health professional can help sort out the cause of one’s symptoms, allowing proper treatment. How can we help? A spiritual community can provide spiritual support so that the whole person is addressed in the healing process. - Challenge stigma and fear of mental illness by educating oneself and others - Establish a climate of safety in your community for those with differences or facing major life changes. - Always ask. Let the person know you are there to help, and ask what they need. One would not question talking to a person about help they need related to physical illness. - Quaker Meetings may offer Clearness Committees for Friends or caregivers experiencing anxiety. - Remember that feelings are real to all of us. Regardless of how unrealistic a fear may seem, validate the person’s feelings. (See Quaker Aging Resources brochure on Validation) - Provide reassurance, but try not to belittle the person’s fear, and remember they may need to work in small steps. - Encourage but do not push a person with anxiety. - Refer to professionals. Encourage Friends to see their doctor and/or seek counseling. - Offer to walk beside the person on this journey. Even simply accompanying the person to an appointment can support and validate their care. - A very small group or individual visit can provide spiritual support if the person has trouble attending worship. If necessary, meet without the person to pray or hold them in the light, and let them know you are doing so. - Encourage physical activity, which has the capacity to alleviate anxiety. Offer to take a walk or a yoga class together. -
<urn:uuid:9fb41056-1f9f-4761-a4eb-e58367b1da34>
512
23
Low Back Pain Low Back Pain (LBP) can be divided into two categories: - Acute LBP: Lasts a few days to a few weeks. - Is usually mechanical in nature (a result of trauma to the low back or arthritis). - Chronic LBP: Has been present for more than 3 months. - Can often be progressive (getting worse over time). Common causes of LBP include: - Muscle strain: Injury to the back can strain the back muscles. - Bulging disc (protruding, herniated, ruptured): As the disc degenerates and weakens (with age or trauma), part of the disc can bulge or be pushed into the space containing the spinal cord or nerve root. A herniated disc puts pressure on the sciatic nerve (the large nerve that extends out of the pelvis and down into the leg). - Spinal stenosis: The narrowing of the bony canal (which puts pressure on the nerve roots). - Spinal degeneration: Wear and tear over time leads to narrowing of the spinal canal (which puts pressure on the spinal cord). A decrease in bone density creates a loss of bone strength. This can lead to fracture or collapse (compression fracture) of the vertebrae in the low back. - Skeletal irregularities: These place strain on the vertebrae and supporting structures (muscles and ligaments). An example of this is scoliosis. Chronic back pain and stiffness can also be caused by infection or inflammation of the spinal joints. Symptoms of LBP include: - Muscle aches - Shooting or stabbing pain - Decreased flexibility - Decreased range of motion (ROM) - Difficulty sitting, standing or walking - Difficulty doing functional activities - Chronic: all of the above for more than 3 months. Sciatica: - LBP with pain through the buttocks and down one leg. - Numbness of the leg is possible. - Loss of motor control can occur. A diagnosis of LBP will be made through medical history, physical examination and special tests such as X-rays, MRIs, CT scans, discograms, reflex testing and nerve conduction tests. Often these tests are done to rule out other
<urn:uuid:00c33400-3283-4e79-991e-0a7151ae1e6b>
512
0
A U.S. federal task force is prepared to recommend that teens, adults and pregnant women not be routinely tested for genital herpes if they don't have signs of infection. About one in every six Americans between the ages of 14 and 49 has genital herpes, according to the U.S. Centers for Disease Control and Prevention. The disease, which is transmitted through vaginal, anal and oral sex, causes symptoms like blisters, discharge, burning and bleeding between periods. Though symptoms can be treated, genital herpes is incurable. In support of its proposed guidelines, the U.S. Preventive Services Task Force says the benefit of routine herpes screening is small, because early treatments aren't likely to make much of a difference. "Because there's no cure, there isn't much doctors and nurses can do for people who don't have symptoms," Dr. Maureen Phipps said in a news release from the task force, of which she is a member. Phipps is chairwoman of obstetrics and gynecology at the Warren Alpert Medical School of Brown University in Rhode Island. The task force also says screening people who have no signs of herpes may cause harm, because the blood test can be inaccurate. The task force does, however, recommend screening for other sexually transmitted infections such as chlamydia, gonorrhea, syphilis and HIV. It also recommends health care professionals counsel patients who are at high risk of developing sexually transmitted diseases. Summary of Recommendations. Population: Asymptomatic adolescents and adults, including those who are pregnant. Recommendation: The USPSTF recommends against routine serologic screening for genital herpes simplex virus (HSV) infection in asymptomatic adolescents and adults, including those who are pregnant. Grade: D. Draft Recommendation Statement: Genital Herpes Infection: Serologic Screening. This opportunity for public comment expires on August 29, 2016 at 8:00 PM EST. Genital herpes is a prevalent sexually transmitted infection (STI) in the United States; the Centers for Disease Control and Prevention (CDC) estimates that almost one in six persons ages 14 to
<urn:uuid:a2522a14-427d-4755-b558-2a59ad353281>
512
0
A U.S. federal task force is prepared to recommend that teens, adults and pregnant women not [...] 49 years have genital herpes.1 Genital herpes infection is caused by two subtypes of HSV (HSV-1 and HSV-2). Unlike other infections for which screening is recommended, HSV infection may not have a long asymptomatic period during which screening, early identification, and treatment might alter its course. Antiviral medications may provide symptomatic relief from outbreaks; however, they do not cure HSV infection. Although vertical transmission can occur between an infected pregnant woman and her infant during vaginal delivery, interventions can help limit transmission. Neonatal herpes infection, while uncommon, can result in substantial morbidity and mortality. In the United States, most cases of genital herpes historically have been caused by infection with HSV-2. There is adequate evidence that the most widely used currently available serologic screening test for HSV-2 approved by the U.S Food and Drug Administration is not suitable for population-based screening due to its low specificity, lack of widely available confirmatory testing, and high false-positive rate. Rates of genital herpes due to HSV-1 infection in the United States may be increasing. There is no serologic screening test for genital herpes resulting from HSV-1 infection. Benefits of Early Detection and Intervention Based on limited evidence from a small number of trials on the potential benefit of screening and interventions among asymptomatic populations and an understanding of the natural history and epidemiology of genital HSV infection, the USPSTF concluded that the evidence is adequate to bound the potential benefits of screening in asymptomatic adolescents and adults, including those who are pregnant, to be no greater than small. Harms of Early Detection and Intervention Based on evidence on potential harms from a small number of trials, the high false-positive rate, and the potential anxiety and disruption of relationships related to diagnosis, the USPSTF found that the evidence is adequate to bound the potential harms of screening in asymptomatic adolescents and adults, including those who are pregnant, as at least moderate. The USPSTF concludes with moderate certainty that the harms outweigh the benefits for population-based screening in asymptomatic adolescents and
<urn:uuid:a2522a14-427d-4755-b558-2a59ad353281>
512
23
A U.S. federal task force is prepared to recommend that teens, adults and pregnant women not [...] adults, including those who are pregnant. - Centers for Disease Control and Prevention. Genital herpes: CDC fact sheet. http://www.cdc.gov/std/herpes/stdfact-herpes.htm. Accessed July 12, 2016. - U.S. Preventive Services Task Force. Behavioral counseling interventions to prevent sexually transmitted infections: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;161(12):894-901. - U.S. Preventive Services Task Force. Screening for chlamydia and gonorrhea: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;161(12):902-10. - U.S. Preventive Services Task Force. Screening for hepatitis B virus infection in nonpregnant adolescents and adults: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;161(1):58-66. - U.S. Preventive Services Task Force. Screening for HIV: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2013;159(1):51-60. - U.S. Preventive Services Task Force. Screening for syphilis infection in nonpregnant adults and adolescents: U.S. Preventive Services Task Force recommendation statement. JAMA. 2016;315(21):2321-7. - Patel R, Rompalo A. Genital herpes infections. In: Zenilman JM, Shahmahnesh M, eds. Sexually Transmitted Infections: Diagnosis, Management, and Treatment. Burlington, MA: Jones & Bartlett; 2012. - Benedetti JK, Zeh J, Corey L. Clinical reactivation of genital
<urn:uuid:a2522a14-427d-4755-b558-2a59ad353281>
512
23
A U.S. federal task force is prepared to recommend that teens, adults and pregnant women not [...] herpes simplex virus infection decreases in frequency over time. Ann Intern Med. 1999;131(1):14-20. - Centers for Disease Control and Prevention (CDC). Seroprevalence of herpes simplex virus type 2 among persons aged 14-49 years—United States, 2005-2008. MMWR Morb Mortal Wkly Rep. 2010;59(15);456-9. - Watts DH, Brown ZA, Money D, et al. A double-blind, randomized, placebo-controlled trial of acyclovir in late pregnancy for the reduction of herpes simplex virus shedding and cesarean delivery. Am J Obstet Gynecol. 2003;188(3):836-43. - Sheffield JS, Hollier LM, Hill JB, Stuart GS, Wendel GD. Acyclovir prophylaxis to prevent herpes simplex virus recurrence at delivery: a systematic review. Obstet Gynecol. 2003;102(6):1396-403. - Brown ZA, Wald A, Morrow RA, et al. Effect of serologic status and cesarean delivery on transmission rates of herpes simplex virus from mother to infant. JAMA. 2003;289(2):203-9. - ACOG practice bulletin. Management of herpes in pregnancy. Number 8 October 1999. Clinical management guidelines for obstetrician-gynecologists. Int J Gynaecol Obstet. 2000;68(2):165-73. - Flagg EW, Weinstock H. Incidence of neonatal herpes simplex virus infections in the United States, 2006. Pediatrics. 2011;127(1):e1-8. - Handel S, Klingler EJ, Washburn K, Bl
<urn:uuid:a2522a14-427d-4755-b558-2a59ad353281>
512
23
A U.S. federal task force is prepared to recommend that teens, adults and pregnant women not [...] ank S, Schillinger JA. Population-based surveillance for neonatal herpes in New York City, April 2006-September 2010. Sex Transm Dis. 2011;38(8):705-11. - Mahnert N, Roberts SW, Laibl VR, Sheffield JS, Wendel GD Jr. The incidence of neonatal herpes infection. Am J Obstet Gynecol. 2007;196(5):e55-6. - Hollier LM, Wendel GD. Third trimester antiviral prophylaxis for preventing maternal genital herpes simplex virus (HSV) recurrences and neonatal infection. Cochrane Database Syst Rev. 2008(1):CD004946. - Kimberlin DW, Rouse DJ. Clinical practice. Genital herpes. N Engl J Med. 2004;350(19):1970-7. - Feltner C, Grodensky CA, Middleton JC, et al. Serologic Screening for Genital Herpes Infection: An Evidence Review for the U.S. Preventive Services Task Force. Evidence Synthesis No. 149. AHRQ Publication No. 15-05223-EF-1. Rockville, MD: Agency for Healthcare Research and Quality; 2016. - Melville J, Sniffen S, Crosby R, et al. Psychosocial impact of serological diagnosis of herpes simplex virus type 2: a qualitative assessment. Sex Transm Infect. 2003;79(4):280-5. - Rosenthal SL, Zimet GD, Leichliter JS, et al. The psychosocial impact of serological diagnosis of asymptomatic herpes simplex virus type 2 infection. Sex Transm Infect. 2006;82(2):154-7; discussion 157-8. - American
<urn:uuid:a2522a14-427d-4755-b558-2a59ad353281>
512
23
When some 30 world leaders and hundreds of other dignitaries gathered in Beijing between May 14 and 16, 2017, for the first Belt and Road Forum, they were greeted by clear blue skies, which are rare most of the year but common at major international events like the Asia-Pacific Economic Cooperation (APEC) summit in 2014 and the Olympic Games in 2008. For the opening day of the Belt and Road Forum hosted by Chinese President Xi Jinping himself, readings of fine but dangerous PM2.5 particles (particulate matter with diameter ≤ 2.5 micrometers) were near zero. As a capital city notorious for heavy air pollution, Beijing typically averages a level about three times the World Health Organization’s recommendation of no more than 25 microgrammes of PM2.5 per cubic meter of air, and even reaches 1,000 microgrammes on some days. As with earlier events, the “blue skies” for this international gathering did not last long. Only five days after the conclusion of the Belt and Road Forum, Beijing residents braced for another round of polluted days caused by photochemical smog. Each time Beijing achieves this kind of temporary clean air during high-profile international events, the urban dwellers who have been suffering smog agony in Chinese cities always have this question in mind: why cannot the government try to make this kind of “Summit Blue” last longer? If the government can sufficiently curb air pollution through contingency measures including traffic control and the closing down of nearby factories, why cannot it make these regulations function regularly to achieve blue skies throughout the year? The achievable “Summit Blue” in Beijing, Shanghai and other Chinese metropolises actually reveals a plain but inconvenient truth: the real obstacle that prevents China from solving the smog issue is a lack of political will rather than technological barriers or policy implementation difficulties. Most municipal governments in China, despite their intense endeavors to promote investment, infrastructure, and local economic development, have failed to pay sufficient heed to growing public demands for a cleaner environment, which in many cases conflicts with economic goals. Only the political hype for a better international image during grand gatherings is sufficient to pressure local officials to take the necessary measures to tackle air pollution. Many of
<urn:uuid:14522c52-3102-4ccc-ba2a-dfccf37ded2e>
512
0
When some 30 world leaders and hundreds of other dignitaries gathered in Beijing between May [...] these measures are already in the policy toolboxes of municipal officials, but for fear of these policies hampering local economic activities, the government shelves them aside most of the time and only puts them into practice as contingency plans when a large number of international dignitaries are in town. Environmental pollution is a global issue that most fast-growing economies have had to face, and is related to many factors such as economic structure, technological level, political systems, governance capacity, institution building, as well as public awareness and social participation. Like many other developing nations in the world, China is becoming more urban, with more than half of its population already dwelling in cities of various scales, most of which are being quickly industrialized and ready to absorb even more people from vast rural areas in the next two decades. As a consequence of poor urban planning, the swelling of the residential population at an incredible pace has made many Chinese cities even less habitable, with local people suffering from traffic congestion, polluted air, water shortage and contamination, loss of greenery, and land degradation. China’s economic miracles over the last three decades have imposed enormous pressures upon the country’s already worsened environment and scant resources, with mounting ecological problems like air pollution, water pollution and shortages, soil contamination, desertification, and loss of bio-diversity having caught intensive attention from the Chinese government, domestic public, and international community. In the long run, the Chinese government needs to introduce more economic incentives and disincentives to curb pollution and ecological destruction instead of relying too much on short-term administrative orders. By enhancing its capacity for environmental governance, Chinese authorities have made concrete steps in curbing pollution with environmental conservation tasks having risen to the highest platform in the political agenda of the ruling Communist Party of China (Carter and Mol, 2007; Economy, 2007). Nevertheless, a society’s ability to identify and resolve environmental problems is not merely based on the knowledge and resources embedded in its bureaucracy and legal framework (Weidner, 2002). Up to now, China’s environmental protection has been mainly a state-led process, which has been severely restrained by the existing implementation
<urn:uuid:14522c52-3102-4ccc-ba2a-dfccf37ded2e>
512
23
When some 30 world leaders and hundreds of other dignitaries gathered in Beijing between May [...] deficit in environmental governance and the inability of the administration to monitor and reduce pollution in this vast nation. The presence of social actors who can act as advocates for the environment and the integration of these non-governmental forces in processes of planning and policy-making can substantially enhance the opportunities for the ongoing environmental transition (Jänicke, 1996). For years the Chinese government has been reporting daily air pollution levels at major cities based on the data collected from monitoring stations around those cities by the Ministry of Environmental Protection and its local branches. Local environmental protection bureaus then calculate the Air Quality Index (AQI) indicating the potential harm to human health at a range of 1-500. The higher the AQI, the more polluted the air. However, the AQI system implemented before 2012 did not include the tiniest but potentially harmful PM2.5 particles in its list of major pollutants, such as SO2, NO2, PM10 (particulate matter with diameter ≤10 micrometers), O3 and CO. The Chinese government had been claiming that more “blue sky” days were achieved for the past decade when the daily mean of AQI is equal to, or lower than, 100. Local people on the other hand were getting increasingly doubtful about the authenticity of official data and anxious about the deteriorating visibility and breathability of ambient air quality. Beijing and other major cities did not release their PM2.5 data until 2012 when the term “PM2.5” became a hot topic in on-line forums and mini-blogs amidst frequent thick smog throughout the year. Chinese officials once refused to publicize PM 2.5 readings, accusing the US embassy in Beijing of meddling in China’s internal affairs for publishing its own monitoring data of PM2.5 online. In 2012, the new leadership headed by Xi Jinping and Li Keqiang had to yield to growing social pressure for the disclosure of PM2.5 readings in major cities. After Li’s statement that the government should present PM2.5 data to the public in a transparent and timely manner, citizens in at least 7
<urn:uuid:14522c52-3102-4ccc-ba2a-dfccf37ded2e>
512
23
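The row above describes how local environmental protection bureaus convert pollutant concentrations into an Air Quality Index on a scale up to 500. A minimal sketch of the standard piecewise-linear interpolation used for such indices is below; the breakpoint table is illustrative (loosely US EPA-style PM2.5 bands, simplified to be contiguous) rather than the exact values in the Chinese standard, so treat the numbers as assumptions.

```python
# Illustrative sketch of a piecewise-linear AQI calculation for PM2.5.
# Breakpoints are assumed/simplified, not the official Chinese table;
# swap in the published breakpoints to match official readings.

PM25_BREAKPOINTS = [
    # (conc_lo, conc_hi, aqi_lo, aqi_hi), concentrations in micrograms per cubic metre
    (0.0,    12.0,   0,  50),
    (12.0,   35.4,  50, 100),
    (35.4,   55.4, 100, 150),
    (55.4,  150.4, 150, 200),
    (150.4, 250.4, 200, 300),
    (250.4, 500.4, 300, 500),
]

def pm25_to_aqi(concentration):
    """Map a 24-hour mean PM2.5 concentration to a sub-index by linear
    interpolation within the matching breakpoint band."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if concentration <= c_hi:
            return round(i_lo + (i_hi - i_lo) * (concentration - c_lo) / (c_hi - c_lo))
    return 500  # readings beyond the table are capped at the index ceiling

if __name__ == "__main__":
    for reading in (8, 75, 300, 1000):  # 1,000 ug/m3 matches Beijing's worst days
        print(reading, "->", pm25_to_aqi(reading))
```

In a full implementation a sub-index like this is computed for each listed pollutant (SO2, NO2, PM10, O3, CO and, since 2012, PM2.5) and the maximum of the sub-indices is reported as the day's AQI.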
When some 30 world leaders and hundreds of other dignitaries gathered in Beijing between May [...] 4 cities have begun to have access to real time environmental indicators including PM2.5. Smog in Beijing is an image problem as well as a health hazard for hundreds of millions of city dwellers. Many cities have taken ad hoc measures, including suspending construction projects, cutting back on burning coal, shutting down polluting factories, and taking certain classes of vehicle off the roads on heavily polluted days to ensure clean air. Such temporary measures have reminded people of similar contingency plans rolled out before the Beijing Olympics in 2008 or the APEC summit in 2014, which only have short-lived effect instead of offering a long-term solution to the smog problem. The smog woes experienced by Chinese cities can be mainly attributed to the extensive use of coal, the growing number of motor vehicles and the ongoing massive urbanization and industrialization process in the country. The smog in Beijing is just a telling example of China’s environmental deterioration against the backdrop of breakneck economic growth. The smoggy woes have all the more pointed to the need to shift the growth strategy from the single-minded pursuit of more GDP to the quality aspects of economic growth. After three decades of breathtaking growth that has transformed China into a middle-income country, the government should seriously consider trading off less GDP growth with a better quality of life. Pollution has become a public health hazard that affects both the rich and poor as well as rapidly increasing the social costs of economic growth. Air pollution is to some extent an inevitable by-product of the rapid industrialization and urbanization of such a large-sized state. The country’s pollution level has been on the rise as China modernizes its economy; it is now approaching the peak of a Kuznets curve, in which a developing country’s pollution level first increases during the economic takeoff stage, and then decreases after the country completes industrialization and starts to outsource much of its manufacturing activities. In the long run, the Chinese government needs to introduce more economic incentives and disincentives to curb pollution and ecological destruction instead of relying too much on short-term administrative orders such as closing polluting enterprises or blocking polluting sources. Institutional innovations such as the
<urn:uuid:14522c52-3102-4ccc-ba2a-dfccf37ded2e>
512
23
Materials Science & Engineering, Stanford; Minor, Electrical Engineering, Stanford. Area of focus: Nanotechnology, including nanomaterials and nanoelectronics. How are silicon and carbon similar when it comes to transistors? Let's start with carbon because it has so many different allotropes, from carbon nanotubes and graphene to diamonds. But diamonds, for example, are electrical insulators, not semiconductors – which are what we need for a transistor. Graphene is a two-dimensional sheet of pure carbon (yes, one atom thick) that can conduct current well, but it does not have a bandgap; therefore, transistors made with graphene cannot be switched off. Carbon nanotubes are a rolled-up form of graphene, which are somewhat similar to silicon since they both have a bandgap and can be used as the centerpiece of the transistor – the channel. Why are carbon nanotubes not in use like silicon? Silicon has offered many advantages as a transistor material for the last half century. One of the biggest perhaps was that it forms a great gate dielectric – SiO2. It also comes with a very pure and high-quality substrate, silicon wafers, to start with. And over time we've used other materials and device structures to improve its abilities, such as transitioning to high-k metal gate transistors and FinFETs. On the other hand, for carbon nanotubes, many material issues have to be solved to obtain similar high-quality carbon nanotube wafers for device fabrication. We can't switch to an entirely new material overnight, but silicon is reaching its scaling limits. How have you and your team solved this issue of contact resistance? Carbon nanotubes conduct electricity much faster than silicon, and perhaps more importantly, they use less power than silicon. Plus, at just slightly over one nanometer in body thickness, they're significantly thinner than today's silicon, providing good electrostatic control. The challenge has, until now, been how to form high-quality contacts between metal electrodes and carbon nanotubes.
<urn:uuid:4e9a3b3e-72ba-4373-81af-3f7daf450dbe>
512
0
Materials Science & Engineering, Stanford; Minor, Electrical Engineering, Stanford. Area of focus: [...] In any transistor, two things scale: the channel and its two contacts. It's at the contacts where carbon nanotube resistance, like silicon's, has hindered performance, especially as the channel continues to shrink and channel resistance becomes less and less important. Essentially, current just cannot flow into the channel effectively when you hit these scales. Qing Cao and my other teammates at [the IBM Watson Research Center] developed a way, at the atomic level, to weld – or bond – the metal molybdenum to the carbon nanotubes' ends, forming carbide. Previously, we could only place a metal directly on top of the entire nanotube. The resistance was too great to use the transistor once we reached about 20 nm. But welding the metal at the nanotubes' ends, or end-bonded contacts, is a unique feature for carbon nanotubes due to their 1-D structure, and reduced the resistance down to 9 nm contacts. Key to the breakthrough was shrinking the size of the contacts without increasing electrical resistance, which impedes performance. Until now, decreasing the size of device contacts caused a commensurate drop in performance. What is necessary to scale this technology? And what is your next step in this work? We must scale our carbon nanotube transistor onto a wafer. The challenge is twofold: it includes how to orient and place these one-dimensional structures from the solution onto the wafer as well as how to purify them (the initial solution has about 1/3 metallic tubes, which are not useful for transistors and need to be removed). We developed a way for carbon nanotubes to self-assemble and bind to specialized molecules on a wafer. The next step is to push the density of these placed nanotubes (to 10 nm apart) and reproducibility across an entire wafer. What future nanotechnology are you looking forward to? I can see the potential of our carbon nanotube chips to replace silicon for conventional computing uses. Better transistors can offer higher speed while consuming less
<urn:uuid:4e9a3b3e-72ba-4373-81af-3f7daf450dbe>
512
23
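The interview above argues that as the channel shrinks, the two contacts come to dominate a transistor's total resistance. The toy model below is only a sketch of that scaling argument; the resistance values and the linear channel model are illustrative assumptions, not measurements from the devices discussed above.

```python
# Toy model: total device resistance = 2 * contact resistance + channel resistance.
# Channel resistance is assumed to scale roughly linearly with channel length, so
# as the channel shrinks the fixed contact term dominates. All numbers below are
# illustrative assumptions, not data from the devices discussed above.

R_CONTACT_OHM = 5_000          # assumed resistance of one metal/nanotube contact
R_CHANNEL_PER_NM_OHM = 300     # assumed channel resistance per nanometre of length

def total_resistance(channel_length_nm):
    return 2 * R_CONTACT_OHM + R_CHANNEL_PER_NM_OHM * channel_length_nm

for length_nm in (100, 50, 20, 9):
    r_total = total_resistance(length_nm)
    contact_share = 2 * R_CONTACT_OHM / r_total
    print(f"{length_nm:>4} nm channel: total {r_total:>7.0f} ohm, "
          f"contacts are {contact_share:.0%} of it")
```

On these made-up numbers the contacts go from a quarter of the total resistance at 100 nm to roughly four fifths of it at 9 nm, which is the sense in which "channel resistance becomes less and less important" as devices scale.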
The Fifth Report of the Intergovernmental Panel on Climate Change released today re-emphasizes the conclusions expressed in previous IPCC Reports. What is new is a focus on risk. The Fifth Report sets out the impacts of climate change in considerable detail, with a careful statement of the probability of their occurrence. Popular opinion may regard risk consequent on climate change differently from country to country. People in Northern Temperate Climate Countries may think that global warming can't be all that bad. Particularly when they have come through a long, cold winter in which the snowfall compares with winters they experienced as children, oh so long ago – even before the term "climate change" had worked its way into the public consciousness. (Does that sound like Canada?) At a press conference on the publication of the Fifth Report, the IPCC chairman, Dr. Rajendra K. Pachauri, said: "Nobody on this planet is going to be untouched by the impacts of climate change". Climate change already affects countries to a greater or lesser degree, but the experience of most countries is minimal compared to what the future has in store. The unfortunate fact is that although the effects of climate change could be minimized, the world has done little to reduce CO2 emissions or to prepare for future changes. This indifference increases the risk of serious impact. Still, there is hope that the countries of the world will take positive steps to mitigate climate change. We must act now, and not burden our grandchildren – and their grandchildren – with the very difficult, if not impossible, task of rescuing civilization. As the slogan says: "We didn't inherit the earth from our grandparents – we borrow it from our children". If we don't take steps now, the results will be catastrophic. And there will be no geo-engineering equivalent to the ark to save civilization from our irresponsible conduct. Follow this link for the Final Draft Report. For a video of the IPCC Press Conference when the report was released, click here. For a summary for Policy Makers
<urn:uuid:8698e8d4-45af-4dc8-a07d-85e4244cdc5f>
451
0
Vaginal symptoms are one of the most common reasons for which women seek medical care. Vaginal complaints account for approximately 10 million medical office visits per year. Most vaginal symptoms are not a sign of a serious disease such as cancer or AIDS, and the majority of such symptoms are not due to a sexually transmitted disease. The March 17, 2004, issue of JAMA includes an article about diagnosing vaginal symptoms. The vagina and surrounding areas are examined for redness or inflammation. A sample of any discharge is taken for testing and observation under a microscope. Bacterial vaginosis: an inflammation of the vagina caused by bacteria, this condition is responsible for 40% to 50% of vaginal symptoms. Symptoms often include a fishy-smelling discharge and itching or burning in the vagina. Trichomoniasis: infection with Trichomonas, a protozoan organism, is a common sexually transmitted disease (STD). The most common symptoms are a yellow, frothy discharge and pain during intercourse. About 15% to 20% of vaginal symptoms are caused by trichomoniasis. Candidiasis: also known as a yeast infection, this condition is caused by an overgrowth of fungus that occurs naturally in the vagina and accounts for about 20% to 25% of vaginal symptoms. Women often experience intense vaginal itching and a thick, white, cottage cheese-like discharge. Antibiotic or antifungal medications can be taken orally, applied to the vagina as creams or gels, or inserted into the vagina. Women whose vaginal symptoms have not been diagnosed should not use over-the-counter therapies until they have a medical evaluation to determine the cause. Using a condom can help prevent sexually transmitted diseases, including trichomoniasis, and a condom should always be used if you are being treated for trichomoniasis to prevent reinfection by your partner. Avoid using douches and vaginal deodorant sprays. American College of Obstetricians and Gyne
<urn:uuid:f65add6a-c264-4107-b8c2-48751692fb35>
512
0
Introducing Caihong juji, a tiny, Jurassic-era dinosaur that lived 161 million years ago in what is now China. The feathered theropod featured an iridescent, rainbow colored ring of feathers around its neck, which scientists believe it used to attract mates. Paleontologists uncovered a strange new dinosaur a few years ago—a crazy, patchwork quilt of a creature dubbed Chilesaurus diegosuarezi. Its bizarre and often conflicting characteristics defied classification, forcing scientists to make an educated guess about its place on the dino family tree. New research suggests… Paleontologists working in Argentina uncovered the remains of a Cretaceous-era dinosaur that featured the same kind of miniaturized arms found on the T. rex. These ancient creatures weren’t closely related, so scientists now suspect that tiny arms evolved independently. Injuries are common in the fossilized remains of dinosaurs, but the recent discovery of a severely roughed-up skeleton in Arizona establishes a new record for the most bone injuries sustained by a single theropod. This guy got wrecked. If you thought Jurassic World had the craziest picture of dinosaur behavior, get ready to be reminded that reality can always get weirder. Researchers have found evidence that dinosaurs danced, both to terrify their enemies and impress their would-be lovers. Paleontologists have known for years that Tyrannosaurus Rex and other closely related theropods had jagged teeth to help them chew through flesh. But close inspection of crack-like features at the base of these serrations has revealed there’s more to these fearsome teeth than previously believed. Meet Chilesaurus diegosuarezi, a newly described dinosaur discovered by a seven-year-old boy in Chile. The theropod was related to famous meat-eaters like T. rex, but researchers think it was a vegetarian. Stranger still: It possessed a mixture of anatomical features unlike anything researchers have seen before. Meet Nanuqsaurus hoglundi. Based on its 25-inch-long skull, its body was probably half the length of a Tyrannosaurus Rex's, but
<urn:uuid:2da58770-78ae-41ee-90c7-aeb36b1311a6>
512
0
Imagine a 20-year-old musician publishing his work today. Let’s pretend he’s living the fast and reckless life of a rock star and will die young at 45. Because the copyright term has been ratcheted up to life of the author plus 70 years (or 95 years from publication for corporate works), you won’t be able to sample his work without permission (for your heartfelt tribute song, of course), until 2105. But since you’re not living his rock star lifestyle, maybe you can hang on another 95 years to grab your chance. “We are the first generation in history to deny our culture to ourselves,” Jennifer Jenkins said. Furthermore, as the new year approaches, we’ll soon again “celebrate” Public Domain Day, January 1, which is the day when works entering the public domain in a given year do so. But as I explained for this year’s non-celebration, because of copyright changes and extensions, there will be no previously copyrighted works entering the public domain in the US until 2019. Under the law as it stood until 1978, most music would go into the public domain in 28 years, which would put works from the 80s into the public domain now. But the new terms have been retrospectively applied, sometimes applying to dead musicians, who presumably have other things to worry about besides their copyrights. Copyright law has a built-in, careful balance between control and freedom. And we haven’t just added a few marbles to one side of the scale–we’ve dropped an anvil on it. Outside of a conscious choice to release work to the public domain or to use a tool like Creative Commons, nothing you or any of your contemporaries creates will be available for building on, which was not the case for the works of Brahms or Beethoven, or many of the giants of jazz, blues, or rock ‘n roll. The real tragedy is that we’re unlike the classical composers and rock ‘n roll pioneers in another way. We have the Internet. Remixing software. Sharing tools. The technologies we have now offer anyone unprecedented opportunities for creating and sharing music. We live in a time that has the potential to be the most creative period in history.
<urn:uuid:fb0ea0c7-48fa-4eb2-af86-7abb27ff03be>
512
0
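The copyright arithmetic in the row above (a work published today by a 20-year-old who dies at 45 stays locked up until 2105 under life-plus-70) is easy to reproduce. The sketch below assumes only the simplified US-style terms the author quotes, life of the author plus 70 years or 95 years from publication for corporate works, and ignores the many edge cases in real copyright law; the 2010 publication year is inferred from the 2105 figure, not stated in the text.

```python
# Rough sketch of the terms quoted above: life of the author plus 70 years for an
# individual, 95 years from publication for a corporate work. Real copyright law
# (pre-1978 terms, renewals, foreign works) has many more cases than this.

def expiry_year(publication_year, death_year=None):
    if death_year is None:           # corporate / work-for-hire style term
        return publication_year + 95
    return death_year + 70           # individual author: life + 70

# The article's example: published "today" (taken as 2010, the year the 2105
# figure implies) by a 20-year-old who dies at 45, i.e. 25 years later.
published = 2010
death = published + (45 - 20)
print(expiry_year(published, death))   # -> 2105
print(expiry_year(published))          # a corporate work from the same year -> 2105
```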
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If Alan Turing had been in charge of the EDSAC (Electronic Delay Storage Automatic Calculator) project in the late 1940s, the first computer memory might not have been based on mercury - but on a good gin. In his Turing Award speech in 1967, Sir Maurice Wilkes, the actual EDSAC project chief, recalled Turing's input on … Surely that runs the risk of alcoholics breaking into the system and drinking the data? Yeah, but the risk of hatters is much higher, they're intelligent, organised, violent and mad... Mercury: so good the hatters want it back... I always found Gin had a negative impact on memory retention. RUM = Random Under-the-influence Memory I suspect the final decision was based on the credibility of using a very expensive, hard to handle "scientific" substance rather than a cheap, easy to use one that wouldn't have impressed the people providing the funding. In a 13A fuse, the substance that absorbs the energy when the fuse blows (and for which I don't think anyone ever found a better replacement) is sand. But the literature has to refer to it as granular silicon dioxide. So lets call it a Semi Organic Ethanol DiHydrogen Monoxide delay line and move on. :) "Granular silicon dioxide" isn't just "high-faluting language". It's more precise. It tells you, for example, that sand from a tropical beach (being mostly calcium carbonate) is unacceptable. I suspect it probably had more to do with reliability. Sure for the first test it probably would have worked. But do you really think it would have worked on the fifth day? Drinking mercury will kill you pretty quick. Gin in the proper quantities is quite different. And the savings on the bar tab could have been huge. In fact, it might not even have worked on the second day. You are correct of course, but perhaps I should explain that the preferred material is in fact silicate beach sand, sieved to get the right grain size, and is not manufactured. Early literature tried to give
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
0
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] the impression that this was some exotic substance, not something that someone had shovelled into a van on the Isle of Wight. That there Dihydrogen Monoxide is dangerous stuff! ;) It was clearly a waste of good gin. I feel sure they could have used one of the ubiquitous spring delay lines... No really they couldn't, spring delay lines introduce an echo which could cause a tiny problem with data integrity, if you want to borrow from archaic rock 'n' roll stage and studio gear then a tape loop would provide a much better delay. "It's a pity that they won't have the time or the resources to prove whether Turing's fanciful idea of a gin delay line memory would have actually worked." I think I see your next special project on the horizon.... I'll drink to that!... we practical IT guys still hate doing bloody documentation. During the 60's and 70's many weird and wonderful technologies were developed to solve the memory problem of basically how to store a page (about 80x24 characters) of data in the terminal. IIRC Univac went with a "Torsional" ultrasonic delay line with a clever coupler that converted the efficient driving modes to the torsional wave. Torsional waves were slower so more delay (or more characters) per unit of (NiCr or NiFe) wire. Keep in mind in 1970 an Intel 1024 bit DRAM was cutting edge stuff. You'd need 16 for the characters, 16 more if they each had an individual "attribute" byte associated with them. Good luck with getting the model working. It was said that if you gave an acoustic delay line video terminal a hard knock - then the characters currently on the screen became corrupted. I don't really delve much into electronics, but a delay line does nothing but just take a little while to transmit a bit of information and then retrieve it automatically at a later time, which you can use as a "refresh" memory kind of deal? Cycle it enough and you can keep refreshing it with the same data while also being able to pull that data off any
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] sensor you're using and send it to the rest of the computer. Is that right? In that case, mercury, or even alcohol, seems an incredibly complex way to achieve this, even in an era void of semiconductors (how do you propagate the signal back to the refresh AND somehow read from it without amplifying it along the way?). In fact the whole concept of acoustic delay lines seems... too advanced for a relatively simple task. Was there really no other simple way to do this? I'm honestly asking out of curiosity because it seems a most odd situation to result in rooms full of mercury-filled tubes just to delay a signal by a relatively small amount. Are we in the era of analogue computers? Could not a (more) primitive physical system have been available? I sometimes use the concept that a computer can be easily replicated using pieces of wood, falling water or just ball-bearings to achieve the same *kinds* of calculations. It's a feat of engineering, no doubt, but early mechanical computers existed using much more rigorous build processes, so there's nothing stopping you making a simple binary adder or equivalent using the most basic of materials. But does it really necessitate a mercury/alcohol filled tube to delay a signal for a short while? I understand the analogue-signal requirement, and maybe the incoming signal correlated directly to ultrasonic frequencies - coming out of a radar system - but while you're waiting for them to propagate through a fluid, is it really particular to mercury or alcohol that something could be made to do that. Surely, even air-pressure in an appropriate sealed tube would do something similar, or a more basic hydraulic system of just about any design (The article states minimal energy loss and a fast speed of sound as the primary factors, but surely a fast speed of sound in the fluid you're using is a BAD thing that means it needs to be longer and refresh more often, and energy losses would be higher in a system moving a much more dense fluid than air - or whatever gas?). Would this not have been possible with 1940's-era speakers and microphones and a sealed tube of some fixed length manufactured by, say, a trombone maker? The sort of thing
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] we've had since the 1800's and requiring no specialist materials? Dammit, it almost makes me want to go and tinker myself to find out what the problem is. (I'm reminded of a small clock on display in the British Museum. It has a balanced tray, with a path inscribed on it, and a ball-bearing. The ball bearing slides down the path, taking 15 seconds to do so, and when it hits the end it nudges a lever that tilts the tray the other way. The path reverses, the ball goes the other way, hits another lever, the tray tilts and so on ad infinitum. And it turns out to produce an amazingly accurate (and beautiful) clock. It was made in about 1805. (how do you propagate the signal back to the refresh AND somehow read from it without amplifying it along the way?) You *do* amplify it along the way, using a little gizmo known as a "thermionic valve". Amplification didn't begin with semiconductors, which your question seems to assume. I suspect that the advantage of a relatively stiff viscous fluid compared to air has to do with getting less multi-path reflections off the side of the channel relative to the main wave propagating down the center of it. There was a lot of research in to mechanisms for storing data, because it was a previously untackled problem before. Telephone and telegraph systems didn't store data, just transmitted it onwards to the next part of the system. For fast access you need a pure electronic bistable circuit to record a binary state, but these use several valves each, and so you only used them on the internal registers of the CPU itself. Another, a bit later, was to use a CRT with long persistence phosphor. You mounted a 2D array of photodiodes across the screen and implemented a refresh cycle circuit. If you look at the transistor structure of a register it looks like two inverters driving each other so, yes, they do amplify the signal around and it stays stable. Computer programs only work because they do stuff in the right order so you need to
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] delay the logic circuits so that everything doesn't happen in random order. Flip flops are used now with a clock in the GHz range. These devices ran at kHz and could also store more than one bit in each loop. Although they are called delay lines they were used because they were fast (in comparison to relays) and low power (in comparison to thermionic valves). The point is that the delay line doesn't store just one bit - it stores a whole bunch of 'em, all running in one end and out the other, being amplified, and shoved in the front again. Think of a drum memory (and the story of Mel!) wherein the processor has to wait until the required bits arrive at the read head again. p.s. Congreve escapements are notoriously unreliable timekeepers: they're sensitive to dirt in the tracks and to external vibration. But they're very pretty, and somewhat soothing. It's all a problem of timing. You not only need a delay which, as you note, could be achieved by any number of Heath Robinson type contraptions. You need a delay that reliably takes in a signal at clock n and spits it out again at clock n+x. Mechanical systems are very bad at that, especially when the clock period is very short. You can actually experience this first hand if you try building computers in Minecraft. The electrical signals propagate quite slowly and worse, they propagate at variable speed depending on server load. It's insanely hard to build high clock rate computers under those circumstances. PAL TV sets had a glass delay line buffer, and early ones were all valve. A simple bistable device could be made from contemporaneous devices such as a couple of valves ("tubes" to Americans and other aliens) or or relays per bit, the problem is one of scale, you could either have a room full of racks of relays or valves with the inevitable reliability and heat-dissipation problems or you could have a couple of tubes full of Hg which if handled correctly is quite safe (well it was in the 1940's, I wonder what changed?). Also it's far
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] more elegant which should always be an important consideration. Many years ago, in a very large office which didn't have enough storage space, I worked alongside someone who had masses and masses of documentation - manuals, reports, program listings, and so on. His solution to the shortage of local storage was to make up parcels of documentation addressed to himself, and leave them in the secretaries' office as outgoing mail. At any one time he would have several such packages "stored" in the internal mail. They would, of course, return to him in a day or two - and that was an acceptable retrieval time. He claimed to have based this on his knowledge of mercury delay line technology. He is long gone, and nowadays I don't even have a desk, let alone cupboards.... Presumably, as someone with a Reg-reading personality, you helped him out by introducing him to the magical round file. Infinite capacity and essentially self-sorting, the round file cuts down the time taken by jobs like that by whole orders of magnitude. In a similar way, it was claimed that when the Japanese claimed to have invented just in time manufacturing, they did in fact have conventional warehouses. They were, however, in the form of articulated lorries stuck in the traffic jams around the major cities. At one time, many governments have had "inventory taxes", paid on the value of goods available for sale, but not those in transit. At least one company I know of found it worthwhile to rent semitrailers, load them with goods for "inventory day", and unload them the next day. Apparently this started out with actual transit, but some legal decision held that just being loaded _for_ transit was sufficient to dodge the tax. The one company I know of doing this was a major computer manufacturer, since deceased, whose first computers used delay-line memory. As for "those oldsters", note that the maximum latency (in CPU clock cycles) for a memory access today is in the same range as that of a delay-line or drum-memory computer. That's why "All programming can be viewed as an exercise in caching". And CRT (Williams-Kilburn) memories did not use photodiod
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
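Several comments above describe how a delay line stores not one bit but a whole train of them, running in one end, being re-amplified at the far end and fed back in at the front, with the processor waiting for the word it wants to come round again. The sketch below models that recirculation in the simplest possible way; it is a conceptual illustration of the principle, not a simulation of EDSAC or any particular machine.

```python
# Conceptual sketch of a recirculating acoustic delay-line store: a fixed number
# of slots travel down the line, are re-amplified at the output and reinjected
# at the input, so reading a slot means waiting for it to come round again.

class DelayLine:
    def __init__(self, bits):
        self.bits = list(bits)   # contents currently "in flight" down the tube
        self.size = len(self.bits)
        self.clock = 0           # pulse times elapsed

    def _emerging_index(self):
        # index of the slot currently arriving at the output transducer
        return self.clock % self.size

    def read(self, index):
        """Spin until slot `index` reaches the output end, then return its value
        together with how many pulse times we had to wait (the access latency)."""
        waited = 0
        while self._emerging_index() != index:
            self.clock += 1      # bit re-amplified and reinjected at the front
            waited += 1
        return self.bits[index], waited

    def write(self, index, value):
        """Writes also wait for the slot to come round; the regeneration circuit
        then injects the new value in place of the old one."""
        _, waited = self.read(index)
        self.bits[index] = value
        return waited

line = DelayLine([0, 1, 1, 0, 1, 0, 0, 1])
print(line.read(5))   # (0, 5) -- waited 5 pulse times for slot 5 to emerge
print(line.read(2))   # (1, 5) -- slot 2 had already gone past, so wait for it again
```

The variable access latency is the point the later drum-memory comment makes as well: programs for such machines were laid out so that the next word needed would be emerging just as the processor asked for it.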
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] es. The phosphor itself stored the "bits" via secondary emission. Early analog-addressed DRAM. JIT manufacturing was a collaborative development between the Japanese who needed to rebuild their industry after WWII and US manufacturing consultants who came from the world of Ford's assembly lines. Japan also had very few natural resources, so they needed to import everything and imports were expensive. That meant i) they needed to use imports more efficiently and ii) they couldn't afford to make things to send to warehouses because they needed to sell as much as possible, especially for export - to pay for the imports. If you read the history, though, JIT/Lean/Toyota Production System wasn't implemented overnight and it took significant re-training of all levels of manufacturing staff from CEOs to workers via managers. But yes, it is true, JIT is dependent on efficient transport mechanisms and heijunka (re-introducing batches and tuning batch size to smooth production flow while keeping as close as possible to the target takt time), otherwise you can get caught with no input materials or a backlog of output very very quickly. Come on guys, you've been trying to get Paris into space, how about proving that Gin is good for memory. You could even try different brands of gin! For the love of God, you're journalists and you haven't realised the potential for the tax-deductible after-experiment party!! - PS if you do try and compare with mercury delay pipes, make sure those pipes are clearly marked, locked down, shackled, painted a distasteful colour. Just to prevent over-enthusiastic consumption at the after-experiment party. Note mercury is not a good drink, even with tonic. It was observed in the 18th century, when it was the only known treatment for syphilis, that a night of Venus might mean a lifetime of mercury. However, you would need to be very drunk indeed not to notice if you were drinking it. The wine glass weighing a couple of kilos might be a clue. And the average drunk would really struggle with the beer glass (about 7 kilos). But...nasty, yes you are right. I was
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] comparing notes once with someone else responsible for radiological protection. "Oh," he said, "Tritium doesn't give me any worries at all. But we have mercury on site and that gives me nightmares." So I did an audit. We turned out to have a test machine made by an "engineer" (ex TV repairman) that did some moderate voltage switching (about 5000VDC). This was accomplished with a wooden box full of large glass bodied mercury switches each of which was attached to a rotary solenoid. If one of them ever broke, which become increasingly probable as the glass aged, about 100g of mercury would spill on a large production floor and a factory would have to be evacuated. The best I could do was to get it stood in a polypropylene bund with a warning on it, and wait for the product concerned to be discontinued. In 1960 our Physics teacher was introducing us to various physical properties of materials - like expansion and density. At one point he produced a beaker of mercury to show it was a heavy, liquid metal. In a vacuum tube it had a meniscus, opposite to that of water, that showed that it was "dry". He probably showed us heavy objects that sank in water and floated on mercury. I remember we had fun chasing small beads of mercury round the bench top - presumably why it used to be called "quicksilver". ... lodged in Cambridge. For some reason the guy that owned the house had a plasic bottle of mercury in the cupboard on the stairs - must have been a couple of kilograms in weight. Mind you, he also had a couple of gallons of neat caustic soda in there too - he tried to clean the bath with it and took the enamel off back to the metal ... The English Electric Deuce was programmed using punched cards. The columns were apparently printed with timings. Some calculations involved cleverly picking chosen data off the mercury delay lines by the program timng where it had reached in its circuit. IIRC the program store was very small - and a sequence of cards had to be read at exactly the right time for the next part
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] of the instruction stream. My boss Maurice Marvin mentioned these insights to me about 45 years ago when I was doing system support on the emerging 3rd generation System 4 machines. Deuce was still doing the payroll about that time. The Deuce mercury delay line units looked like large mushrooms about 4 feet high - each with a cable plugged into a 13 amp floor socket for the heater. Apparently the cleaners had to be taught not to unplug them when looking for a convenient socket for their vacuum cleaners. Further proof that Gin is the answer to everything. I would love to see this tried out. The case modders crowd could use Bombay Sapphire to add some colour. Gin is truly miraculous! We've long used it to help us forget, and now we find we can use it to help us remember. How does it know?! When you add the tonic, you need to make sure it is all sealed up so the bubbles don't come out of solution! All of this begs the question (stated above): when will someone try this out? The research questions: Is Gin better than Vodka? Does vermouth make a difference? Olive, or Onion? Shaken or Stirred? The Handbook of Chemistry & Physics, 53rd Ed., E41, gives the velocity of sound in m/s at (25+t)°C as: distilled water, 1496.7 + 2.4t; 37.5% ethanol, 62.5% water, 1388 + 0t (estimated). So Turing was right. Gin (and Vodka) are OK. Vermouth (16-18% ABV) won't work in a delay line. "Ever the pragmatist, Wilkes was not too concerned about the elegance of the storage medium - he just wanted something that worked and was reasonably reliable." Then, by that standard, every IT person is a pragmatist sunshine! :) One does wonder if all that mercury was disposed of properly when they broke the system down. This should not be surprising
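Taking those Handbook figures at face value, the interesting property is the temperature coefficient: the 37.5% ethanol mix is estimated at roughly zero, so a gin-filled line would not drift with room temperature the way a water-filled one would. A quick sketch of that comparison for a notional tube (the 1.5 m length and the temperature range are assumptions for illustration only, not from the comments):

# Velocity-of-sound figures quoted above (m/s at 25 + t degrees C).
def v_water(t):
    return 1496.7 + 2.4 * t          # distilled water

def v_gin(t):
    return 1388.0 + 0.0 * t          # 37.5% ethanol / 62.5% water (estimated)

TUBE_LENGTH_M = 1.5                  # hypothetical delay-line length

for t in (0, 5, 10):                 # i.e. 25, 30, 35 degrees C
    dw = TUBE_LENGTH_M / v_water(t) * 1e6   # one-way delay, microseconds
    dg = TUBE_LENGTH_M / v_gin(t) * 1e6
    print(f"+{t:2d} C above 25: water {dw:7.2f} us, gin {dg:7.2f} us")

# The water delay drifts with temperature (bits arrive early or late as the
# room warms), while the gin delay stays put - which is the commenter's
# point in concluding that "Gin (and Vodka) are OK" as a delay-line medium.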
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] . Turing certainly was capable of the calculations required to determine that a water/alcohol solution would work. In the book (named by the posting title) there is an example of programming a delay line computer. Because the delay line cycle was so slow compared with the instruction execution time, each instruction gave the address of the next instruction to be executed. If the instructions had been contiguous in memory, as modern machines are, the execution rate would only be one per cycle. Optimisation of the program was done by calculating each instruction time, and the timing of collecting and storing the data that it would do, and laying out the data items and the next instruction so they would be available as soon after they were required as possible. Each address was the tube number plus the word number so there was some flexibility. The delay lines needed to be short to give faster cycle times and long to give more storage. Having more tubes meant that there would more often be incorrect synchronisation. Oh dear. So many mistakes! As any fule kno, the first working memory used by a working stored program digital electronic computer was the Williams-Kilburn tube - used by the Manchester Small Scale Experimental Machine (SSEM aka Baby), which executed its first program on 21st June 1948. There's a replica at the Manchester Museum of Science and Industry - not sure if it's still working, though. The mercury delay line memory used by EDSAC (widely considered the SECOND full scale stored program digital electronic computer, after the Manchester Mark 1) was developed not at Cambridge University here in Blighty but over the pond by J. Presper Eckert at the University of Pennsylvania's Moore School of Electrical Engineering during the Second World War for use in radar to remove ground clutter. When Turing made his boozy suggestion, acoustic delay line memory was proven working technology - working with mercury. Eckert was one of the bods behind the design of EDVAC, which was the foundation of the EDSAC project in Cambridge. Eckert and his colleague John Mauchly were behind the mercury delay-line memory BINAC computer, delivered to Northrop Aircraft in September 1949. If it had
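The "optimum coding" described above - laying out data and the next instruction so that each emerges from the line just as it is needed - comes down to modular arithmetic on a recirculating store. Here is a toy Python model of the wait a programmer was trying to minimise; the capacity and word time are invented for illustration and not taken from any particular machine:

# Toy model of serial (delay-line) memory access latency.
WORDS_PER_LINE = 32        # hypothetical line capacity
WORD_TIME_US = 36          # hypothetical time for one word to pass the read point

def wait_for(word_index, current_time_us):
    """Microseconds until word_index next emerges from the delay line."""
    cycle = WORDS_PER_LINE * WORD_TIME_US
    word_due_at = word_index * WORD_TIME_US
    return (word_due_at - current_time_us) % cycle

# If the current instruction finishes at t = 100 us, a well-placed next
# instruction (word 3) is almost ready; a badly placed one (word 2)
# forces a wait of nearly a full recirculation.
for w in (3, 10, 2):
    print(f"word {w:2d}: wait {wait_for(w, 100):6.1f} us")

Placing the next instruction a few word-times downstream of where the current one finishes keeps the wait near zero; placing it just upstream costs almost a full recirculation, which is exactly the trade-off the hand optimisation was juggling.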
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
Surely that runs the risk of alcoholics breaking into the system and drinking the data? If [...] ever worked properly after delivery then BINAC would be in the running for the title of first full scale useful stored program digital electronic computer, since it ran its first program in February 1949 - before the Manchester Mark 1 became operational in April 1949 or EDSAC became operational in May 1949. ED 13 should note that the Williams-Kilburn tube (the original CRT memory) operated electrostatically - no photodiodes required - and provided both the fast main store of the early Manchester computers as well as the fast internal registers used by the processing elements. (On the Manchester Mark 1, the CRT memory could be considered similar to modern high speed cache, with the slow but larger mag drum store being considered similar to modern main RAM. File store in the modern sense could be considered as being provided by punched paper tape.) Williams-Kilburn tubes have various advantages over mercury delay line memory: faster, random access rather than serial access, not needing ovens for temperature stability, needing lesser PSU stability, and not needing large amounts of an expensive and toxic liquid metal. It's just that when the EDSAC project started, mercury delay lines were proven US technology and part of the EDVAC design which EDSAC was based on, while Williams-Kilburn tubes were a new idea still under development - details of which weren't widely available until late 1947. ..GEC, Coventry, once telling me how they corrupted 'programs' by jumping on the floor, under which were said delay lines... Maybe he was after the gin, to which he was partial, esp. home-made Sloe Gin ( From him, I learned to make a 'Sloe pricker', from a plastic BNC socket cover, a box of pins, and Araldite) Every Friday afternoon. When the IT department starts winding down for the weekend. They made a delay line based standards converter, so they could convert 59.94 Hz NTSC to 50 Hz PAL. It essentially involved delaying the video by a variable amount of time and dropping frames. This was done with a series of quartz delay lines and an additional smaller delay line consisting of
<urn:uuid:16629f05-2de7-402d-b339-ca5ab2e77c2e>
512
23
The United States Bureau of Reclamation began planning for the Weber Basin Project in 1942, and Congressional authorization of the Project was received in 1949. The Weber Basin Water Conservancy District was created on June 26, 1950, by a decree of the Second District Court of Utah, under the guidelines of the Utah Water Conservancy Act. The District was formed to act as the local sponsor of the federal project and to further supply water resources to the population within its boundaries. The original Weber Basin Project was constructed by the Bureau of Reclamation from 1952 through 1969 and includes canals, power plants, irrigation and drainage systems, and six major reservoirs on the Ogden and Weber rivers. Subsequent to the original Project, the District constructed a seventh dam, Smith and Morehouse. Four of the seven reservoirs—Wanship, Lost Creek, East Canyon, and Smith and Morehouse—regulate the flow of the Weber River before it emerges from its mountain watershed to the Wasatch Front. Causey and Pineview reservoirs regulate the flow of the Ogden River before it emerges from its watershed and joins the Weber River. Willard Bay, the largest reservoir, is an off-stream reservoir that stores water from the lower reaches of both the Ogden and Weber rivers for uses and exchanges on the Wasatch Front. The complex transmission system that was constructed as part of the Project includes facilities such as Gateway Canal and Tunnel, Weber and Davis aqueducts, Ogden Valley Canal and Diversion Dam, Slaterville Diversion Dam, and Stoddard Diversion Dam as well as dozens of secondary reservoirs and many miles of canals, pipelines, and other laterals. Hydropower stations located at Causey Dam, Wanship Dam, and Gateway Canal generate power for District consumption and excess power sales. In 1952 and 1961, the voters within its boundaries authorized the District to enter into contracts with the United States to repay the original construction costs and the ongoing operation and maintenance of the federal project. The funding for those costs is generated through water sales and the original ad valorem tax approved by the voters in both elections. In addition to the
<urn:uuid:1bbc26d6-e7f8-4cd3-9fdd-80c7c9f31710>
512
0
People still die from diabetic ketoacidosis. Poor patient education is probably the most important determinant of the incidence of the catastrophe that constitutes "DKA". In several series, only about a fifth of patients with DKA are first-time presenters with recently acquired Type I diabetes mellitus. The remainder are recognised diabetics who are either noncompliant with insulin therapy, or have serious underlying illness that precipitates DKA. Most such patients have type I ("insulin dependent", "juvenile onset") diabetes mellitus, but it has recently been increasingly recognised that patients with type II diabetes mellitus may present with ketoacidosis, and that some such patients present with "typical hyperosmolar nonketotic coma", but on closer inspection have varying degrees of ketoacidosis. DKA is best seen as a disorder that follows from an imbalance between insulin levels and levels of counterregulatory hormones. Put simply: "Diabetic ketoacidosis is due to a marked deficiency of insulin in the face of high levels of hormones that oppose the effects of insulin, particularly glucagon. Even small amounts of insulin can turn off ketoacid formation." Many hormones antagonise the effects of insulin. These include: In addition, several cytokines such as IL1, IL6 and TNF alpha antagonise the effects of insulin. [J Biol Chem 2001 Jul 13;276(28):25889-93] It is thus not surprising that many causes of stress and/or the systemic inflammatory response syndrome appear to precipitate DKA in patients lacking insulin. Mechanisms by which these hormones and cytokines antagonise insulin are complex, including inhibition of insulin release (catecholamines), antagonistic metabolic effects (decreased glycogen production, inhibition of glycolysis), and promotion of peripheral resistance to the effects of insulin. Persons presenting with DKA are often seriously ill, not only because DKA itself is a metabolic catastrophe, but also because significant underlying infection or other disorders may be present. Common precipit
<urn:uuid:3acefe9a-a099-4745-850a-c92397fbfa04>
512
0
People still die from diabetic ketoacidosis. Poor patient education is probably the most important [...] ants of DKA are: Patients with DKA have marked fluid and electrolyte deficits. They commonly have a fluid deficit of nearly 100 ml/kg, and need several hundred millimoles of potassium ion (3-5+ mmol/kg) and sodium (2-10 mmol/kg), as well as being deficient in phosphate (1+ mmol/kg) and magnesium. Replacement of these deficits is made more difficult due to a variety of factors, including the pH derangement that goes with DKA. Mainly in children, an added concern is the uncommon occurrence of cerebral oedema, thought by some to be related to hypotonic fluid replacement. There are several mechanisms for fluid depletion in DKA. These include osmotic diuresis due to hyperglycaemia, the vomiting commonly associated with DKA, and, eventually, inability to take in fluid due to a diminished level of consciousness. Electrolyte depletion is in part related to the osmotic diuresis. Potassium loss is also due to the acidotic state, and the fact that, despite total body potassium depletion, serum potassium levels are often high, predisposing to renal losses. Ketoacidosis is an extension of normal physiological mechanisms that compensate for starvation. Normally, in the fasting state, the body changes from metabolism based on carbohydrate to fat oxidation. Free fatty acids are produced in adipocytes, and transported to the liver bound to albumin. There they are broken down to acetyl-CoA, and then turned into ketoacids (acetoacetate and beta-hydroxybutyrate). The ketoacids are then exported from the liver to peripheral tissues (notably brain and muscle) where they can be oxidised. Note that during ketosis, a relatively small amount of acetone is produced, giving ketotic patients their typical smell, often described as 'fruity'. DKA represents a derangement of the above mechanism. Despite vast amounts of circulating
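To make the per-kilogram figures above concrete, here is the same arithmetic scaled to a notional 70 kg adult. This is purely illustrative of the magnitudes the text describes, not a replacement regimen or treatment protocol:

# Illustrative only: scale the quoted per-kg deficit figures to a 70 kg adult.
WEIGHT_KG = 70  # hypothetical patient

deficits_per_kg = [
    ("fluid",     "ml",   100, 100),   # "nearly 100 ml/kg"
    ("potassium", "mmol", 3,   5),     # "3-5+ mmol/kg"
    ("sodium",    "mmol", 2,   10),    # "2-10 mmol/kg"
    ("phosphate", "mmol", 1,   1),     # "1+ mmol/kg"
]

for name, unit, lo, hi in deficits_per_kg:
    low, high = lo * WEIGHT_KG, hi * WEIGHT_KG
    total = f"{low} {unit}" if low == high else f"{low}-{high} {unit}"
    print(f"{name:10s} deficit for a {WEIGHT_KG} kg patient: roughly {total}")

The potassium row lands squarely in the "several hundred millimoles" range quoted above, which gives a sense of why replacement is such a substantial undertaking.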
<urn:uuid:3acefe9a-a099-4745-850a-c92397fbfa04>
512
23
People still die from diabetic ketoacidosis. Poor patient education is probably the most important [...] glucose, this carbohydrate cannot be used owing to lack of insulin. Ketogenic pathways are maximally "turned on", supply of ketones exceeds peripheral utilisation, and ketosis results. (There are a few other clinical states where similar keto-acidosis is seen. One is in alcoholics, who may present with marked ketosis, and a variable degree of either hypo- or mild hyperglycaemia. Another is in some pregnant women, particularly associated with hyperemesis gravidarum). The physiological mechanism of ketoacidosis is interesting. The rate-limiting step in the manufacture of ketones in the liver is the transfer of fatty acids (acyl groups) from Coenzyme A to carnitine. Carnitine acyl transferase I is the relevant enzyme, often referred to as CAT-I. To a certain degree, increased levels of carnitine will drive this transfer, but the main factor that inhibits CAT-I is the level of malonyl CoA in the liver. High levels of malonyl CoA effectively turn off the enzyme. Malonyl CoA is manufactured by another enzyme called Acetyl CoA carboxylase. Acetyl CoA carboxylase activity is in turn regulated by the amount of citric acid in the cell. The more the Krebs' cycle is whirling around (and citrate is being produced), the greater the activity of Acetyl CoA carboxylase, which in turn results in inhibition of ketoacid production. Turn off the supply of substrate into Krebs' cycle, and ketoacids are formed. You can work out that in the fasted state, glycolysis is diminished, the flow of substrate into the citric acid cycle drops, and ketone manufacture is turned on. This is unfortunately just what happens in diabetic ketoacidosis. We now understand how, in the midst of plenty, the liver cell in DKA cries 'starvation' and produces ketones! Both absence of insulin and excess glucagon result in inhibition of glycolysis. Such inhibition
<urn:uuid:3acefe9a-a099-4745-850a-c92397fbfa04>
512
23
People still die from diabetic ketoacidosis. Poor patient education is probably the most important [...] not only raises glucose levels, but stimulates ketone formation. Let's look in more detail at how these hormones inhibit glycolysis. The marked hyperglycaemia seen in association with diabetic ketoacidosis (and that encountered in nonketotic hyperosmolar coma) is not as straightforward as was once thought! The combination of insulin lack and high glucagon levels has a variety of effects on the liver including: Glucagon excess and low insulin levels both appear to have similar effects in inhibiting glycolysis. Glucagon ultimately has a potent inhibitory effect on the formation of fructose 2,6 bisphosphate. This product is very important, because it's an extremely potent allosteric regulator of a major rate-limiting enzyme in the glycolysis pathway, phosphofructokinase (often abbreviated to "PFK1"). The effect of glucagon is well characterised. When glucagon binds its cell-surface receptor, through a fairly direct G-protein-coupled receptor mechanism, protein kinase A is stimulated. Then the fun really starts, because protein kinase A phosphorylates an important regulatory enzyme called phosphofructokinase 2 (PFK2). This latter protein is a strange duplicitous enzyme - when phosphorylated it wears one face, quite different from the unphosphorylated enzyme. When phosphorylated, PFK2 acts as a phosphatase, but when unphosphorylated, it's a kinase. Phosphorylated PFK2 takes the vitally important fructose 2,6 bisphosphate and lops off a phosphate to turn it into fructose 6 phosphate. The kinase form of PFK2 does the opposite, and results in the creation of more fructose 2,6 bisphosphate. As we hinted above, fructose 2,6 bisphosphate is a potent allosteric stimulator of the enzyme PFK1. The bottom line is that glucagon lowers f
<urn:uuid:3acefe9a-a099-4745-850a-c92397fbfa04>
512
23
People still die from diabetic ketoacidosis. Poor patient education is probably the most important [...] ructose 2,6 bisphosphate levels and inhibits glycolysis; if glycolysis is inhibited, then flow of carbon atoms into the citric acid cycle slows, and ketogenesis is stimulated. The insulin effect is far less well characterised, although we know that the effect is opposite to that of glucagon. It used to be thought that the main effect of insulin was mediated by a complex pathway involving a kinase called MAPK. We now know that this pathway is important in the long-term effects of insulin on cellular proliferation, but not the acute metabolic effects. The key regulator in the metabolic effects of insulin appears to be the enzyme phosphatidylinositol 3-kinase (PI3K). This in turn causes activation of a variety of kinases (atypical protein kinases C, and protein kinase B), which have profound metabolic effects, including inhibition of glycolysis and stimulation of glycogen synthesis. [J Clin Endocrinol Metab 2001 Mar;86(3):972-9; Philos Trans R Soc Lond B Biol Sci 1999 Feb 28;354(1382):485-95; Diabetes Metab 1998 Dec;24(6):477-89] Insulin raises fructose 2,6 bisphosphate levels by a mechanism that seems to depend on activation of PI3K [J Biol Chem 1996 Sep 13;271(37):22289-92]. Note that this is not the whole story, because glucagon and insulin also have opposing effects on several other enzymes, including pyruvate kinase, and enzymes involved in glycogen synthesis/breakdown. Death rates in DKA vary widely between published series, with death rates generally in the range of one to ten percent, although higher rates have been reported! Such variation is likely due to different reasons for presentation, and patients presenting at various stages during the evolution of DKA
<urn:uuid:3acefe9a-a099-4745-850a-c92397fbfa04>
512
23
People still die from diabetic ketoacidosis. Poor patient education is probably the most important [...] . Differences in management are also likely to affect outcome. Patients who are more likely to die include: As noted, DKA in children may be associated with cerebral oedema. Although uncommon (~1%), this complication may be associated with a high mortality rate (about 25% or more), and a high rate of neurological complications in survivors. The pathogenesis is far from clear. It has been noted that those who develop cerebral oedema are more likely to have a low arterial partial pressure of carbon dioxide on admission [Glaser et al]. Some studies [Krane] suggest that cerebral oedema may even be present on admission. The clinical picture in such cases is often one of initial improvement in level of consciousness, followed by gradual decline over several hours, culminating in sudden collapse, resuscitation, and an adverse outcome. It is often asserted that over-vigorous rehydration (especially with relatively hypotonic fluids) is the prime cause of cerebral oedema in such patients, but there is little or no evidence to support this attractive contention. Implicating relatively hypotonic fluids in the pathogenesis of this cerebral oedema is attractive because we have long known that in the face of extracellular hypertonicity, brain cells undergo complex metabolic changes. "Idiogenic osmoles" are produced in the brain to limit brain cell shrinkage. There is increased intracellular production of osmotically active substances such as myoinositol and taurine . It seems logical that rapidly administered hypotonic fluid will rush into brain cells and result in cerebral oedema. However, in experimental animals, aggressive insulin therapy is more likely to be associated with cerebral oedema than is aggressive fluid therapy! Nevertheless, current texts now generally caution one against over-vigorous fluid resuscitation in children with DKA, recommending that one replenish the fluid deficit over 36 hours or more. In addition, the old-fashioned tendency to give massive amounts of insulin is now considered unacceptable. The initial acidosis seen with DKA is
<urn:uuid:3acefe9a-a099-4745-850a-c92397fbfa04>
512
23
by Patrick McDonnell. Wonder #566 Why Is It So Fun To Monkey Around? shares a bit of information about Jane Goodall's work with chimpanzees. Just Kids Pictures, Poems and Other Silly Animal Stuff Just for Kids compiled by Bonnie Louise Kuchler (pair it with Wonder #816 Who Is the Wisest Bird?). Hunters of the Night by Elaine Landau This book follows raccoons, big cats, owls, bats, snakes, alligators and crocodiles through their life at night, what they eat, their predators and 'fun facts'. Kids love looking at the real photographs and reading about them. This book could easily be paired with Wonder #305 Why Would You Hike At Night?. A Rainbow of Animals by Melissa Stewart This book is organized by color and animals that are that color. After each section, there is a map that shows where each animal of that color is located. The text is very simple, so this is a great book to get students thinking and wondering about animals and their colors. One of the animals featured in the orange section of the book is the panther chameleon. While many of the featured animals could go along with a Wonder, I would use Wonder #651 Why Do Chameleons Change Their Colors?. This Wonder helps explain in more detail how and why chameleons change colors. What is a reptile? and What is an amphibian? by Feana Tu'akoi This is a 'flip me over book' that kids love to look at the photographs and read over and over again. The book helps children to understand what amphibians and reptiles are and how they are classified. For example it says, "If it has webbed feet, it is an amphibian. Not always...." It then goes on to share how ducks, pelicans and otters have webbed feet, but are not amphibians. After your students learn what a reptile is, be sure to visit Wonder #365 How Do Fangs Work?. My students loved reading this Wonder. Now & Ben The Modern Inventions of Benjamin Franklin by Gene Barretta I love how this book is organized and would be a great mentor text in showing students different ways to organize their own writing. Each of
<urn:uuid:c55878d3-ba7c-4d0d-ae57-3ecc36430e57>
512
0
Essay starting transition words These are called transitional words and phrases neither will a velcro transition persuade an essay's readers that they are in the hands of a serious writer with. Examples of transition words discuss it with your transition starting with this type of examples of transition words and phrases, essay on air pollution. Transition words that contains all of the development for essays in which are ive been taught to. Using appropriate words in an academic essay 3 using appropriate words in illustrates the correct use of the transition as it signals a contrast. Dissertation submission starting words for an essay example of a research proposal just following my tips to add transition words to your essay can often. Essay starting transition words examples of cv templates he was a my section examples: actually, most of the passage into different. Using words correctly how to begin an essay: 13 engaging strategies i think we're in a time of transition. Having the right vocabulary is crucial for writing a first-class essay these words and phrases will 40 useful words and phrases transition words before a. Affordable prices starting at $1199 knowing about these skills will help you with your persuasive essaytransition words for persuasive essays. If writers are composing their 1st body paragraph, a transition within that first topic sentence will probably be useful see my graphical chart of an essay. Using transitional words in an argumentative essay the purpose of the argumentative mode, sometimes called the persuasive mode, is to change the way a reader thinks. Suggested ways to introduce quotations when you quote another writer's words, it's best to introduce or contextualize the quote don't forget to include author's. What are essay transition words and phrases check out our samples and tips on how to write a superb essay. The next video is starting find out why close essay transition words effective essay transitions: how to use transition words and phrases. Suggested transition words to lead readers through your essay that transition words indicate that one step has been completed and a new one will begin. Paragraphs are an important part of structuring any essay how to write a basic paragraph some good transition words some good transition words for starting. What are some good transition words for starting a if you didn't it would be one giant block of text and that is not an essay what transition words start. - Useful argumentative essay words and phrases 1
<urn:uuid:511b5ab9-064c-4ac3-84cf-f5010be9c53f>
512
0
Soccer is perhaps one of the fastest growing sports in USA. It is also one of the simplest sports in the world. Following are the 10 basic rules of soccer that every youth player and their parents should be aware of: 1) A typical soccer match lasts 90 minutes, divided into two halves of 45 minutes each. Play in each half is started via a kick off. Normally there are 11 players on a team, including a goalkeeper. Following are general standards for game length followed in youth soccer for different age groups: 2) The players should always play the ball, and never the player. Every action should be directed towards controlling the ball or taking away possession, but never to stop a player or tackle him. 3) Arms and hands are the only parts of the body, not allowed to control the ball. 4) Offside: As per the rule book, when the ball is played by the teammate, if you are in front or even with the second to last defender (goalkeeper is the last defender), you are guilty of offside. It is not a foul to be in offside position but if you become involved in the play, offside will be called. This can be a hard rule to understand. Don’t get too hung up on it. Trust the referees. Download the FIFA Laws of the Game . They have good diagrams of what is and isn’t considered offside. 5) Yellow Card typically signifies a caution and main reason for a player to receive a yellow card could be: persistent infringement, failure to ask referee for entering or leaving the field, dissent, unsporting behavior or failure to respect required distance on a restart. 6) Red Card signifies a send-off. If a player has received a red card that means that he/she would have to immediately leave the field and the surrounding area. His/her team will have to now play with 10 players. The reasons for red card could be receiving two yellow cards in a match, serious foul play, committing a foul so that the opposing team was unable to score a goal from a very easy opportunity, spitting or violent behavior. 7) Only goalkeepers are allowed to use their hands or arms to control the ball. There is only one condition when the goalkeeper is not allowed to use their hands, when the ball is kicked
<urn:uuid:2d197ceb-aba2-49ee-a331-9a920dc20b14>
512
0
The SAT Reasoning Test is a standardized examination used for university admissions. It was formerly referred to as the Scholastic Aptitude Test or the SAT I. Published by the College Board, a non-profit organization, the SAT is administered seven times a year. Currently, SAT scores range from 600 to 2400, and the test is divided into three equally weighted sections: critical reading, math, and writing. Understanding the material that will be on the test, and exactly how it is laid out, is critical to your success. You may wish to consider taking an SAT practice test or an SAT preparation class to make certain you do well. In the critical reading section, previously known as the verbal section, you will be expected to answer multiple-choice questions designed to test your vocabulary and reading comprehension. There are two kinds of questions: sentence completion and those based on reading passages. Sentence completion questions ask the test-taker to choose an appropriate word to complete a sentence. The reading passages are varied in nature; they range from narratives to passages from the social sciences. Questions about the passages test the student’s ability to identify the important elements of the passage. There is one more form of this kind of question, in which the student is asked to compare two much shorter passages and answer questions about them. The math section includes both multiple-choice questions and grid-in, or fill-in-the-blank, questions. Calculators are permitted, but not all calculators are allowed. This section tests a range of subjects, including, but not limited to, basic number theory, geometry, and algebra. There are ten grid-in questions which require you to write and bubble in your answer. The writing section is made up of an essay and multiple-choice questions. Multiple-choice questions in the writing section test your ability to recognize sentence errors and improve writing. SAT Study Schedule Another important element of the SAT is the time limit. Overall, you have 3 hours and 45 minutes to finish the SAT. The SAT format is as follows. There are two 25-minute and one 20-minute critical reading sections; all critical reading sections are multiple-choice. The writing section contains one
<urn:uuid:287b9fbc-d1bb-46e2-99da-9b979470ca7d>
512
0
English is one language which is used in many parts of the world. It is also the medium of teaching in many nations. Having proficiency in English is always an asset as it is accepted as a spoken language in many nations. Studying it as a language is different from studying it as a subject. While using it as a medium to learn another subject, you will see only the communicative side of the language. But if you opt to pursue English as your main subject, you will see that it has a literary side as well which is much richer and interesting. There are certain things that you should remember while writing an English research paper. It is necessary for assignments in all subjects to be written in flawless language. But while you are dealing with a research paper in English, there will be various other factors also which will form the criteria of judgment. If you are writing on a literary work, it is necessary to have a good background knowledge of the topic you are handling. This would include the background of the author of the work you choose to write on, as well. You will have to mention the year in which the work was published and also why you chose to write on that particular work. There is no need to say that your language should be completely flawless. Even the smallest mistakes in spelling, grammar or structure will look unforgivable in an English research paper. But error-free writing alone cannot create an impression when it is a research paper on English Literature. Your writing style and usage of words will also come under the scrutiny of the panel. You will need to use your words effectively and in the most powerful way while writing a research paper on English literature. There are various sections in English literature. The methods of handling the topics from different sections are not the same. For example, writing a research paper on poetry would be totally different from writing one on prose or fiction. Without knowing the specific approach you need to use towards each of the different genres, you will not be able to complete an English research paper successfully. A thorough knowledge of the author of the work which you are researching on will be inevitable while doing research on the work. The author’s background, his previous works and his special skills are all factors which will influence the outcome of your research. In works of biographic or autobiographic nature, the circumstances in which it was written also would become an important part of your research. Hence researching on a topic in English literature requires a
<urn:uuid:d9787bf6-90f2-4ee0-ae21-8874b050ac31>
512
0
No matter where you live or which college you go to, attending college is a great time in life. You’ll meet great new people, learn interesting things and discover things about yourself you never knew before. You should read these tips to live this experience to its fullest. When you are preparing for college, create a list of the items you need. Even if you are attending school close to home, it is much more convenient to have everything with you rather than calling your parents to deliver things. If you are away at college, far from home, this is even more important. Carry a personal water bottle to school. Remain hydrated all day. This is particularly important if you’ve got numerous classes back to back. Drinking plenty of fresh water is sure to help you remain focused and alert. This is especially important at schools located in warm climates. Spending a large portion of your day on study is crucial. The more time you spend applying yourself to your education, the more rewarded you will be. College is more than just party time. Excelling in college will reward you with a much better career and additional earning power. Always eat a good breakfast before going to class, especially if you have a test. You can even eat light; try some fruit or yogurt. Your stomach can really distract your attention from an exam. Feeling sick or lackluster can negatively impact your results in class. Get plenty of rest. Many college students get little, if any, sleep between late night parties, classes and homework. You may think you’ll do okay if you mess around with your sleep, but lack of sleep makes schoolwork harder. You’ll have trouble memorizing and recalling many things, and you will struggle with just about everything. You are away from home, and no one is going to clean and cook for you. Be sure you’re eating things that are healthy, that you keep your things tidy, and you sleep enough. Schedule equal time for attending classes, studying, recreation and taking care of yourself. Stress and an unhealthy diet can make you sick. If you wish to avoid the “freshman 15,” avoid eating too many simple carbs. Avoid foods that are processed or high in sugar. Stick to produce, whole grains and low-fat dairy to keep energy levels high. Avoid an entirely high protein diet as this is unbalanced and may
<urn:uuid:57a7a777-6e9d-4580-a33b-81ee0a713e54>
512
0
No matter where you live or which college you go to, attending college is a great time in life [...] cause health problems for you. Become familiar with the phone numbers for campus security. You need an easy way to contact them and campus police. Hopefully, you won’t need this number, but you should have it just in case. Do not coast on your reputation from high school. College is much different and many things you accomplished in high school won’t matter to people you encounter in college. Push yourself harder to succeed and try new things rather than expecting things to go the same way they did when you were in high school. Apply for an internship when you’re going to college. By following through with an interning opportunity, you will gain real-world experience and professional relationships. If things go well, you may even be offered a job. You might be able to get help finding an internship at your school. If you are still finding your passion and deciding on a major, do not limit yourself to just taking elective classes. Look around campus for activities that you might enjoy. There are clubs and work study jobs that might be of interest to you. There are various activities that happen at college nearly every week. Attempt to try a new activity each week. Know what constitutes plagiarism. It is illegal to engage in plagiarism. Make sure you understand how to properly cite works in order to avoid plagiarism. Professors have ways of verifying a paper is original, so make sure to write your own papers. Try to establish a regular sleeping schedule while away at school. Sleep deprivation is common amongst college students who balance work, classes and social lives. If you lack sufficient sleep, you will be unable to concentrate on schoolwork. Even study positions that do not involve your major are important, and you must concentrate on them. Future employers will look at both your experience and your academic performance. Work study programs are a great way to get work experience and help pay for your classes. Stick to people in college who have the same goals and study ethics as you. When you surround yourself with those that want to succeed, you’ll be more likely to succeed too. You can always have fun in the group of friends you go with. You’ll discover people that have a balanced approach to college. Take breaks regularly when you study. You can become burnt out from
<urn:uuid:57a7a777-6e9d-4580-a33b-81ee0a713e54>
512
23
Call us: 1.877.283.7882 | Monday–Friday: 8:00 AM–4:30 PM ET Category: Sexually Transmitted Diseases Over the years, scientists have developed condoms and vaccines to help prevent the spread of sexually transmitted diseases, as well as STD testing services that allow people to know their status. However, these testing and prevention methods are only effective if people choose to use them. Unfortunately, research continues to show that many people do not use condoms, and even fewer have been receiving the vaccine that helps protect against the human papillomavirus. HPV is a sexually transmitted virus that can potentially cause cervical cancer. While there is now a vaccine that helps prevent strains of this virus, healthcare professionals have had trouble encouraging young women to get it. Recently, researchers from Ohio State University discovered that when doctors are discussing the benefits of the vaccine to women, they should focus on STD prevention, rather than the fact that the shot may help them avoid developing cancer. Against the grain According to the researchers, these findings go against the conventional thought that women would be more concerned with developing cancer than getting an STD. The scientists explained that many healthcare providers have been stressing the cancer-prevention benefits of the HPV vaccine, and that the failure of this message may be why fewer than 20 percent of adolescent girls in the U.S. have gotten it. Janice Krieger, lead author of the study and assistant professor of communication at Ohio State University, explained that young girls don't respond well to the threat of cancer, and are more concerned about getting an STD. She said that early studies of the HPV vaccine suggested that women were most interested in the cancer prevention aspects of the vaccine. However, these studies were conducted on women of all ages, when the shot is actually targeted to women under the age of 26. "Cancer is something people start to worry about later in life, not when they're in high school and college. We decided to do a clean study that compared what message worked best with college-aged women versus what worked with their mothers," Krieger said. To come to their conclusions, researchers spoke to 188 female college students with an average age of 22, and 115 of their mothers with an average age of 50.
<urn:uuid:0ba463ba-939d-4289-9a5d-c818022fa842>
512
0
Call us: 1.877.283.7882 | Monday [...] The scientists gave half the women a package with HPV vaccine information with the headline "Prevent cervical cancer," while the others got a package that said "Prevent genital warts." They then asked the women how they felt about the vaccine. Results showed that among young women, the genital warts prevention message was the clear winner. "Cancer may seem to be the more serious issue to some older adults, but it is not the top concern for young women," concluded Krieger. The danger is real Whether or not young women are concerned about it, the threat of cervical cancer caused by HPV is real. According to the Centers for Disease Control and Prevention, almost all cases of cervical cancer are caused by HPV. Furthermore, 95 percent of anal cancer cases and 65 percent of vaginal cancers have been linked to HPV. This is why it is so important for people to protect themselves against this virus. While the vaccine does help prevent certain HPV strains, it does not protect against all forms of the virus, so people who get the HPV shot should still use condoms to help protect themselves against this and other STDs. Most of the time, HPV clears up on its own. However, no one wants to be part of the small percentage that does end up developing cancer as a result of contracting this virus, which is why they should protect themselves.
<urn:uuid:0ba463ba-939d-4289-9a5d-c818022fa842>
512
23
Python is a reasonably fast language, but it’s not as fast as compiled programs. That’s because CPython, the standard implementation, is interpreted. To be more precise, your Python code is compiled into byte code that is then interpreted. That’s good for learning, as you can run code in the Python REPL and see results immediately rather than having to compile and run. But because Python programs aren’t that fast, developers have created several Python compilers over the years, including IronPython and Jython. Fast performance isn’t the only reason for compiling; possibly the biggest disadvantage of scripting languages such as Python is that you implicitly provide your source code to customers. I wanted to compare a few Python compilers on the same platform, especially those that support Python 3.x. In the end, I chose four, all running on Ubuntu Linux: Nuitka, PyPy, Cython and cx_Freeze. (Originally I targeted five, but Pythran didn’t like the benchmark programs I used, so it didn’t make the cut.) Comparing Python Compilers Somebody has already done the work of creating a Python benchmark. I opted for PyStone, a translation of a C program by Guido van Rossum, the creator of Python (the C program was itself a translation of an Ada program). I found a converted version by developer Christopher Arndt on GitHub that was capable of testing Python 3. To give a sense of perspective, here’s CPython (i.e., standard Python) performance with Pystone:
Python 2.7.15rc1: 272,647 pystones/second
Python 3.6.5: 175,817 pystones/second
As you can see, there’s quite a big difference between Python 2 and 3 (the more pystones per second, the better). In the following breakdowns, all Python compilers were benchmarked against Python 3. Although you can follow the instructions on the download page, the following on Ubuntu worked fine for me:
sudo apt install nuitka
sudo apt install clang
By default, Nuitka uses gcc, but a parameter lets you use clang, so I tested it with both. The clang
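If you want to reproduce the CPython baseline yourself, the classic pystone module exposes a pystones(loops) function returning the elapsed time and the pystones/second figure; that is the interface in van Rossum's original, and Arndt's Python 3 port appears to keep it, but check the copy you download. A small harness that averages a few runs might look like this:

# Minimal benchmark harness around the pystone module (assumes pystone.py,
# e.g. Christopher Arndt's Python 3 port, sits next to this script and
# exposes pystones(loops) -> (benchtime, stones) as the original does).
import statistics
import pystone

LOOPS = 500_000
RUNS = 5

results = []
for _ in range(RUNS):
    benchtime, stones = pystone.pystones(LOOPS)
    results.append(stones)

print(f"pystones/sec over {RUNS} runs: "
      f"mean {statistics.mean(results):,.0f}, "
      f"stdev {statistics.stdev(results):,.0f}")

Run under python2 and python3 in turn, it should show the same sort of gap as the figures quoted above.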
<urn:uuid:e726cba3-1675-4236-a969-b003deecc64c>
512
0
Python is a reasonably fast language, but it’s not as fast as compiled programs. That’ [...] compiler is part of the LLVM family, and is intended as a modern replacement for gcc. Compiling pystone.py with gcc was as simple as this (first line), or with clang (second line), and with link-time optimization for gcc (third line):
nuitka pystone.py
nuitka pystone.py --clang
nuitka pystone.py --lto
After compiling, which took about 10 seconds, I ran the pystone.exe from the terminal with: I did 500,000 passes:
Size, pystones/sec
1. 223.176 Kb, 597,000
2. 195.424 Kb, 610,000
3. 194.2 Kb, 600,000
These were the averages over 5 runs. I’d closed down as many processes as I could, but do take the timings with a bit of salt because there was a +/- 5% on timing values. Guido van Rossum once said: “If you want your code to run faster, you should probably just use PyPy.” I downloaded the portable binaries into a folder, and, in the bin folder under that, copied pystone.py. Then I ran it like this: The result was a stunning 1,776,001 pystones per second, almost three times faster than Nuitka. PyPy uses a just-in-time compiler and does some very clever stuff to achieve its speed. According to reported benchmarks, it is 7.6 times faster than CPython on average. I can easily believe that. The only (slight) disadvantage is that it’s always a little behind Python versions (i.e., up to 2.7.13 (not 2.7.15) and 3.5.3 (not 3.6.5)). Producing an exe takes a bit of work. You have to write your Python in a subset called RPython. Cython isn’t just a compiler for Python; it’
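Given the +/- 5% run-to-run noise mentioned above, it is worth scripting the repeat runs of each compiled binary rather than timing them by hand. A small wrapper along these lines works for any of the produced executables; the ./pystone.exe path is just the Nuitka output used as an example, so adjust it to whatever your compiler emitted:

# Wall-clock an already-compiled benchmark binary several times and average.
import statistics
import subprocess
import time

def time_binary(path, runs=5):
    """Return wall-clock timings (seconds) for several runs of a binary."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([path], check=True, capture_output=True)
        timings.append(time.perf_counter() - start)
    return timings

timings = time_binary("./pystone.exe")   # hypothetical path; adjust as needed
print(f"mean {statistics.mean(timings):.3f} s, stdev {statistics.stdev(timings):.3f} s")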
<urn:uuid:e726cba3-1675-4236-a969-b003deecc64c>
512
23
Python is a reasonably fast language, but it’s not as fast as compiled programs. That’ [...] s for a superset of Python that supports interoperability with C/C++. CPython is written in C, so it’s a language that generally mixes well with Python. Setting things up with Cython is a little bit fiddly. It’s not like Nuitka, which just runs out of the box. First, you have to start with a Python file with a .pyx extension; you run Cython to create a pystone.c file from that:
cython pystone.pyx --embed
Don’t omit the --embed parameter. It adds in main and that is needed. Next, you compile pystone.c with this lovely line:
gcc $(python3-config --includes) pystone.c -lpython3.6m -o pystone.exe
If you get any errors, such as ‘can’t find the -lpython version,’ it could be the result of your version of Python. To see what version is installed, run this command:
pkg-config --cflags python3
After all that, Cython only gave 228,527 pystones/sec. However, Cython needs you to do a bit of work specifying the types of variables. Python is a dynamic language, so types aren’t specified; Cython uses static compilation, and using C-typed variables lets it produce much better optimized code. (The documentation is quite extensive and required reading.)
Size, pystones/sec
1. 219.552 Kb, 228,527
cx_Freeze is a set of scripts and modules for "freezing" Python scripts into executables, and can be found on GitHub. I installed it and created a folder freeze to manage things in:
sudo pip3 install cx_Freeze --upgrade
One problem I found with the install script was an error about a missing "lz" (zlib) library. You need to have zlib installed; run this to install it:
sudo apt install zlib1g-dev
After that, the cx_Freeze command took the pystone.py script and created a dist folder containing a lib folder, a 5MB lib file and the pystone application file:
cxfreeze
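A brief aside on what "specifying the types of variables" looks like in practice: Cython lets you declare C types with cdef, and it is typically those declarations, rather than the compilation step alone, that buy the large speed-ups. The snippet below is a generic, hypothetical toy in Cython syntax (not taken from pystone) just to show the shape of the change:

# typed_demo.pyx - a toy example of the kind of typing Cython rewards;
# it is not part of pystone, just an illustration.
def untyped_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def typed_sum(int n):
    cdef long long total = 0
    cdef int i
    for i in range(n):
        total += i * i
    return total

Built as a normal Cython extension (or folded into a copy of pystone.pyx), the typed loop compiles down to plain C arithmetic; annotations of this kind applied to the hot paths are what it would take to move Cython's 228,527 figure meaningfully.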
<urn:uuid:e726cba3-1675-4236-a969-b003deecc64c>
512
23
[Image: “The Dormant Workshop” by Tom Noonan, courtesy of the architect]. While studying at the Bartlett School of Architecture in London, recent graduate Tom Noonan produced a series of variably-sized hand-drawings to illustrate a fictional reforestation of the Thames estuary. [Image: “Log Harvest 2041” by Tom Noonan, courtesy of the architect]. Stewarding, but also openly capitalizing on, this return of woodsy nature is the John Evelyn Institute of Arboreal Science, an imaginary trade organization (of which we will read more, below). [Image: “Reforestation of the Thames Estuary” by Tom Noonan, courtesy of the architect]. The urban scenario thus outlined—imagining a “future timber and plantation industry” stretching “throughout London, and beyond”—is like something out of Roger Deakin’s extraordinary book Wildwood: A Journey Through Trees (previously described here) or even After London by Richard Jefferies. In that latter book, Jefferies describes a thoroughly post-human London, as the ruined city is reconquered by forests, mudflats, aquatic grasses, and wild animals: “From an elevation, therefore,” Jefferies writes, “there was nothing visible but endless forest and marsh. On the level ground and plains the view was limited to a short distance, because of the thickets and the saplings which had now become young trees… By degrees the trees of the vale seemed as it were to invade and march up the hills, and, as we see in our time, in many places the downs are hidden altogether with a stunted kind of forest.” Noonan, in a clearly more domesticated sense—and it would have been interesting to see a more ambitious reforestation of all of southeast England in these images—has illustrated an economically useful version of Jefferies’s eco-prophetic tale. [Image: “Lecture Preparations” by Tom Noonan, courtesy of the architect]. From Noonan’s own project description: The reforestation of the Thames Estuary sees the transformation of a city and its environment, in a future where timber is to become the City’s main building resource. Forests and
<urn:uuid:8d8a31e5-6cb1-4982-a5b7-86d0f5d29bdd>
512
0
[Image: “The Dormant Workshop” by Tom Noonan, courtesy of the [...] plantations established around the Thames Estuary provide the source for the world’s only truly renewable building material. The river Thames once again becomes a working river, transporting timber throughout the city. It is within these economic circumstances that the John Evelyn Institute of Arboreal Science can establish itself, Noonan suggests: The John Evelyn Institute of Arboreal Science at Deptford is the hub of this new industry. It is a centre for the development and promotion of the use of timber in the construction of London’s future architecture. Its primary aim is to reintroduce wood as a prominent material in construction. Through research, exploration and experimentation the Institute attempts to raise the visibility of wood for architects, engineers, the rest of the construction industry and public alike. Alongside programmes of education and learning, the landscape of the Institute houses the infrastructure required for the timber industry. [Image: “Urban Nature” by Tom Noonan, courtesy of the architect]. And the Institute requires, of course, its own architectural HQ. [Image: “Timber Craft Workshop” by Tom Noonan, courtesy of the architect]. Noonan provides that, as well. He describes the Institute as “a landscape connecting Deptford with the river,” not quite a building at all. It is an “architecture that does not conform to the urban timeframe. Rather, its form and occupation is dependent on the cycles of nature.” The architecture is created slowly—its first years devoid of great activity, as plantations mature. The undercroft of the landscape is used for education and administration. The landscape above becomes an extension of the river bank, returning the privatised spaces of the Thames to the public realm. Gaps and cuts into the landscape offer glimpses into the monumental storage halls and workshops below, which eagerly anticipate the first log harvest. 2041 sees the arrival of the first harvest. The landscape and river burst in a flurry of theatrical activity, reminiscent of centuries before. As the plantations grow and spread, new architectures, infrastructures and environments arise throughout London and the banks of the Thames, and beyond. The drawings are extraordinary, and worth
<urn:uuid:8d8a31e5-6cb1-4982-a5b7-86d0f5d29bdd>
512
23
Researchers in China recently found E.coli bacteria that are resistant to the antibiotic colistin, often called the antibiotic of last resort. While experts have been warning that the finding heralds a post-antibiotic era, what is concerning healthcare professionals is that the piece of DNA that makes the bacteria resistant to colistin can be passed on to other strains of harmful bacteria. The gene responsible, known as mcr-1, was found on a circular structure of DNA known as a plasmid. Plasmids carry "optional extras" for bacteria: genes that are not essential for survival but can provide a benefit. In this case, surviving in the presence of colistin. Some plasmids can be copied and passed on to other bacteria, giving them the optional extras. The researchers in China believe that the resistant E.coli bacteria – first discovered in pigs and meat products – evolved the ability to withstand colistin as a result of its intensive use in animal feed. Colistin is largely used to help treat antibiotic-resistant infections by making the bacterial cell membrane easier for antibiotics to cross. It can kill bacteria by itself, but it is most often used in conjunction with other antibiotics. When bacteria, such as E.coli, are constantly exposed to colistin, those that have no defence die. Those that gain resistance to the antibiotic, through natural mutation of DNA during cell division, survive and pass on those beneficial changes to the next generation. So you end up with a population of organisms all resistant to the antibiotic. Bacteria that become resistant to colistin can do so in different ways. Colistin could no longer stick to bacteria, the cell membrane could be more resistant or the antibiotic could be ejected back out of the cell by the bacteria. Some are strengthened, others weakened In our own research at Nottingham Trent University we have exposed bacteria to the same antibiotic and had resistance occur in some cases, but not in others. As some disinfectants work in a similar fashion to colistin, we wanted to know if exposure to disinfectants at home or in hospitals might lead to resistance to colistin. In the mutants that were generated, some were able to invade human cells better
<urn:uuid:fb658161-023e-403a-b26d-344885109e79>
512
0
Researchers in China recently found E.coli bacteria that are resistant to the antibiotic [...] than before, but others lost the ability to invade at all. So I would like to know what other changes have been seen in the bacteria reported by the researchers in China. I remember, ruefully, that the first colistin-resistant bacteria I studied had a much tougher cell membrane, but because of this rigidity, had a tendency to die when I stored them in freezing temperatures. Their rigidity, which was so useful against colistin, meant they effectively shattered at very low temperatures. So the development of resistance to colistin may have affected the bacteria discovered by the team in China in other ways - ways that are more harmful to the bacteria than beneficial. However, the speed at which the bacteria acquired resistance is the most alarming aspect of this latest finding. A single interaction between a resistant and a non-resistant bacterium can now result in two resistant organisms as the plasmid is copied and passed from one bacterium to another. Up to now, bacteria that have developed resistance to colistin would have needed to have been exposed to the antibiotic for a long time before a resistant strain evolved. Plasmid-carried resistance is much more rapid. This is not the slow creep of accumulated changes, but the transfer of the entire set of genes required for resistance in one go. This plasmid transfer of a resistance gene has been seen for some time with many other antibiotics, including those that colistin is used in conjunction with but, until now, not with colistin itself. Bacteria without borders Plasmids are "expensive" for a bacterium to carry as they use a lot of energy, so there has to be a driving force killing off the non-plasmid-carrying bacteria so the population passes on this ability generation to generation. The new research suggests that the use of colistin in farm animals in the areas the bacteria were isolated from is the likely cause, as the bacteria would be constantly exposed to colistin, with only those carrying the plasmid surviving. This is also not the first time that use of antibiotics in farm animals has led to bacteria that could cause antibiotic-resistant infections in humans. Colistin is rarely used to treat farm animals in Europe. However, air travel and bacteria'
<urn:uuid:fb658161-023e-403a-b26d-344885109e79>
512
23
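A rough way to see why plasmid-borne resistance spreads so much faster than mutation-based resistance is to treat each round of bacterial contact as a doubling step, as the passage above describes: one resistant cell plus one susceptible cell yields two resistant cells. The short Python sketch below works through that arithmetic. The population size and the assumption of one successful plasmid transfer per cell per round are illustrative choices, not figures from the study.

import math

population = 1_000_000_000   # assumed population size: one billion cells
resistant = 1                # start from a single plasmid-bearing cell

rounds = 0
while resistant < population:
    resistant *= 2           # each resistant cell converts one susceptible neighbour
    rounds += 1

print(f"Rounds of contact needed to saturate {population:,} cells: {rounds}")
print(f"Check: log2(population) = {math.log2(population):.1f}")

# Roughly 30 doubling rounds cover a billion-cell population. Waiting for
# resistance to arise by mutation instead means waiting for a rare event
# (often quoted in the range of one per million to one per billion cell
# divisions) before any spread can even begin.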
Acupuncture is a treatment for pain that has been around for centuries. Rooted in sixteenth-century Chinese medical beliefs that all pain and illness stem from an imbalance of energy in the body, acupuncture is still widely used in both eastern and western cultures to treat many different ailments. Finding the Acupuncture Balance. Those who administer acupuncture use stainless steel needles to stimulate certain parts of the body that are known for carrying energy. By triggering these channels in the body, the treatment helps the body to overcome and resist illness or other conditions that may be present, such as joint pain. It is also thought that acupuncture helps to release endorphins, which are pain-blocking chemicals in the body. There is research that shows that acupuncture can be highly effective in relieving certain types of knee pain, including arthritis in the knee joint. Studies indicate that acupuncture can decrease the pain and stiffness levels felt by those with osteoarthritis, while increasing strength and flexibility in the knee. Not only has acupuncture been shown to provide great relief to those with chronic knee pain, but it can also be an alternative to knee replacement surgery. Alternative to Meds. Many people find acupuncture a welcome substitute for traditional pain medications that are often prescribed by doctors when their patients are experiencing joint pain. Some feel that acupuncture gives them longer-lasting relief from their pain than painkillers do. They are also happy to avoid any side effects that may come with prescription pain pills. Acupuncture and Physical Therapy. When a person is suffering from joint pain that stems from arthritis, it is very difficult to find the motivation to want to use those joints. However, movement and exercise are what can help the most. By exercising the muscles and joints where the pain occurs, you can add strength and reduce inflammation. Acupuncture is often used with physical therapy to promote movement and exercise. Therapists will first use acupuncture to ease pain and stiffness. Once the stiffness is bearable, a physical therapist can then work with the patient on exercises that promote the movement of joints, further relieving pain. Doctors and therapists have found that acupuncture can help with many different pains that their patients experience, such as low back pain, neck pain as well
<urn:uuid:8634b544-d4c3-444f-b87d-5ec927ca7a49>
512
0
In the middle of the night, while most residents were sleeping, a devastating fire started at Grenfell Tower in London. The emergency response was rapid and robust: more than 200 firefighters attended the scene, with assistance arriving just six minutes after the first calls were made. Emergency services have confirmed that 17 people are dead. More remain missing and dozens are injured. Already, people are asking how the fire spread so rapidly and why it was so difficult for residents to escape. Doubtless, in time, a thorough investigation will reveal the full details of what caused the disaster. But from an engineering perspective, there are a number of factors in the design of the 24-storey tower block that may have contributed to the speed and scale of the blaze. Most of the current guidelines across the world contain detailed design requirements for fire safety such as evacuation routes, compartmentation and structural fire design. But Grenfell Tower was built in 1974. At that time, the rules and regulations were not as clear and well-developed as they are now. Evacuation and compartmentation. The evacuation route is one of the most important design elements when it comes to fire safety. The route should allow occupants to escape the building as quickly as possible, while sheltering them from smoke and flames. Some tall buildings have staircases installed on the outside to prevent people from getting stuck in the corridors and to provide access to fresh air while they escape. Other options include installing high-power fans inside buildings, to clear the evacuation route of smoke in the event of a fire. This feature is included in the design of Dubai’s Burj Khalifa, the tallest building in the world. A blog posted in November 2016 makes it clear that residents were not happy with the fire safety of the escape route, and the tower's design would suggest there was only one set of stairs for evacuation. Investigators will need to determine what evacuation routes were available. Another key strategy is to correctly design fire compartments to keep the fire from spreading quickly. This entails placing barriers in the building – such as fire-resistant doors and walls – to confine the fire to a local area, or at least slow the speed at which it can spread. These compartments are designed by architects based on the function of the building, so residential and commercial buildings will
<urn:uuid:0c3c3c2c-0a4f-413b-b6ab-f02d9f3628bd>
512
0
In the middle of the night, while most residents were sleeping, a devastating fire started at Gren [...] have different compartment design strategies. In current design practice, some buildings even include special design measures for fires, such as refuge rooms for occupants in the higher storeys, who could have trouble escaping down stairs. There are also active fire protection methods such as using sprinklers. Though a parliamentary report following the 2009 Lakanal House fire in Camberwell, London, in which six people died, recommended that sprinkler systems be installed in tower blocks across the UK, it’s not clear that these measures were implemented in Grenfell Tower. A local residents' action group also claimed that their warnings about a lack of fire safety measures “fell on deaf ears”. The fire risk level of any building also depends on its structural design – that is, the capacity of its materials to resist fire. Different materials receive different fire ratings in each design. For example, steel buildings are normally required to have structural elements such as beams or columns that can stand for one to two hours with the help of fire protection material such as intumescent paint, which swells up when heated to protect the material beneath. According to reports, the key structural components of Grenfell Tower are mostly made of concrete – a material which rates highly in terms of fire resistance. While other materials can buckle in high temperatures, concrete structures can help to prevent the collapse of a building in case of fire, as well as making it safer to use helicopters – which can dump up to 9,842 litres of water at a time – to extinguish the blaze. There were also reports relating to cladding added as part of an £8.7m refurbishment in 2016. The material used for the cladding was primarily aluminium, which is not fire-resistant. What’s more, aluminium has high conductivity – so the cladding itself could have heated up very quickly, failing to prevent the fire from travelling through the windows and up the exterior of the block from one storey to another. In truth, most old buildings do not conform to the latest guidelines for fire safety design, so it is imperative to update them by installing sprinklers, fire alarms and extra fire evacuation staircases. While those affected may have to wait for some time before the causes of the fire become
<urn:uuid:0c3c3c2c-0a4f-413b-b6ab-f02d9f3628bd>
512
23
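The article's point about aluminium's high conductivity can be made concrete with a back-of-the-envelope estimate. The Python sketch below compares how long heat takes to soak through a thin aluminium panel versus a thick concrete wall, using the standard scaling t ~ L^2 / alpha, where alpha = k / (rho * c) is the thermal diffusivity. The material properties are typical textbook values and the thicknesses are assumed for illustration only; this is a crude one-dimensional estimate, not a fire model, and it is not taken from the article.

materials = {
    # name: (conductivity W/m.K, density kg/m^3, specific heat J/kg.K, thickness m)
    "aluminium panel (3 mm)": (237.0, 2700.0, 900.0, 0.003),
    "concrete wall (150 mm)": (1.4, 2300.0, 880.0, 0.150),
}

for name, (k, rho, c, thickness) in materials.items():
    alpha = k / (rho * c)            # thermal diffusivity, m^2/s
    t_soak = thickness ** 2 / alpha  # characteristic soak-through time, s
    print(f"{name}: alpha = {alpha:.2e} m^2/s, t ~ {t_soak:,.0f} s")

# The thin aluminium skin equilibrates in a fraction of a second, while the
# thick concrete wall takes on the order of hours - consistent with the
# contrast the article draws between the cladding and the concrete frame.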
Unfortunately we live in a toxic world; thousands of tons of chemicals have been released into our environment, including heavy metals, plastics, pesticides, industrial chemicals, dioxins, phthalates, and xenoestrogens. One hundred percent of newborn babies that have been tested have shown traces of rocket fuel, dioxins, DDT and other chemicals in their umbilical cords. Rates of cancer are on the rise, with many cancers being attributed to chemical toxicity. Detoxification is a body-wide process incorporating the skin, kidneys, lungs and liver. Most of this work is carried out by the liver, which works closely with the bowel to clean and detoxify your body. Once the gut is healed, the liver needs to be supported to do its job of getting rid of all these chemicals we are exposed to. Overview of the liver's function. The liver is the largest internal organ in the human body and it is often the most overworked. It weighs approximately 1.4kg and filters about 1.5 liters of blood every minute. The liver has five primary roles in maintaining health, including blood filtration and cholesterol synthesis. The liver and bowel are integral to the process of detoxifying toxic compounds. There are two enzymatic pathways of detoxification in the liver – phase 1 (the P450 pathway) and the phase 2 pathways. The phase 1 pathway is a set of enzymes that reside inside the liver cells. As blood is filtered through the liver cells, these enzymes chemically transform compounds to a less toxic form, making them water-soluble, or convert them into a more toxic form. Making a toxin water-soluble allows it to be directly excreted by the kidneys, while the more toxic compounds are left ready to be processed by the phase 2 enzymes. Phase 1 enzymes require a host of nutrients, vitamins and minerals. For each molecule of toxin metabolized, a free radical is produced as a by-product, so phase 1 detoxification creates a great demand for antioxidants. The main antioxidant required for phase 1 detoxification is glutathione, which itself
<urn:uuid:3656dd9c-5512-4365-96e0-54a4893cb76b>
512
0
Unfortunately we live in a toxic world; there have been thousands of tons of chemicals released into our environment [...] requires support from selenium and vitamin E. The metabolites from phase 1 are then shunted through the six different pathways of phase 2 detoxification. Each phase 2 pathway works best at detoxifying certain chemicals, but there is a considerable overlap in activity among the enzymes. During phase 2, toxins are attached or conjugated to certain nutrients and amino acids, thus enabling the liver to turn drugs, hormones and various toxins into substances that can be excreted. The six pathways are: 1. Glutathione conjugation, which accounts for approximately 60% of the phase 2 enzymatic activities. This is where toxins are bound to the antioxidant glutathione before being excreted by the kidneys. 2. Amino acid conjugation requires several amino acids including glycine, taurine, glutamine, arginine, and ornithine. 3. The sulphation pathway binds toxins to sulphur-containing compounds and clears the steroid hormones estrogen and testosterone, as well as thyroid hormones. 4. The glucuronidation pathway joins glucuronic acid to toxins. 5. Methylation involves conjugating methyl groups to toxins. 6. The acetylation pathway joins toxins such as sulfa drugs to a molecule of acetyl-CoA. One of the main routes of elimination for these processed toxins and hormones is through the bile. Gallstones prevent the liver from eliminating bile and may be attributed to high-fat, low-fiber diets and alcohol consumption. In the bowel, the bile is bound up with dietary fiber and eliminated in the stool. An enzyme in the bowel called beta-glucuronidase, produced by unfriendly bacteria, is capable of freeing the “old” or processed hormones and toxins from the dietary fiber, making them available to be reabsorbed or “recycled”, increasing toxicity. Also consider that it is well known that alcohol and the pill deplete folic
<urn:uuid:3656dd9c-5512-4365-96e0-54a4893cb76b>
512
23
History of Jigsaw Puzzles: A Brief History, by Anne D. Williams © 1997. The origins of jigsaw puzzles go back to the 1760s, when European mapmakers pasted maps onto wood and cut them into small pieces. The "dissected map" has been a successful educational toy ever since. American children still learn geography by playing with puzzle maps of the United States or the world. The eighteenth-century inventors of jigsaw puzzles would be amazed to see the transformations of the last 230 years. Children’s puzzles have moved from lessons to entertainment, showing diverse subjects like animals, nursery rhymes, and modern tales of superheroes and Disney. But the biggest surprise for the early puzzle makers would be how adults have embraced puzzling over the last century. Puzzles for adults emerged around 1900, and by 1908 a full-blown craze was in progress in the United States. Contemporary writers depicted the inexorable progression of the puzzle addict: from the skeptic who first ridiculed puzzles as silly and childish, to the perplexed puzzler who ignored meals while chanting "just one more piece", to the bleary-eyed victor who finally put in the last piece in the wee hours of the morning. The puzzles of those days were quite a challenge. Most had pieces cut exactly on the color lines. There were no transition pieces with two colors to signal, for example, that the brown area (roof) fit next to the blues (sky). A sneeze or a careless move could undo an evening’s work because the pieces did not interlock. And, unlike children’s puzzles, the adult puzzles had no guide picture on the box; if the title was vague or misleading, the true subject could remain a mystery until the last pieces were fitted into place. Because wood puzzles had to be cut one piece at a time, they were expensive. A 500-piece puzzle typically cost $5 in 1908, far beyond the means of the average worker who earned only $50 per month. High society, however, embraced
<urn:uuid:16822185-4e2a-4c2f-9a5a-f6c9550bc827>
512
0
History of Jigsaw Jigsaw Puzzles A Brief History "By Anne [...] the new amusement. Peak sales came on Saturday mornings when customers selected puzzles for their weekend house parties in Newport and other country retreats. The next few years brought two significant innovations. First, Parker Brothers, the famous game manufacturer, introduced figure pieces into its "Pastime" brand puzzles. Figure pieces made puzzles a bit easier to assemble. But the fascination of pieces shaped like dogs, birds, and other recognizable objects more than offset the somewhat reduced challenge. Second, Pastimes and other brands moved to an interlocking style that reduced the risk of spilling or losing pieces. Pastime puzzles were so successful that Parker Brothers stopped making games and devoted its entire factory to puzzle production in 1909. Following this craze, puzzles continued as a regular adult diversion for the next two decades. With the onset of the Great Depression in 1929, puzzles for adults enjoyed a resurgence of popularity, peaking in early 1933 when sales reached an astounding 10 million per week. Puzzles seemed to touch a chord, offering an escape from the troubled times, as well as an opportunity to succeed in a modest way. Completing a jigsaw gave the puzzler a sense of accomplishment that was hard to come by when the unemployment rate was climbing above 25 percent. With incomes depleted, home amusements like puzzles replaced outside entertainment like restaurants and night clubs. Puzzles became more affordable too. Many of the unemployed architects, carpenters, and other skilled craftsmen began to cut jigsaw puzzles in home workshops and to sell or rent them locally. During the 1930s craze for puzzles, drugstores and circulating libraries added puzzle rentals to their offerings. They charged three to ten cents per day, depending on size. Another important development was the introduction of die-cut cardboard puzzles for adults. Mass production and inexpensive cardboard allowed the manufacturers to cut prices substantially. There was a vogue for advertising puzzles in mid-1932. Retail stores offered free puzzles with the purchase of a toothbrush, a flashlight, or hundreds of other products. What better
<urn:uuid:16822185-4e2a-4c2f-9a5a-f6c9550bc827>
512
23
History of Jigsaw Jigsaw Puzzles A Brief History "By Anne [...] way to keep a brand name before the public than to have customers working for hours to assemble a picture of the product? The autumn of 1932 brought a novel concept, the weekly jigsaw puzzle. The die-cut "Jig of the Week" retailed for 25 cents and appeared on the newsstands every Wednesday. People rushed to buy them and to be the first among their friends to solve that week’s puzzle. There were dozens of weekly series including "Picture Puzzle Weekly," "B-Witching Weekly," "Jiggers Weekly," and (featuring popular films) "Movie Cut-Ups." With the competition from the free advertising puzzles and the inexpensive weekly puzzles, the makers of hand-cut wood puzzles were hard-pressed to keep their customers. Yet the top quality brands like Parker Pastimes retained a loyal following throughout the Depression, despite their higher prices. Indeed the Depression led to the birth of Par Puzzles, long dubbed the "Rolls Royce of jigsaw puzzles." Frank Ware and John Henriques, young men with no job prospects, cut their first puzzle at the dining room table in 1932. While other firms were cutting costs (and quality), Par steadily improved their puzzles, and marketed them to affluent movie stars, industrialists and even royalty. Par specialized in customized puzzles, often cutting the owner’s name or birth date as figure pieces. Ware and Henriques also perfected the irregular edge to frustrate traditional puzzlers who tried to start with the corners and edge pieces. They further teased their customers with misleading titles and "par times" that were unattainable for all but the fastest puzzlers. After World War II, the wood jigsaw puzzle went into a decline. Rising wages pushed up costs substantially because wood puzzles took so much time to cut. And as prices rose, sales dropped. At the same time improvements in lithography and die-cutting made the cardboard puzzles more attractive, especially when Springbok introduced high quality reproductions of fine art on jigsaws. In 1965 hundreds of thousands of Americans struggled to assemble Jackson Pollock’s "Conver
<urn:uuid:16822185-4e2a-4c2f-9a5a-f6c9550bc827>
512
23
History of Jigsaw Jigsaw Puzzles A Brief History "By Anne [...] gence," billed by Springbok as "the world’s most difficult jigsaw puzzle." One by one, the surviving brands of wood puzzles disappeared. Parker Brothers discontinued its Pastime puzzles in 1958. By 1974, both Frank Ware of Par and Straus (another long-time manufacturer) had retired from the business. The English "Victory" puzzles, easily found in department stores in the 1950s and 1960s, almost completely vanished. As the true addicts of wood puzzles began to suffer withdrawal symptoms, Steve Richardson and Dave Tibbetts saw an opportunity to fill the void. They founded Stave Puzzles, and within a few years had succeeded Par as the leader in wood puzzles. Indeed, Stave went several steps beyond Par, by commissioning original artwork that was specially designed to interact with the cutting patterns. Experimentation with pop-up figure pieces led to three-dimensional puzzles such as a free-standing carousel. Over the years Richardson invented many trick puzzles that fit together in several different wrong ways, but with only one correct solution. Stave emphasizes personalized puzzles and service, even remembering its customers’’ birthdays. Stave’s success with luxury puzzles convinced others that a market could be found, leading to a broader resurgence of hand-cut and custom puzzles. The last decade has brought many design innovations as new craftspeople have turned to jigsaw puzzles. There are even some wood puzzles cut by computer-controlled water jets or lasers. Puzzle aficionados of today can choose from a number of different styles of wood puzzles to suit their passions for perplexity. And quite a few are graduating from cardboard to wood puzzles, as they discover the satisfying heft of the wood pieces, the challenge of matching their wits against an individual puzzle cutter, and the thrill of watching a picture emerge from a plain box with no guide picture on the lid.
<urn:uuid:16822185-4e2a-4c2f-9a5a-f6c9550bc827>
464
23
Glaucoma is one of the leading causes of blindness worldwide. It is estimated to affect more than 60 million patients around the globe, with the number of affected patients set to rise considerably in coming years as the world population ages. In the United States alone, approximately 2.7 million people have glaucoma. It is a progressive disease of the optic nerve, usually associated with elevation of the intraocular pressure, that can lead to blindness if untreated. Patients must be monitored closely to ensure they are receiving the most appropriate treatment. This requires extensive cooperation between the physician and the patient. At the Boland Eye Center, it is our belief that knowing our patients well and understanding them as individuals is a key component in the treatment of Glaucoma. Patients with Glaucoma need a personalized treatment plan and follow-up schedule, and no two patients are the same. We use state-of-the-art equipment that helps our doctors diagnose Glaucoma earlier to improve the chances of preserving vision. This equipment, combined with years of knowledge and experience, allows our doctors to select the most appropriate medical or surgical treatment to slow the progression of Glaucoma. Our office is equipped with, among other things, a Heidelberg SD-OCT, a Humphrey Visual Field Analyzer, an Ultrasonic Pachymeter, and a Digital Fundus camera. All of these instruments provide important pieces in the puzzle that is Glaucoma diagnosis and treatment. First line, or initial, treatments for glaucoma include Selective Laser Trabeculoplasty (SLT) and prescription eye drops. SLT is done in the office and works by making the eye more effective at controlling the pressure. Specifically, SLT treats the part of the eye that is responsible for draining the fluid that creates pressure in the eye in the first place. By applying laser to that structure, the anatomy of the eye is altered enough to improve its efficiency at regulating fluid outflow. This procedure is not painful and does not restrict patient's activities. It can also be repeated in the future if necessary. It is usually a more economical way of treating glaucoma compared to medications. Today, there are multiple medications used in the treatment of Glaucoma. The goal of Glaucoma treatment is to reduce the Intraoc
<urn:uuid:3ff31da6-f70d-4ff5-bd61-b97611d70788>
512
0
Glaucoma is one of the leading causes of blindness worldwide. It is estimated to affect [...] ular Pressure (IOP) of the eyes. Most medications used to treat Glaucoma are topical medications, namely eye drops. Eye drops can be very effective in controlling the IOP of patients with Glaucoma. Several families of medication exist that reduce IOP. The most widely prescribed family of drops is the Prostaglandin Analogs, which includes the medications Lumigan, Travatan Z, and Xalatan. These medications have a significant impact on IOP with very few side effects. The most common side effects of the Prostaglandin Analogs are redness of the eyes and the thickening and lengthening of the eyelashes. Another family of drops is the Beta-Blockers. This class of drops includes the brands Timoptic, Betimol, and Istalol. These medications also have a significant impact on IOP but have the potential for more side effects. Side effects of Beta-Blockers may include Bradycardia and respiratory distress. These medications must be closely monitored by the doctor to ensure safe usage. Two more families of eye drops are the Alpha-Agonists and Carbonic Anhydrase Inhibitors. These classes are represented by the brands Alphagan P and Azopt or Trusopt, respectively. These drops are often used in conjunction with other drops to lower the IOP. Finally, there are drops that combine medication from two classes in one drop. These include Combigan, Cosopt, and Simbrinza. Not all Glaucoma can be treated the same way. One type of Glaucoma is known as Narrow-Angle Glaucoma. In this type of Glaucoma, damage can occur very quickly, and can sometimes even be painful. Narrow Angle Glaucoma occurs when the anatomy of the eye begins to block access to the drainage structure responsible for regulating eye pressure. If the drain becomes completely blocked, a condition known as angle closure, the Intraocular Pressure (IOP) can spike upward and extensive damage or even blindness can result. This change in anatomy can only be seen during an eye exam using a special lens in a technique known as Gonioscopy. This condition has a genetic predisposition, so immediate
<urn:uuid:3ff31da6-f70d-4ff5-bd61-b97611d70788>
512
23
On 15th March 1939 - six months before the war had even broken out - Prague was occupied by German troops. On 8th May 1945 - over six years later - Prague was the last major European city to be liberated by the Red Army. In the course of this long occupation, the so-called Protectorate of Bohemia and Moravia gradually came under the complete control of the Gestapo, one of the most vicious regimes in the whole of occupied Europe. The sixty-fifth anniversary of the end of the war is a time of mixed emotions in the Czech Republic. On the one hand, memories of the liberation are a reason for celebration, but the anniversary also brings back painful memories of the occupation itself: the murder of over 70,000 Czech Jews, the arbitrary executions, the destruction of the Czech villages of Lidice and Ležáky. The liberation - by the Red Army from the east and by General Patton's Third Army from the west - brought great hopes, but it was a bittersweet moment. Only three years later a further tyranny took power in Czechoslovakia, and for over forty years the country was under hard-line communist rule. In one sense the war did not really end for Czechs until November 1989, when the Iron Curtain was finally breached. How do we remember the events of the Second World War today? What does this legacy mean for today's Czech Republic? What taboos still remain? Wednesday is the 65th anniversary of the start of the Prague Uprising, when thousands of people took to the streets in an attempt to liberate the city from Nazi occupation, just days before the arrival of the Red Army. Several events have been held to mark the date, including a memorial at Czech Radio, which made a dramatic call on citizens to fight the occupiers on the morning of May 5, 1945. “Calling all Czechs! Come quickly to our aid! Calling all Czechs!” It is May 5, 1945, and with these words Prague radio appeals to Czechs to join the uprising against the German occupation. This was to be one of the last European battles of World War Two and the greatest moment in the history of Czechoslovak Radio
<urn:uuid:b6d69388-6923-4f8c-bbc2-34ea78e747d6>
512
0
On 15th March 1939 - six months before the war had even broken [...] For some time radio staff had been working secretly with the Czech underground to prepare the ground for the uprising. Their radio appeal marked the beginning of the battle. In the confusion of the following three days, with street battles going on around the city, radio was to play an important role, and the radio building also became the focus of much of the fighting. On some recordings that survive you can still clearly hear gunfire in the background. At the time the Red Army was already approaching Prague from the east, and General Patton's Third Army was in Plzeň just a few dozen kilometres to the west. Many of those fighting in the streets of Prague were untrained and had few weapons, and the scale of the German resistance, especially the SS units, took many by surprise. The radio appealed to the Americans, British and Russians for help. There are two widely held stereotypes of Czechs during the war: while some see a plucky little nation that heroically struggled to survive under the Nazi jackboot, others have argued that Czechs buckled and failed to resist the force of Hitler's Germany. But inevitably history is a great deal more complicated than the stereotypes, and in the course of today's programme, we'll be trying to unravel some of these complexities. During the German occupation of the Czech Lands more than 77,000 Czech and Moravian Jews were murdered. Today we can read the names and dates of birth of all the known victims on the walls of the Pinkas Synagogue in Prague. At least 6,000 Czech Roma were also murdered. Only a small percentage of Jewish and Romany Czechs survived the Holocaust.
<urn:uuid:b6d69388-6923-4f8c-bbc2-34ea78e747d6>
512
23
A few weeks ago, I had the delight of revisiting one of my favorite books of the Bible, The Epistle to the Hebrews, for the third time in four years. It’s caused me to reflect on fond memories of having either participated in or led an in-depth study through this wonderfully challenging book, but also to look back through my notes for gaps or areas where I hadn’t yet fully fleshed out my interpretations (see the Scriptural Index). Apparently this was the case in the last few chapters, and in the last chapter more specifically. In that chapter, which is full of practical and ethical exhortations, we have mention of the term “leader” three times, so clearly it is at the forefront of the Author’s mind. The first two uses form brackets around a particular series of exhortations, while the last use is part of the Author’s salutation. Though it has a variety of uses, including references to specific people such as David or Joseph, the word for leader here means leaders in general. The first use occurs in Hebrews 13:7, forming the opening bracket: “Remember your leaders, those who spoke to you the word of God. Consider the outcome of their way of life, and imitate their faith.” Several observations need to be made on this use of leaders. Remember your Leaders. First is the command to remember them. These leaders are identified as “those who spoke to you the word of God.” While it doesn’t clarify whether this speaking was by way of preaching, teaching, discipleship, individual exhortation, etc., nevertheless these leaders communicated the word of God to the people, and subsequently the Author has exhorted the readers to remember them. It’s quite possible that the leaders being referenced here had died and their lives are to be called to mind. Consider their Life. Second, we see the command to consider the outcome of the leaders’ way of life. As stated, it’s likely that these leaders had died; therefore, having completed the race that was set before them, their lives should now be viewed as models of faithfulness. The call then is to consider, literally to hold up and look at repeatedly, the body of their life’s work. Imitate their Faith. Finally, we have the third command to imitate the faith of these leaders. Not only
<urn:uuid:d85bf8a0-18b1-4652-ba58-2b2e5413210e>
512
0
A few weeks ago, I had the delight of revisiting one of my favorite books of the [...] were they to be remembered, specifically their teaching of God’s word and their lives to be considered as an example, but also their faith was to be emulated. To this pattern of following and emulating godly leadership in doctrine and practice, the Scriptures express the exact same sentiment elsewhere, including a prior use in Hebrews “so that you may not be sluggish, but imitators of those who through faith and patience inherit the promises.” Hebrews 6:12 Similarly we have the following passages throughout the New Testament: “14 I do not write these things to make you ashamed, but to admonish you as my beloved children. 15 For though you have countless guides in Christ, you do not have many fathers. For I became your father in Christ Jesus through the gospel. 16 I urge you, then, be imitators of me.“1 Cor. 4:14-16 “Be imitators of me, as I am of Christ.” 1 Cor. 11:1 “Brothers, join in imitating me, and keep your eyes on those who walk according to the example you have in us.” Philippians 3:17 “What you have learned and received and heard and seen in me—practice these things, and the God of peace will be with you.” Philippians 4:9 “And you became imitators of us and of the Lord, for you received the word in much affliction, with the joy of the Holy Spirit” 1 Thessalonians 1:6 “It was not because we do not have that right, but to give you in ourselves an example to imitate.” 2 Thessalonians 3:9 The pattern for follow-the-leader is a clear Scriptural principle. Never in any of these passages do we see an example of a leader “lording” over or demanding blind allegiance. Instead we see a pattern of humility in following the Lord , submitting to His word, and a call for other believers to imitate these qualities in the lives of those who lead them in the Word of God. This is the mark of a leader and the definition of discipleship. It represents
<urn:uuid:d85bf8a0-18b1-4652-ba58-2b2e5413210e>
512
23
Existence actually exists. We are aware of existence; therefore our consciousness actually occurs. Consciousness can distinguish things and actions one from another. There is identity. Identity in action is causality. These are axiomatically obvious and result in the realization that knowledge constitutes conscious apprehension, via a causal process, of the facts of existence. That knowledge is a mental causal process of apprehension of the facts of reality, reached either by perceptual observation or by a process of reason based on perceptual observation, means that understanding of what is the good is also a mental causal process. Understanding can only occur by means of reason. Reason is the faculty of individual human beings that identifies and integrates the material provided by the senses; it integrates the individual’s perceptions by means of forming abstractions or conceptions, thus raising the individual’s knowledge from the perceptual level, which she shares with animals, to the conceptual level, which she alone can reach. The method which reason employs in this process is logic—and logic is the art of non-contradictory identification. Individual human beings operate by means of reasoning. To live, individuals must employ reasoning in a rational manner to obtain that which is necessary for life. To improve their circumstances such that a greater degree of benefit is obtained, to thrive, individual human beings must manipulate their environment by means of rational action. The good then is that which is beneficial to the life of a rational individual human being; all that which destroys it is the evil. The good is neither an attribute of “things in themselves” nor of an individual human being’s emotional states, but an evaluation of the facts of reality by an individual human being’s consciousness according to a rational standard of value. (Rational, in this context, means: derived from the facts of reality and validated by a process of reason.) The good is an aspect of reality in relation to individual human beings. It must be discovered, not invented, by the individual human being. It is that which is of value to the life of individual human beings. There is only one fundamental alternative in the universe: existence or nonexistence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional; the existence of life is not: it depends on a specific course of action. Matter is indestructible; it changes its forms, but
<urn:uuid:cbdd35b8-cde0-4e2e-9da5-35e87e32c1ae>
512
0
Existence actually exists. We are aware of existence; therefore our consciousness actually occurs. Consciousness [...] it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. If an organism fails in that action, it dies; its chemical elements remain, but its life goes out of existence. It is only the concept of ‘Life’ that makes the concept of ‘Value’ possible. It is only to an individual living entity that things can be good or evil. Since the valuation that is foundational for the good only applies to individual rational human beings, then the good can only occur relative to an individual rational human being. The concept of good does not permit the separation of “value” from “purpose,” of benefit from beneficiaries, or of human action from reason. Human beings are called rational, but rationality is a matter of choice, and the alternatives human nature offers are: rational being or suicidal animal. Human beings have to be what they are by choice; they have to hold their individual lives as a value by choice; they have to learn to sustain it —by choice; they have to discover that which is beneficial to their lives and value those things by choice. To be rational, human beings must understand the requirement for and then practice their virtues. A code of values accepted by choice is a code of morality. The objective standard of value in ethics is the standard by which one judges what is good or evil. And that is the individual human’s life, or that which is required for survival and thriving as an individual human. Since reason is the individual human’s basic means of survival, that which is proper to the life of a rational being is the good; that which negates, opposes or destroys it is the evil. Since everything humans need has to be discovered by their own mind and produced by their own effort, the two essentials of the method of survival proper to a rational being are: thinking and productive work. Altruism is the doctrine that human beings have no right to exist for their own sake, that service to others is the only justification for human existence, and that self-sacrifice is the highest moral duty, virtue and value. The irreducible primary of altruism, the basic absolute, is self-sacrifice—which
<urn:uuid:cbdd35b8-cde0-4e2e-9da5-35e87e32c1ae>
512
23
Paint Your Kitchen 'Green' With These Eco-friendly Tips Sharon Palmer, R.D. Are you interested in a lifestyle that's as easy on Planet Earth as it is on your pocketbook? If the answer is yes, just take a stroll into your kitchen and look around. Everyday kitchen items, from appliances and the water faucet to food supplies and cleaning chemicals, have an impact on the environment you live in. Whenever you stock your fridge with groceries, prepare meals or wash the dishes, you're engaged in activities that eat up resources -- either directly or indirectly -- such as electricity, fossil fuels and water, and that contribute greenhouse gas emissions to the ecosystem. Here are 15 eco-conscious kitchen habits: 1. Shop for smarter appliances. If you're in the market for appliances, choose those labeled Energy Star, which are backed by a government program that helps you identify energy-saving devices. If your refrigerator, the largest consumer of energy among home appliances, is more than 15 years old, it's probably time to buy a new one. Efficiency standards make newer refrigerators up to three times more efficient than older ones. Watch out for products that claim to be energy-efficient without the proof; check out www.energystar.gov for verification. 2. Shed light on energy savings. The next time you change a light bulb, replace it with an Energy Star compact fluorescent light bulb, which uses 75 percent less energy and lasts up to 10 times longer than standard lighting (a rough savings calculation appears at the end of this article). 3. Be water-wise. Water is a precious commodity, not an endless supply to squander down the drain. If you're hand-washing dishes, fill up one side of your sink for washing and one side for rinsing, instead of letting the water run. And make sure your dishwasher is full before you run it. 4. Start composting. "Compost non-meat food scraps to create nutrient-rich garden soil," urges Hemmelgarn. Place a compost bucket by your kitchen sink to make composting easier. 5. Eat organic, seasonal and local. "Pesticides, herbicides, and chemical fertilizers require fossil fuels for production and distribution
<urn:uuid:f28cc7fd-17f2-43e0-8853-66bc3ada77fe>
512
0
Paint Your Kitchen 'Green' With These Eco-friendly Tips Sharon Palmer, [...] ," says Hemmelgarn. Buy produce from the farmers market (or grow them at home) to save resources all the way along the food chain, from manufacturing and packaging to transporting. 6. Avoid processed, overly packaged foods. Foods that are highly processed (containing refined ingredients like sugars and oils) and foods that are highly packaged (think individual serving pouches) use up more resources than simple, whole foods like carrots or apples. 7. Put fewer animal products on the menu. Beef, in particular, creates a high environmental burden because cattle naturally emit the greenhouse gas methane, and require large amounts of resources to get to market. 8. Cook wisely. "Use the smallest appliance suited for the task," recommends Hemmelgarn. Don't heat up your whole stove to toast garlic bread when a toaster oven will suffice. And try one-pot cooking techniques that use a single pot (or crock pot) to put an entire meal on the table. 9. Want not, waste not. For every ounce of food that goes into the trash, you're also throwing away water and fossil fuels that went into creating it. Conserve by purchasing only what you need and using up leftovers. 10. Delete "disposable" from your vocab. Cut back on your use of disposable paper towels, napkins, cups, flatware, wipes, and plates. Buy a set of cloth napkins and recycle your old t-shirts as cleaning rags. 11. Rely on reusable glass. Instead of falling back on petroleum-based, disposable plastic wrap, bags and containers, use reusable glass storage containers, suggests Hemmelgarn. 12. Bring your own bags. Say no to plastic or paper, and carry your own reusable shopping bags to cut down on the amount of petroleum-based plastic that ends up in the trash. 13. Give up bottled water. Since water bottles became a daily part of American life, millions of them have found their way into landfills. Fill a reusable sports bottle with tap water instead. 14. Lighten up
<urn:uuid:f28cc7fd-17f2-43e0-8853-66bc3ada77fe>
512
23
Paint Your Kitchen 'Green' With These Eco-friendly Tips Sharon Palmer, [...] your trashcan. Take a gander at what fills your trashcan (and what will end up in the landfill). Are there items that could be recycled, reused or composted? 15. Use non-toxic cleaning products. Using products such as chlorine bleach, ammonia and deodorizers improperly can contribute to poor air quality in your home. Hemmelgarn says, "Choose safer alternatives to hazardous cleaning products." Instead of products that contain harsh chemicals, search for those with ingredients like baking soda, plant oils and vinegar.
<urn:uuid:f28cc7fd-17f2-43e0-8853-66bc3ada77fe>
512
23
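To put tip 2's "75 percent less energy" claim in rough dollar terms, here is a quick, illustrative Python calculation. The bulb wattage, hours of daily use and electricity price below are assumptions chosen only for the example; the 75 percent figure is the only number taken from the article.

incandescent_w = 60            # assumed standard bulb wattage
cfl_w = incandescent_w * 0.25  # "uses 75 percent less energy" (from the article)
hours_per_day = 3              # assumed daily use
price_per_kwh = 0.12           # assumed electricity price in $/kWh

def annual_cost(watts):
    """Yearly running cost of one bulb at the assumed usage and price."""
    kwh_per_year = watts / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

saving = annual_cost(incandescent_w) - annual_cost(cfl_w)
print(f"Incandescent: ${annual_cost(incandescent_w):.2f} per year")
print(f"CFL:          ${annual_cost(cfl_w):.2f} per year")
print(f"Saving per bulb: ${saving:.2f} per year, before counting the longer bulb life")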