Columns: id (string, 11-133 chars), num_tokens (int64, 40-258k), text (string, 208-1.63M chars), source (string, 7 classes), meta (string, 14-10.3k chars)
/index.php/WASF3
63
# WASF3 This gene encodes a member of the Wiskott-Aldrich syndrome protein family. The gene product is a protein that forms a multiprotein complex that links receptor kinases and actin. Binding to actin occurs through a C-terminal verprolin homology domain in all family members. The multiprotein complex serves to transduce signals that involve changes in cell shape, motility or function.
wikidoc
null
/index.php/WBP11
279
# WBP11 Studies suggest that Wbp11 plays a role in DNA/RNA transcriptional or post-transcriptional events related to cell division. Wbp11 is found in the nucleus but not the nucleoli of cells in interphase; however, it is distributed throughout the cytoplasm in dividing cells. Immunoelectron-microscopy experiments suggest a relocation from a perinuclear to a cytoplasmic distribution coinciding with the onset of mitosis in cell division. Other studies have shown that Wbp11 is a component of the spliceosome and that Wbp11 fragments block pre-mRNA splicing catalysis. Wbp11 is a polypeptide known to interact with the WW domain of proteins such as the nuclear protein Npw38 via two proline-rich regions. It associates with Npw38 (hence the name NpwBP) in the nucleus and with poly(rG) and G-rich ssDNA. The 70 kDa protein has also been found to interact with SH3 (Src homology domain 3) domains. The C-terminal proline-rich sequence of SNP70/NpwBP/Wbp11, which binds to the WW domain of Npw38, also fits both classic type I and type II SH3-binding sequences, hence the name SNP70. Wbp11 was found to bind strongly to the tandem SH3 domains of p47phox and to the N-terminal SH3 domain of p47phox, and more weakly to the SH3 domains from c-src and p85α. Furthermore, it has been shown to interact with PP1 (protein phosphatase 1), hence the name SIPP1. It has an inhibitory effect on PP1, with its inhibitory potency increasing upon phosphorylation by protein kinase CK1. The binding of Wbp11 to PP1 involves an RVXF (Arg-Val-Xaa-Phe) motif, which functions as a PP1-binding sequence in most interactors of PP1.
wikidoc
null
/index.php/WBP4
55
# WBP4 This gene encodes WW domain-containing binding protein 4. The WW domain represents a small and compact globular structure that interacts with proline-rich ligands. This encoded protein is a general spliceosomal protein that may play a role in cross-intron bridging of U1 and U2 snRNPs in the spliceosomal complex A.
wikidoc
null
/index.php/WDR4
115
# WDR4 This gene encodes a member of the WD repeat protein family. WD repeats are minimally conserved regions of approximately 40 amino acids typically bracketed by gly-his and trp-asp (GH-WD), which may facilitate formation of heterotrimeric or multiprotein complexes. Members of this family are involved in a variety of cellular processes, including cell cycle progression, signal transduction, apoptosis, and gene regulation. This gene is excluded as a candidate for a form of nonsyndromic deafness (DFNB10), but is still a candidate for other disorders mapped to 21q22.3 as well as for the development of Down syndrome phenotypes. Two transcript variants encoding the same protein have been found for this gene.
wikidoc
null
/index.php/WFIKKN2
92
# WFIKKN2 The WFIKKN1 protein contains a WAP domain, a follistatin domain, an immunoglobulin domain, two tandem Kunitz domains, and an NTR domain. This gene encodes a WFIKKN1-related protein that has the same domain organization as the WFIKKN1 protein. The WAP-type, follistatin-type, Kunitz-type, and NTR-type protease inhibitory domains may control the action of multiple types of proteases. [provided by RefSeq, Jul 2008]
wikidoc
null
/index.php/WHO
1,967
# World Health Organization The World Health Organization (WHO) is a specialized agency of the United Nations (UN) that acts as a coordinating authority on international public health. Established on 7 April 1948, and headquartered in Geneva, Switzerland, the agency inherited the mandate and resources of its predecessor, the Health Organization, which had been an agency of the League of Nations. The WHO's constitution states that its objective "is the attainment by all peoples of the highest possible level of health." Its major task is to combat disease, especially key infectious diseases, and to promote the general health of the people of the world. The World Health Organization is one of the original agencies of the United Nations, its constitution formally coming into force on the first World Health Day (7 April 1948), when it was ratified by the 26th member state. Prior to this, its operations, as well as the remaining activities of the League of Nations Health Organization, were under the control of an Interim Commission following an International Health Conference in the summer of 1946. The transfer was authorized by a Resolution of the General Assembly. As well as coordinating international efforts to monitor outbreaks of infectious diseases, such as SARS, malaria, and AIDS, the WHO also sponsors programs to prevent and treat such diseases. The WHO supports the development and distribution of safe and effective vaccines, pharmaceutical diagnostics, and drugs. After over two decades of fighting smallpox, the WHO declared in 1980 that the disease had been eradicated - the first disease in history to be eliminated by human effort. The WHO is nearing success in developing vaccines against malaria and schistosomiasis and aims to eradicate polio within the next few years. The organization has already endorsed the world's first official HIV/AIDS Toolkit for Zimbabwe (October 3, 2006), making it an international standard. In addition to its work in eradicating disease, the WHO also carries out various health-related campaigns, for example to boost the consumption of fruits and vegetables worldwide and to discourage tobacco use. Experts met at the WHO headquarters in Geneva in February 2007 and reported that their work on pandemic influenza vaccine development had achieved encouraging progress. More than 40 clinical trials have been completed or are ongoing. Most have focused on healthy adults. Some companies, after completing safety analyses in adults, have initiated clinical trials in the elderly and in children. All vaccines so far appear to be safe and well-tolerated in all age groups tested. The WHO also conducts research on, for instance, whether the electromagnetic field surrounding cell phones has a negative influence on health. Some of this work can be controversial, as illustrated by the April 2003 joint WHO/FAO report, which recommended that sugar should form no more than 10% of a healthy diet. This report led to lobbying by the sugar industry against the recommendation, to which the WHO/FAO responded by including in the report the statement "The Consultation recognized that a population goal for free sugars of less than 10% of total energy is controversial", but also stood by its recommendation based upon its own analysis of scientific studies. In addition to the WHO's stated mission, international treaties assign the Organization a variety of responsibilities. 
For instance, the Single Convention on Narcotic Drugs and the Convention on Psychotropic Substances call on the WHO to issue binding scientific and medical assessments of psychoactive drugs and to recommend how they should be regulated. In this way, the WHO acts as a check on the national drug policymaking Commission on Narcotic Drugs. The WHO also compiles the widely followed International Classification of Diseases (ICD). The tenth revision of the ICD was released in 1992, and a searchable version is available online on the WHO website. Later revisions are indexed and available in hard-copy versions. The WHO does not permit simultaneous classification in two separate areas. WHO Member States appoint delegations to the World Health Assembly, WHO's supreme decision-making body. All UN member states are eligible for WHO membership, and, according to the WHO web site, "Other countries may be admitted as members when their application has been approved by a simple majority vote of the World Health Assembly." The WHO has 193 member states. The Republic of China (Taiwan) was one of the founding members of the WHO, but was compelled to leave in 1972 after the People's Republic of China was admitted to the UN and Taiwan left the UN. Taiwan has applied for participation in the WHO as a 'health entity' each year since 1997 but is denied each year because of pressure from China. China claims sovereignty over Taiwan, and its position is that Taiwan is represented in the WHO system by China. In practice, Taiwanese doctors and hospitals are denied access to WHO information, and Taiwanese journalists are denied accreditation for participation in WHO activities. The WHO Assembly generally meets in May each year. In addition to appointing the Director-General every five years, the Assembly considers the financial policies of the Organization and reviews and approves the proposed programme budget. The Assembly elects 34 members, technically qualified in the field of health, to the Executive Board for three-year terms. The main functions of the Board are to carry out the decisions and policies of the Assembly, to advise it and to facilitate its work in general. The WHO has 193 Member States, including all UN Member States except Liechtenstein, and two non-UN members, Niue and the Cook Islands. Territories that are not UN Member States may join as Associate Members (with full information but limited participation and voting rights) if approved by an Assembly vote: Puerto Rico and Tokelau are Associate Members. Entities may also be granted observer status: examples include the Palestine Liberation Organization and the Holy See (Vatican City). The WHO is financed by contributions from member states and from donors. In recent years, the WHO's work has involved increasing collaboration; there are currently around 80 partnerships with NGOs and the pharmaceutical industry, as well as with foundations such as the Bill and Melinda Gates Foundation and the Rockefeller Foundation. Voluntary contributions to the WHO from national and local governments, foundations and NGOs, other UN organizations, and the private sector now exceed the assessed contributions (dues) from the 193 member nations. Uncharacteristically for a UN agency, the six Regional Offices of the WHO enjoy remarkable autonomy. Each Regional Office is headed by a Regional Director (RD), who is elected by the Regional Committee for a once-renewable five-year term. 
The name of the RD-elect is transmitted to the WHO Executive Board in Geneva, which proceeds to confirm the appointment. It is rare that an elected Regional Director is not confirmed. Each Regional Committee of the WHO consists of the heads of the health departments of all the governments of the countries that constitute the Region. Aside from electing the Regional Director, the Regional Committee is also in charge of setting the guidelines for the implementation, within the region, of the health and other policies adopted by the World Health Assembly. The Regional Committee also serves as a progress review board for the actions of the WHO within the Region. The Regional Director is effectively the head of the WHO for his or her Region. The RD manages and/or supervises a staff of health and other experts at the regional headquarters and in specialised centres. The RD is also the direct supervising authority, concomitantly with the WHO Director-General, of all the heads of WHO country offices, known as WHO Representatives, within the Region. The World Health Organization operates 147 country and liaison offices in all its regions. The presence of a country office is generally motivated by a need stated by the member country. There will generally be one WHO country office in the capital, occasionally accompanied by satellite offices in the provinces or sub-regions of the country in question. The country office is headed by a WHO Representative (WR), a trained physician who is not a national of that country, who holds diplomatic rank and is due privileges and immunities similar to those of a Head of Diplomatic Mission or a diplomatic Ambassador. In most countries, the WR (like Representatives of other UN agencies) is de facto and/or de jure treated like an Ambassador - the distinction being that instead of being an Ambassador of one sovereign country to another, the WR is a senior UN civil servant who serves as the "Ambassador" of the WHO to the country to which he or she is accredited. Hence, the slightly less glamorous title of Representative, or Resident Representative. The country office consists of the WR and several health and other experts, both foreign and local, as well as the necessary support staff. The main functions of WHO country offices include being the primary adviser of that country's government in matters of health and pharmaceutical policies. International liaison offices serve largely the same purpose as country offices, but generally on a smaller scale. These are often found in countries that want WHO presence and cooperation, but do not have the major health system flaws that require the presence of a full-blown country office. Liaison offices are headed by a liaison officer, who is a national of that particular country, without diplomatic immunity. The annual World Health Report, first published in 1995, is the WHO's leading publication. Each year the report combines an expert assessment of global health, including statistics relating to all countries, with a focus on a specific subject. The World Health Report 2007 - A safer future: global public health security in the 21st century - was published on August 23, 2007. The WHO website's A guide to statistical information at WHO provides an online version of the most recent WHO health statistics. 
According to The WHO Programme on Health Statistics: "The production and dissemination of health statistics for health action at country, regional and global levels is a core WHO activity mandated to WHO by its Member States in its Constitution." WHO-produced figures carry great weight in national and international resource allocation, policy making and programming, based on the organization's reputation as "unbiased" (impartial and fair), global (not belonging to any camp), and technically competent (consulting leading research and policy institutions and individuals). There is ongoing controversy over the relationship between the WHO and the International Atomic Energy Agency. Since May 28, 1959, there has been an agreement between these organizations, confirmed by World Health Assembly resolution WHA12.40. Numerous people, including Michel Fernex (a retired medical doctor from the WHO), have criticized this agreement as preventing the WHO from properly conducting its activities relating to the health effects of ionizing radiation. Notably, it is argued that the consequences of the Chernobyl catastrophe are significantly played down by the WHO because of this agreement. The WHO has concluded that there were about 50 near-immediate deaths and that potentially 4,000 cancers will develop in the longer term, but other accounts estimate that between 50,000 and 150,000 people have already died and that several hundred thousand people are ill, handicapped, etc. Former UN Secretary-General Kofi Annan said that seven million people are affected by the catastrophe. In particular, the proceedings of the 1995 Geneva conference and the report of the Kiev 2001 conference on the effects of the Chernobyl disaster were never published, which is very unusual. Dr. Hiroshi Nakajima, former WHO Director-General, admitted in a Swiss television interview that these documents had been censored based on the agreement with the International Atomic Energy Agency. Since April 27, 2007, a permanent presence opposite the main driveway to WHO premises has been maintained in protest against the agreement between the WHO and the IAEA.
wikidoc
null
/index.php/WHO_Centre_for_Health_Development
33
# WHO Centre for Health Development The main office is in Kobe, Japan. Its role is to nurture, support and sustain excellence and innovation in public health research on health in development.
wikidoc
null
/index.php/WIN-7681
102
# WIN-7681 WIN-7681 is an analogue of pethidine in which the N-methyl group has been replaced by an allyl group. In many other opioid derivatives, placing an allyl substituent on the nitrogen instead of a methyl will reverse the normal opioid effects, producing μ-opioid antagonists which, among other things, reverse the respiratory depression caused by opioid agonists such as morphine. This is not the case with WIN-7681: while it does partially reverse the respiratory depression produced by morphine, it is an active analgesic in its own right and has no antagonistic properties when administered alone, and so is instead a partial agonist.
wikidoc
null
/index.php/WIN_55,212-2
244
# WIN 55,212-2 WIN 55,212-2 is a drug described as an aminoalkylindole derivative that produces effects similar to those of cannabinoids such as THC but has an entirely different chemical structure. WIN 55,212-2 is a potent cannabinoid receptor agonist which has been found to be a potent analgesic in a rat model of neuropathic pain. It activates p42 and p44 MAP kinase via receptor-mediated signaling. WIN 55,212-2, alongside HU-210 and JWH-133, is implicated in preventing the inflammation caused by amyloid beta proteins involved in Alzheimer's disease, in addition to preventing cognitive impairment and loss of neuronal markers. This anti-inflammatory action is induced through agonism of cannabinoid receptors, which prevents the microglial activation that elicits the inflammation. Additionally, cannabinoids completely abolish neurotoxicity related to microglial activation in rat models. WIN 55,212-2 is a weaker partial agonist than THC, but with higher affinity for the CB1 receptor. This means that the threshold dose for onset of effects is smaller than that of THC, but the maximum effects attainable are not as strong as those of THC, meaning that WIN 55,212-2 could potentially be used as a legal cannabis substitute drug, for instance as an alternative to medical marijuana. WIN 55,212-2 produces cannabis-like effects in humans within the oral dosage range of 1 to 3 milligrams; however, the effects are described as milder and shorter-lasting when compared to THC[citation needed].
wikidoc
null
/index.php/WISP1
188
# WISP1 This gene encodes a member of the WNT1 inducible signaling pathway (WISP) protein subfamily, which belongs to the connective tissue growth factor (CTGF) family. WNT1 is a member of a family of cysteine-rich, glycosylated signaling proteins that mediate diverse developmental processes. The CTGF family members are characterized by four conserved cysteine-rich domains: insulin-like growth factor-binding domain, von Willebrand factor type C module, thrombospondin domain and C-terminal cystine knot-like domain. This gene may be downstream in the WNT1 signaling pathway that is relevant to malignant transformation. It is expressed at a high level in fibroblast cells, and overexpressed in colon tumors. The encoded protein binds to decorin and biglycan, two members of a family of small leucine-rich proteoglycans present in the extracellular matrix of connective tissue, and possibly prevents the inhibitory activity of decorin and biglycan in tumor cell proliferation. It also attenuates p53-mediated apoptosis in response to DNA damage through activation of the Akt kinase. It is 83% identical to the mouse protein at the amino acid level. Alternative splicing of this gene generates 2 transcript variants.
wikidoc
null
/index.php/WISP2
160
# WISP2 This gene encodes a member of the WNT1 inducible signaling pathway (WISP) protein subfamily, which belongs to the connective tissue growth factor (CTGF) family. WNT1 is a member of a family of cysteine-rich, glycosylated signaling proteins that mediate diverse developmental processes. The CTGF family members are characterized by four conserved cysteine-rich domains: insulin-like growth factor-binding domain, von Willebrand factor type C module, thrombospondin domain and C-terminal cystine knot-like (CT) domain. The encoded protein lacks the CT domain which is implicated in dimerization and heparin binding. It is 72% identical to the mouse protein at the amino acid level. This gene may be downstream in the WNT1 signaling pathway that is relevant to malignant transformation. Its expression in colon tumors is reduced while the other two WISP members are overexpressed in colon tumors. It is expressed at high levels in bone tissue, and may play an important role in modulating bone turnover.
wikidoc
null
/index.php/WISP3
137
# WISP3 This gene encodes a member of the WNT1 inducible signaling pathway (WISP) protein subfamily, which belongs to the connective tissue growth factor (CTGF) family. WNT1 is a member of a family of cysteine-rich, glycosylated signaling proteins that mediate diverse developmental processes. The CTGF family members are characterized by four conserved cysteine-rich domains: insulin-like growth factor-binding domain, von Willebrand factor type C module, thrombospondin domain and C-terminal cystine knot-like domain. This gene is overexpressed in colon tumors. It may be downstream in the WNT1 signaling pathway that is relevant to malignant transformation. Mutations of this gene are associated with progressive pseudorheumatoid dysplasia, an autosomal recessive skeletal disorder, indicating that the gene is essential for normal postnatal skeletal growth and cartilage homeostasis. Alternative splicing generates at least three transcript variants.
wikidoc
null
/index.php/Waardenburg_syndrome
500
# Waardenburg syndrome Waardenburg syndrome is a rare genetic disorder most often characterized by varying degrees of deafness, minor defects in structures arising from the neural crest, and pigmentation anomalies. It is named after Dutch ophthalmologist Petrus Johannes Waardenburg (1886-1979), who first defined it in 1951. The condition he described is now categorized as WS1. WS2 was identified in 1971 to describe cases where "dystopia canthorum" did not present. WS2 is now split into subtypes, based upon the gene responsible. There are several other names used. These include Klein-Waardenburg syndrome, Mende's syndrome II, Van der Hoeve-Halbertsma-Waardenburg syndrome, Ptosis-Epicanthus syndrome, Van der Hoeve-Halbertsma-Gualdi syndrome, Waardenburg type Pierpont, Van der Hoeve-Waardenburg-Klein syndrome, Waardenburg's syndrome II, and Vogt's syndrome. Types I and II are the most common types of the syndrome, whereas types III and IV are rare. Overall, the syndrome affects perhaps 1 in 15,000 people. About 1 in 30 students in schools for the deaf have Waardenburg syndrome. All races and both sexes are affected equally. The highly variable presentation of the syndrome makes it difficult to arrive at precise figures for its prevalence. Waardenburg syndrome has also been associated with a variety of other congenital disorders, such as intestinal and spinal defects, elevation of the scapula, and cleft lip and palate. This condition is usually inherited in an autosomal dominant pattern, which means one copy of the altered gene is sufficient to cause the disorder. In most cases, an affected person has one parent with the condition. A small percentage of cases result from new mutations in the gene; these cases occur in people with no history of the disorder in their family. Some cases of type II and type IV Waardenburg syndrome appear to have an autosomal recessive pattern of inheritance, which means two copies of the gene must be altered for a person to be affected by the disorder. Most often, the parents of a child with an autosomal recessive disorder are not affected but are carriers of one copy of the altered gene. There is currently no treatment or cure for Waardenburg syndrome. The symptom most likely to be of practical importance is deafness, and this is treated as any other irreversible deafness would be. In marked cases there may be cosmetic issues. Other abnormalities (neurological, structural) associated with the syndrome are treated symptomatically. Waardenburg syndrome is known to occur in ferrets. The affected animal will usually have a small white stripe along the top of its head and a somewhat, although barely noticeably, flatter skull than normal ferrets. As a ferret's sense of hearing is poor to begin with, it is not easily noticeable except when the affected animal does not react to loud noises that non-affected ones will respond to. As the disorder is easily spread to offspring, the affected animal will not be used for breeding, although it may still be neutered and sold as a pet.
wikidoc
null
/index.php/Wada_test
490
# Wada test The test is conducted with the patient awake. Essentially, a barbiturate (usually sodium amobarbital) is introduced into one of the internal carotid arteries via a cannula or intra-arterial catheter from the femoral artery. The drug is injected into one hemisphere at a time. The effect is to shut down any language and/or memory function in that hemisphere in order to evaluate the other hemisphere ("half of the brain"). The patient is then engaged in a series of language and memory related tests. Memory is evaluated by showing a series of items or pictures to the patient so that, within a few minutes, as soon as the effect of the medication has dissipated, the ability to recall can be tested. There is currently great variability in the processes used to administer the test, and so it is difficult to compare results from one patient to another. The test is usually performed prior to ablative surgery for epilepsy and sometimes prior to tumor resection. The aim is to determine which side of the brain is responsible for certain vital cognitive functions, namely speech and memory. The risk of damaging such structures during surgery can then be assessed, and the need for awake craniotomies can be determined as well. The Wada test has several interesting side-effects. Drastic personality changes are rarely noted, but disinhibition is common. Also, contralateral hemiplegia, hemineglect and shivering are often seen. During one injection, typically of the left hemisphere, the patient will have impaired speech or be completely unable to express or understand language. Although the patient may not be able to talk, sometimes their ability to sing is preserved. This is because music and singing utilize different parts of the brain than speech and language. Most people with aphasia are able to sing, and even learn new songs (as in the case of Cesero Rota; Klawans, 2002). Recovery from the anesthesia is rapid, and EEG recordings and distal grip strength are used to determine when the medication has worn off. Generally, recovery of speech is dysphasic (contains errors in speech or comprehension) after a dominant-hemisphere injection. Although it is generally considered a safe procedure, there are at least minimal risks associated with the angiography procedure used to guide the catheter to the internal carotid artery. As such, efforts to utilize non-invasive means of determining language and memory laterality (e.g. fMRI) are being researched. The Wada test is named after Canadian neurologist Juhn A. Wada, of the University of British Columbia. He developed the test while a medical resident in Japan just after World War II, when he was receiving training in neurosurgery. Recognizing that there was no available test for cerebral dominance for speech, Wada developed the carotid amytal test. He published the initial description in 1949, in Japanese. During later training at the Montreal Neurological Institute, he introduced the test to the English-speaking world.
wikidoc
null
/index.php/Wagner%27s_disease
366
# Wagner's disease Wagner's Disease is a familial eye disease of the connective tissue in the eye that causes blindness. Wagner's disease was originally described in 1938. This disorder is frequently confused with Stickler's syndrome, but lacks the systemic features and high incidence of retinal detachments. Inheritance is autosomal dominant. In 1938 Hans Wagner described 13 members of a Canton of Zurich family with a peculiar lesion of the vitreous and retina. Ten additional affected members were observed by Boehringer et al. in 1960 and 5 more by Ricci in 1961. In Holland Jansen in 1962 described 2 families with a total of 39 affected persons. Alexander and Shea in 1965 reported a family. In the last report, characteristic facies (epicanthus, broad sunken nasal bridge, receding chin) was noted. Genu valgum was present in all. In addition to typical changes in the vitreous, retinal detachment occurs in some and cataract is another complication. Wagner's syndrome has been used as a synonym for Stickler's syndrome. Since there may be more than one type of Wagner syndrome, differentiation from Stickler's syndrome is difficult, and doctors disagree as to whether these are the same entity. It may be that Wagner has skeletal effects, but not the joint and hearing problems of Stickler's syndrome. Blair et al. in 1979 concluded that the Stickler and Wagner syndromes are the same disorder. However, retinal detachment, which is a feature of Stickler' syndrome, was not noted in any of the 28 members of the original Swiss family studied by Wagner in 1938 and later by Boehringer in 1960 and Ricci in 1961. An exhaustive genetics study of blood from 54 patients found everyone with Wagner's disease has the same eight "markers," a genetic fingerprint that sets them apart from those with healthy eyes. The gene involved helps regulate how the body makes collagen, a sort of chemical glue that holds tissues together in many parts of the body. This particular collagen gene only becomes active in the jelly-like material that fills the eyeball; in Wagner's disease this "vitreous" jelly grabs too tightly to the already weak retina and pulls it away.
wikidoc
null
/index.php/Waist-hip_ratio
631
# Waist-hip ratio Waist-hip ratio or waist-to-hip ratio (WHR) is the ratio of the circumference of the waist to that of the hips. It is calculated by measuring the waist circumference (located just above the upper hip bone) and dividing by the hip circumference at its widest part (waist/hip). The concept and significance of WHR was first theorized by evolutionary psychologist Dr. Devendra Singh at the University of Texas at Austin in 1993. WHRs of 0.7 for women and 0.9 for men have been shown to correlate strongly with general health and fertility. Women within the 0.7 range have optimal levels of estrogen and are less susceptible to major diseases such as diabetes, cardiovascular disorders and ovarian cancers. Men with WHRs around 0.9, similarly, have been shown to be healthier and more fertile, with less prostate cancer and testicular cancer. WHR has been found to be a more efficient predictor of mortality in older people than waist circumference or body mass index (BMI). If obesity is redefined using WHR instead of BMI, the proportion of people categorized as at risk of heart attack worldwide increases threefold. Other studies have found waist circumference, not WHR, to be a good indicator of cardiovascular risk factors, body fat distribution, and hypertension in type 2 diabetes. Scientists have discovered that the waist-hip ratio (WHR) is a significant factor in judging female attractiveness. Women with a 0.7 WHR (waist circumference that is 70% of the hip circumference) are usually rated as more attractive by men from European cultures. Such diverse beauty icons as Marilyn Monroe, Sophia Loren, Gong Li, and even the Venus de Milo all have ratios around 0.7, even though they have different weights. In other cultures, preferences appear to vary according to some studies, ranging from 0.6 in China to 0.8 or 0.9 in parts of South America and Africa, and divergent preferences based on ethnicity, rather than nationality, have also been noted. Note: In the studies referenced above, only frontal WHR preferences differed significantly among racial and cultural groups. When actual (circumferential) measurements were made, the preferred WHR tended toward the expected value of 0.7 universally. The apparent differences are most likely due to the different body fat storage patterns in different population groups. For example, women of African descent tend to store their fat in their buttocks more than women of other groups. Therefore, their WHR as viewed from the front may appear to be much greater than when viewed from the side. The inverse may be true of women of East Asian ancestry. Therefore, African men appear to value a woman's small WHR in profile, and Asian men may place more value on an exaggerated frontal WHR, compared to European men. Women with a low waist-hip ratio have been shown in studies to be smarter and to have smarter offspring. Using data from the U.S. National Center for Health Statistics, William Lassek at the University of Pittsburgh in Pennsylvania and Steven Gaulin of the University of California, Santa Barbara, found a child's performance in cognition tests was linked to their mother's waist-hip ratio, a proxy for how much fat she stores on her hips. Children whose mothers had wide hips and a low waist-hip ratio scored highest, leading Lassek and Gaulin to suggest that fetuses benefit from hip fat that contains polyunsaturated fatty acids critical for the development of the fetus's brain. Many methods have been used to artificially alter a person's apparent WHR. 
These include corsets used to reduce the waist size and hip and buttock padding used by some transgendered people to increase the apparent size of the hips and buttocks.
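The ratio described at the start of this article is a simple division, so it can be expressed directly in code. The sketch below is illustrative only (the function name is hypothetical, and the 70 cm/100 cm example figures are chosen to reproduce the 0.7 value quoted above):

```python
def waist_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist-to-hip ratio: waist circumference divided by hip circumference.

    Both measurements must be in the same unit (centimetres here), so the
    result is a dimensionless ratio.
    """
    if waist_cm <= 0 or hip_cm <= 0:
        raise ValueError("circumferences must be positive")
    return waist_cm / hip_cm


# Example: a 70 cm waist with 100 cm hips gives the 0.7 ratio discussed above.
print(round(waist_hip_ratio(70, 100), 2))  # 0.7
```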
wikidoc
null
/index.php/Wakame
373
# Wakame New studies conducted at Hokkaido University have found that a compound in wakame known as fucoxanthin can help burn fatty tissue. Studies in mice have shown that fucoxanthin induces expression of the fat-burning protein UCP1, which accumulates in fat tissue around the internal organs. Expression of UCP1 protein was significantly increased in mice fed fucoxanthin. Wakame is also used in topical beauty treatments (see also fucoidan). In New Zealand, wakame is a very serious weed, and was nominated as one of the 100 worst invasive species in the world. It was first discovered in Wellington Harbour in 1987. It probably arrived accidentally in the late 1980s, via shipping from Asia, in ballast water. Native to cold temperate coastal areas of Japan, Korea and China, in recent decades it has also established in France, Great Britain, Spain, Italy, Argentina and Australia. Wakame is now found around much of south-eastern New Zealand, and as far north as Auckland. It spreads in two ways: naturally, through the millions of microscopic spores released by each fertile organism, and through attachment to vessel hulls and marine farming equipment. It is a highly successful and fertile species, which makes it a serious invader. However, its impacts are not well understood and are likely to vary, depending on the location. Wakame fronds are green and have a subtly sweet flavour and slippery texture. The leaves should be cut into small pieces as they will expand during cooking. In Japan, wakame is distributed either dried or salted, and used in soups (particularly miso soup) and salads (such as tofu salad), or often simply as a side dish to tofu and a salad vegetable like cucumber. These dishes are typically dressed with Japanese ingredients including soya sauce and vinegar/rice vinegar. Wakame is a rich source of EPA, an ω-3 essential fatty acid. At over 400 mg/100 kcal, or almost 1 mg/kJ (400 mg per 100 kcal is 4 mg/kcal, and at about 4.2 kJ per kcal that works out to roughly 0.96 mg/kJ), it has one of the higher nutrient:calorie ratios, and among the very highest for a vegetarian source. However, 100 grams of wakame is more than 44 tablespoons of dried wakame; the usual consumption of wakame is closer to 1 or 2 tablespoons. Wakame also has high levels of calcium, thiamine, niacin, and vitamin B12.
wikidoc
null
/index.php/Wal-Mart_camel
102
# Wal-Mart camel The Wal-Mart camel is the bone fossil of a prehistoric camel found at a future Wal-Mart store in Mesa, Arizona. Workers digging a hole for an ornamental citrus tree found the bones of a camel that lived 10,000 years ago. Arizona State University geology museum curator Brad Archer calls it an important find and extremely rare. Wal-Mart officials and Greenfield Citrus Nursery owner John Babiarz agreed that the bones will go directly on display in a museum at Arizona State University. Camels lived in what's now Arizona until 8,000 years ago.
wikidoc
null
/index.php/Waldenstr%C3%B6m%27s_macroglobulinemia_CT
46
# Waldenström's macroglobulinemia CT ## CT scan In Waldenstrom's macroglobulinemia, CT imaging of the chest, abdomen, and pelvis may show evidence of lymphadenopathy and hepatomegaly. CT of the lungs or abdomen can also be diagnostic for infection, which is particularly relevant to immunocompromised patients.
wikidoc
null
/index.php/Waldenstr%C3%B6m%27s_macroglobulinemia_bone_marrow_aspiration_and_biopsy
52
# Waldenström's macroglobulinemia bone marrow aspiration and biopsy A bone marrow aspiration and biopsy is essential in the diagnosis of Waldenström macroglobulinemia and shows hypercellular bone marrow, Dutcher bodies, and three patterns of bone marrow infiltration including lymphoplasmacytoid cells, lymphoplasmacytic cells in an interstitial/nodular pattern, and a polymorphous infiltrate.
wikidoc
null
/index.php/Waldenstr%C3%B6m%27s_macroglobulinemia_causes
43
# Waldenström's macroglobulinemia causes The exact cause of Waldenstrom's macroglobulinemia has not been identified; however, the disease has been highly associated with somatic mutations in the MYD88 and CXCR4 genes. In addition, less common causes of the disease include chromosomal abnormalities.
wikidoc
null
/index.php/Waldenstr%C3%B6m%27s_macroglobulinemia_classification
73
# Waldenström's macroglobulinemia classification There is no established system for the classification of Waldenstrom's macroglobulinemia. However, according to criteria devised on the basis of the patient's symptoms, Waldenström's macroglobulinemia can be further classified into smoldering/asymptomatic and symptomatic WM.
wikidoc
null
/index.php/Waldenstr%C3%B6m%27s_macroglobulinemia_diagnostic_study_of_choice
156
# Waldenström's macroglobulinemia diagnostic study of choice Editor-In-Chief: C. Michael Gibson, M.S., M.D.; Associate Editor(s)-in-Chief: Sara Mohsin, M.D., Shyam Patel, Roukoz A. Karam, M.D.; Grammar Reviewer: Natalie Harpenau, B.S. The diagnosis of Waldenstrom's macroglobulinemia is based on bone marrow aspiration and biopsy and serum protein analysis studies such as immunohistochemistry, flow cytometry, and cytogenetics to distinguish WM from other types of B-cell lymphomas. CSF flow cytometry, protein electrophoresis, and immunofixation are done for the diagnosis of Bing-Neel syndrome (a late but severe, rare complication). At the Second International Workshop, held September 26-30, 2002, in Athens, Greece, diagnostic criteria for Waldenstrom's macroglobulinemia were proposed. According to these criteria, the following findings on bone marrow biopsy and serum protein analysis are confirmatory of Waldenström macroglobulinemia and exclude other small B-cell lymphoid neoplasms with plasmacytic differentiation:
wikidoc
null
/index.php/Waldenstr%C3%B6m%27s_macroglobulinemia_differential_diagnosis
35
# Waldenström's macroglobulinemia differential diagnosis Waldenstrom's macroglobulinemia must be differentiated from multiple myeloma, chronic lymphocytic leukemia/small lymphocytic lymphoma, B-cell prolymphocytic leukemia, follicular lymphoma, mantle cell lymphoma, and marginal zone lymphoma.
wikidoc
null
/index.php/Waldenstr%C3%B6m%27s_macroglobulinemia_echocardiography_or_ultrasound
50
# Waldenström's macroglobulinemia echocardiography or ultrasound ## Echocardiography or Ultrasound There are no specific echocardiography or ultrasound findings associated with Waldenstrom's macroglobulinemia. However, ultrasound can be used to look for an enlarged spleen, liver, kidneys, or lymph nodes and to help guide a biopsy needle into an enlarged lymph node.
wikidoc
null
/index.php/Waldenstr%C3%B6m%27s_macroglobulinemia_epidemiology_and_demographics
137
# Waldenström's macroglobulinemia epidemiology and demographics Editor-In-Chief: C. Michael Gibson, M.S., M.D.; Associate Editor(s)-in-Chief: Sara Mohsin, M.D., Mirdula Sharma, MBBS, Roukoz A. Karam, M.D.; Grammar Reviewer: Natalie Harpenau, B.S. An estimated 1,000-1,500 new cases of Waldenstrom's macroglobulinemia are diagnosed in the United States annually. Waldenstrom's macroglobulinemia represents 1-2% of all hematological cancers. The overall age-adjusted incidence of Waldenstrom's macroglobulinemia is 0.38 cases per 100,000 persons annually, increasing with age to 2.85 in patients above 80 years. The incidence of Waldenstrom's macroglobulinemia increases after 50 years of age, with a median age at diagnosis of 65 years. Men are twice as likely as women to develop WM, and the incidence of WM is higher in whites than in blacks.
wikidoc
null
/index.php/Walery_Jaworski
69
# Walery Jaworski In 1899 he described bacteria living in the human stomach that he named "Vibrio rugula". He speculated that they were responsible for stomach ulcers, gastric cancer and achylia. It was one of the first observations of Helicobacter pylori. He published those findings in 1899 in a book titled "Podręcznik chorób żołądka" ("Handbook of Gastric Diseases") but it was available only in Polish and went unnoticed.
wikidoc
null
/index.php/WalkAmerica
118
# WalkAmerica The March of Dimes WalkAmerica began in 1970 as the first charitable walking event in the United States. WalkAmerica is held in 1,100 communities across the nation. Every year, 7 million compassionate people, including 20,000 company and family teams as well as national sponsors, participate. The event has raised more than $1.7 billion since 1970 to bring the March of Dimes closer to the day when all babies are born healthy and full term. Proceeds help fund research to prevent premature births, birth defects and infant mortality. Every year, more than half a million babies are born prematurely and more than 120,000 are born with serious birth defects in the United States.
wikidoc
null
/index.php/Walker-Warburg_syndrome
162
# Walker-Warburg syndrome Synonyms and keywords: Hydrocephalus, agyria and retinal dysplasia, Hard syndrome, Hard +/- E syndrome, Warburg syndrome, Chemke syndrome, Pagon syndrome, Cerebroocular dysgenesis, Cerebroocular dysplasia muscular dystrophy syndrome, COD-MD syndrome Walker-Warburg syndrome is a rare form of autosomal recessive congenital muscular dystrophy associated with brain (lissencephaly, hydrocephalus, cerebellar malformations) and eye abnormalities. It is the most severe form of congenital muscular dystrophy with most children dying before the age of three years. This condition has a worldwide distribution. The overall incidence is unknown but a survey in North-eastern Italy has reported an incidence rate of 1.2 per 100,000 live births. Several genes have been implicated in the etiology of Walker-Warburg syndrome, and others are as yet unknown. Several mutations were found in the protein O-Mannosyltransferase 1 and 2 genes, and one mutation was found in each of the fukutin and fukutin-related protein genes. It is inherited in an autosomal recessive manner.
wikidoc
null
/index.php/Walking
959
# Walking Walking is the main form of animal locomotion on land, distinguished from running and crawling. When carried out in shallow water, it is usually described as wading, and when performed over a steeply rising object or an obstacle it becomes scrambling or climbing. The word walking is derived from the Old English wealcan (to roll). Walking is generally distinguished from running in that only one foot at a time leaves contact with the ground: for humans and other bipeds, running begins when both feet are off the ground with each step. (This distinction has the status of a formal requirement in competitive walking events, often resulting in disqualification even at the Olympic level.) For horses and other quadrupedal species, the running gaits may be numerous, and walking keeps three feet at a time on the ground. While not strictly bipedal, several primarily bipedal human gaits (where the long bones of the arms support at most a small fraction of the body's weight) are generally regarded as variants of walking. These include: For humans, walking is the main form of transportation without a vehicle or riding animal. An average walking speed is about 5 km/h (3 mph), although this depends heavily on factors such as height, weight, age and terrain. A pedestrian is a walking person, in particular one on a road (on the sidewalk, path or pavement if available). Human walking is accomplished with a strategy called the double pendulum. During forward motion, the leg that leaves the ground swings forward from the hip. This sweep is the first pendulum. Then the leg strikes the ground with the heel and rolls through to the toe in a motion described as an inverted pendulum. The motion of the two legs is coordinated so that one foot or the other is always in contact with the ground. The process of walking recovers approximately sixty per cent of the energy used, due to pendulum dynamics and ground reaction force. The biomechanist Gracovetsky argues that the spine is the major agent in human locomotion. He bases his conclusions on the case of a man born without legs. The man was able to walk, albeit slowly, on his pelvis. Gracovetsky claims that, however important to wellbeing, the function of legs is secondary in a strictly mechanical sense. Legs enable the spine to harvest the energy of gravity in an efficient manner. The legs act as long levers that transfer ground reaction force to the spine. Lumbar motion during walking consists mostly of sideways rotation. Gracovetsky observes that fish use the same lateral motion to swim. He believes the mechanism first evolved in fish and was later adapted by amphibians, reptiles, mammals and humans to their respective modes of locomotion. Many people walk as a hobby, and in our post-industrial age it is often enjoyed as a form of exercise. Fitness walkers and others may use a pedometer to count their steps. The types of walking include bushwalking, racewalking, weight-walking, hillwalking, volksmarching, Nordic walking and hiking on long-distance paths. Sometimes people prefer to walk indoors using a treadmill. In some countries walking as a hobby is known as hiking (the typical North American term), rambling (a somewhat dated British expression, but remaining in use because it is enshrined in the title of the important Ramblers' Association), or tramping (the invariable term in New Zealand). 
Hiking is a subtype of walking, generally used to mean walking in nature areas on specially designated routes or trails, as opposed to in urban environments; however, hiking can also refer to any long-distance walk. More obscure terms for walking include "to go by Marrow-bone stage", "to take one's daily constitutional", "to ride Shank's pony" or "to go by Walker's bus." The world's largest registration walking event is the International Four Days Marches Nijmegen. The annual Labor Day walk on the Mackinac Bridge draws over sixty thousand participants. The Chesapeake Bay Bridge walk annually draws over fifty thousand participants. Walks are often organized as charity events, with walkers seeking sponsors to raise money for a specific cause. Charity walks range in length from two-mile or five-km walks to as far as fifty miles (eighty km). The MS Challenge Walk is an example of a fifty-mile walk which raises money to fight multiple sclerosis. The Oxfam Trailwalker is a one hundred km event. Walking is also the most basic and common mode of transportation. People around the world use it to get to work or school, to do their shopping, and to go wherever it is most convenient. There has been a recent focus among urban planners in some communities on creating pedestrian-friendly areas and roads, allowing commuting, shopping and recreation to be done on foot. Some communities are at least partially car-free, making them particularly supportive of walking and other modes of transportation. In the United States, the Active Living network is an example of a concerted effort to develop communities more friendly to walking and other physical activities. When distances are too great to be convenient, walking can be combined with other modes of transportation, such as cycling, public transport, car sharing, carpooling, hitchhiking, ride sharing, car rentals and taxis. These methods may be more efficient or desirable than private car ownership. The first successful attempts at walking robots tended to have six legs. The number of legs was reduced as microprocessor technology advanced, and there are now a number of robots that can walk on two legs, albeit not nearly as well as a human being. Increasing walking by 2000 steps per day is associated with reduced mortality. In older women, walking at least 4000 total steps per day is associated with lower mortality.
wikidoc
null
/index.php/Walking_ghost_phase
79
# Walking ghost phase The walking ghost phase of radiation poisoning is a period of apparent health, lasting for hours or days, following a dose of 10-50 sieverts of radiation. As its name would suggest, the walking ghost phase is followed by certain death. A painful death, marked by delirium and coma, inevitably awaits any recipient of such a dose of radiation, between 2-10 days after the completion of the walking ghost phase of radiation poisoning.
wikidoc
null
/index.php/Walking_wounded
76
# Walking wounded Walking wounded is a term used in first aid and triage to indicate injured persons who are of relatively low priority. These patients are conscious and breathing and usually have only (relatively) minor injuries; thus they are capable of walking. Depending on the resources available and the abilities of the injured persons, these people may sometimes be used to assist in the treatment of more seriously injured patients, or to assist with other tasks.
wikidoc
null
/index.php/Wallis_Zieff_Goldblatt_syndrome
143
# Wallis Zieff Goldblatt syndrome Wallis Zieff Goldblatt syndrome is a rare condition characterized by inherited skeletal disorders manifested mainly as rhizomelic short stature and lateral clavicular defects. It is also known as cleidorhizomelic syndrome. An initial clinical report of this syndrome describes a 6-month-old boy with rhizomelic shortening, particularly in the arms, and protuberances over the lateral aspects of the clavicles. On radiographs the lateral third of the clavicles had a bifid appearance resulting from an abnormal process or protuberance arising from the fusion center. His 22-year-old mother also had a height of 142 cm with an arm span of 136 cm and rhizomelic shortness of the limbs, maximal in the arms, and abnormalities of the acromioclavicular joints. Both the mother and the son had marked bilateral clinodactyly of the fifth fingers associated with hypoplastic middle phalanx.
wikidoc
null
/index.php/Walter_%22Walt%22_Dawson
332
# Walter "Walt" Dawson ## Contents Walter "Walt" Dawson (born April 26 1982 in Portland, Oregon) is an Alzheimer's disease activist. He is the son of British immigrant Cecil Dawson and Oregon native Clara Dawson. As a young boy, Dawson captured the attention of America's leaders and national media by undertaking a letter writing campaign on behalf of his father and other sufferers of Alzheimer's. In 1992, Dawson became a national spokesperson for the Alzheimer's Association. In this role, Dawson traveled to Washington, D.C. several times to testify before the United States Senate and House of Representative committees about his family's experiences. While in Washington, Dawson was granted access to several senior legislators and public officials including President Bill Clinton and Vice-President Al Gore. ## Overview Dawson began his letter-writing campaign after the cost of his father's long care placed the family in serious financial peril. National Public Radio became the first nationwide media outlet to support his original letter-writing campaign, after Dawson (aged 9) read one of his letters on the air. Soon afterwards, NBC, CBS and Nickelodeon picked up the story of Dawson's campaign on behalf of his father and other sufferers of Alzheimer's disease. They covered the Dawson family and their struggle for health care reform over numerous programs, gaining national exposure for the issues that mattered to sufferers of the disease and their families. ## Early adulthood Dawson continues to pursue a life of advocacy, dedicated to fighting the issues surrounding Alzheimer's disease and driving further health care reform.[citation needed] While an undergraduate at the University of Portland, Dawson was elected Student Body Vice-President and served as President of the Student Senate. Dawson then spent a year in AmeriCorps[citation needed] before heading to England to study at the London School of Economics.[citation needed] He begins a post-graduate degree in Comparative Health Policy at Oxford University in the fall of 2007.[citation needed]
wikidoc
null
/index.php/Walter_Noddack
256
# Walter Noddack Walter Noddack (17 August 1893, Berlin – 7 December 1960, Berlin) was a German chemist. He, Ida Tacke (who later married Noddack), and Otto Berg reported the discovery of element 43 in 1925 and named it masurium (after Masuria in Eastern Prussia). The group bombarded columbite with a beam of electrons and deduced that element 43 was present by examining X-ray diffraction spectrograms. The wavelength of the X-rays produced is related to the atomic number by a formula derived by Henry Moseley. The team claimed to detect a faint X-ray signal at a wavelength produced by element 43. Contemporary experimenters could not replicate the discovery, and in fact it was dismissed as an error for many years. It was not until 1998 that this dismissal began to be questioned. John T. Armstrong of the National Institute of Standards and Technology ran computer simulations of the experiments and obtained results very close to those reported by the 1925 team; the claim was further supported by work published by David Curtis of the Los Alamos National Laboratory measuring the (tiny) natural occurrence of technetium. Debate still exists as to whether the 1925 team actually did discover element 43. Noddack became professor for physical chemistry in 1935 at the University of Freiburg and in 1941 at the University of Straßburg. After World War II he moved to the University of Bamberg, and in 1956 he became director of the newly founded Research Institute for Geochemistry there.
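Moseley's relation mentioned above can be made concrete with a rough back-of-the-envelope calculation. The sketch below is purely illustrative (it uses the standard empirical K-alpha form of Moseley's law with an approximate constant; none of the numbers come from the Noddack team's papers):

```python
# Moseley's law for K-alpha emission lines: sqrt(frequency) is proportional to (Z - 1).
# A commonly quoted empirical form is  nu = 2.47e15 Hz * (Z - 1)**2.

C = 2.998e8          # speed of light, m/s
MOSELEY_K = 2.47e15  # Hz; empirical constant for the K-alpha series

def k_alpha_wavelength(z: int) -> float:
    """Approximate K-alpha emission wavelength, in metres, for atomic number z."""
    frequency = MOSELEY_K * (z - 1) ** 2
    return C / frequency

# Element 43 ("masurium", now technetium): roughly 0.69 angstroms,
# the region of the spectrogram where a signal would be expected.
print(f"{k_alpha_wavelength(43) * 1e10:.2f} angstroms")
```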
wikidoc
null
/index.php/Walter_Reed_Army_Medical_Center
1,258
# Walter Reed Army Medical Center The Walter Reed Army Medical Center (WRAMC) is the United States Army's medical center on the east coast of the United States. Located on 113 acres (457,000 m²) in Washington, D.C., it serves more than 150,000 active and retired personnel from all branches of the military. The center is named after Major Walter Reed, an army surgeon who led the team which confirmed that yellow fever is transmitted by mosquitoes rather than direct contact. Since its origins, what is now the WRAMC medical care facility has grown from a bed capacity of 80 patients to approximately 5,500 rooms covering more than 28 acres (113,000 m²) of floor space. Fort Lesley J. McNair, located in southwest Washington, D.C. on land set aside by George Washington as a military reservation, is the third oldest U.S. Army installation in continuous use in the United States, after West Point and Carlisle Barracks. Its position at the confluence of the Anacostia River and the Potomac River made it an excellent site for the defense of the nation's capital. Dating back to 1791, the post served as an arsenal, played an important role in the nation's defense, and housed the first U.S. Federal Penitentiary from 1839 to 1862. Today, Fort McNair enjoys a strong tradition as the intellectual headquarters for defense. Furthermore, with unparalleled vistas of the picturesque waterfront and the opposing Virginia shoreline, the historic health clinic at Fort McNair, the precursor of today's Walter Reed Army Medical Center (WRAMC), overlooks the residences of top officials who choose the famed facility for the delivery of their health care needs. "Walter Reed's Clinic," the location of the present-day health clinic at Ft. McNair, occupies what was from 1898 until 1909 the General Hospital at what was then Washington Barracks, long before the post was renamed in honor of Lt. Gen. McNair, who was killed in Normandy in 1944 by friendly fire. The hospital served as the forerunner of Walter Reed General Hospital; however, the Victorian-era waterfront dispensary remains and is perhaps one of America's most historically significant military medical treatment facilities. It is reported that Walter Reed lived and worked in the facility when he was assigned as Camp Surgeon from 1881 to 1882. After having served on other assignments, he returned as Professor of Medicine and Curator of the Army Medical Museum. Some of his epidemiological work included studies at Washington Barracks, and he is best known for discovering the transmission of yellow fever. In 1902, Major Reed underwent emergency surgery here for appendicitis and died of complications in this U.S. Army Medical Treatment Facility (MTF), within the very walls of what became his final military duty assignment. Regarding the structure itself, since the 1890s the health clinic was used as an Army General Hospital where physicians, corpsmen and nurses were trained in military health care. In 1899, the morgue was constructed, which now houses the Dental Clinic, and in 1901 the hospital became an entirely separate command (military formation). This new organizational command relocated eight years later, in 1909, with the aid of horse-drawn wagons and an experimental steam-driven ambulance. Departing from the 50-bed hospital, as documented in The Army Nursing Newsletter, Volume 99, Issue 2, February 2000, they set out due north, initially transporting 11 patients to the new 65-bed facility in the northern part of the nation's capital. 
Having departed Ft. McNair, the organization has since developed into the Walter Reed Army Medical Center that we know today. As for the facility they left behind at Fort McNair, it functioned in a smaller role as a post hospital until 1911 when the west wing was converted into a clinic. Today, this renovated medical treatment facility at Fort McNair continues its rich, uninterrupted heritage in providing a wide variety of state of the art health care to the capital region military community as an extension of WRAMC. Congressional legislation authorized construction of Walter Reed General Hospital (WRGH, now known as "Building 1") and the first ten patients were admitted on May 1, 1909. According to the WRAMC website, Lt. Col. William Cline Borden "was the initiator, planner and effective mover for the creation, location, and first Congressional support of the Medical Center" Because of his efforts, the facility was nicknamed "Borden's Dream." In 1923, General John J. Pershing signed the War Department order creating the "Army Medical Center" (AMC) within the same campus as the WRGH. (At this time, the Army Medical School was relocated from 604 Louisiana Avenue and became the "Medical Department Professional Service School" (MDPSS) in the new Building 40.) In September 1951, "General Order Number 8" combined the WRGH with the AMC; the entire complex of 100 rose-brick Georgian buildings was at that time renamed the "Walter Reed Army Medical Center" (WRAMC). In June 1955, the Armed Forces Institute of Pathology (AFIP) occupied the new Building 54 and, in November, what had been MDPSS was renamed the Walter Reed Army Institute of Research (WRAIR). 1964 saw the birth of the Walter Reed Army Institute of Nursing (WRAIN). Former President Dwight D. Eisenhower died at WRAMC on March 28, 1969. Starting in 1972, a huge new WRAMC building (Building 2) was constructed and made ready for occupation by 1977. WRAIR moved from Building 40 to a large new facility on the WRAMC Forest Glen Annex in Maryland in 1999. Subsequently, Building 40 was slated for renovation under an enhanced use lease by a private developer. Today, the U.S. President, Vice President, Senators and Representatives may all receive care at this medical center. WRAMC is considered a tertiary care center and houses numerous medical and surgical specialties. It is part of the larger Walter Reed Health Care System, which includes some ten other hospitals. As part of a Base Realignment and Closure announcement on May 13, 2005, the Department of Defense proposed replacing Walter Reed Army Medical Center with a new Walter Reed National Military Medical Center (WRNMMC); the new center would be on the grounds of the National Naval Medical Center in Bethesda, Maryland, seven miles (11 km) from WRAMC's current location in Washington, D.C. The proposal is part of a program to transform medical facilities into joint facilities, with staff including Army, Navy, and Air Force medical personnel. The transfer of services from the existing to the new facilities will be gradual to allow for continuity of care for the thousands of service members, retirees and family members that currently depend upon WRAMC. The final closure of the current WRAMC facility has been set for September 2011. In February of 2007, The Washington Post published a series of investigative articles outlining cases of alleged neglect (physical deterioration, bureaucratic nightmares, etc) at WRAMC as reported by outpatient soldiers and their family members. 
A scandal and media furor quickly developed, resulting in the firing of the WRAMC commanding general, Maj. Gen. George W. Weightman; the resignation of Secretary of the Army Francis J. Harvey (reportedly at the request of Secretary of Defense Robert Gates); the forced resignation of Lt. Gen. Kevin C. Kiley, commander from 2002 to 2004; congressional committee hearings; and commentary from numerous politicians, including President George W. Bush and Vice President Dick Cheney. Several independent governmental investigations are ongoing, and the controversy has spread to other military health facilities and the Department of Veterans Affairs health care system.
wikidoc
null
/index.php/Wannarexia
238
# Wannarexia Wannarexia, or anorexic yearning, is a label applied to someone who claims to have anorexia nervosa, or wishes they did. These individuals are also called wannarexic, "wanna-be ana", or "anorexic wannabe". The neologism wannarexia is a portmanteau of "wannabe" and "anorexia". The condition is a cultural phenomenon, not a diagnosis. Some people fitting this description may also be diagnosed with eating disorder not otherwise specified (EDNOS). Wannarexia is most common in teenage girls who want to be popular. Wannarexia is likely caused by a combination of cultural and media influences. Author and personal performance coach Susan Kano has written that "most young women have 'anorexic thoughts' and attitudes," and there are no diagnostic criteria for wannarexia. The distinction between anorexia and wannarexia is that anorexics are not satisfied by their weight loss, while wannarexics are more likely to derive pleasure from weight loss. Many people who actually suffer from the eating disorder anorexia are angry, offended, or frustrated about wannarexia. Although wannarexics may be inspired or motivated by the pro-anorexia, or pro-ana, community that promotes or supports anorexia as a lifestyle choice rather than an eating disorder, they are not welcome in this subculture. Participants in pro-ana web forums only want to associate with "real anorexics" and will shun wannarexics who only diet occasionally and are not dedicated to the lifestyle full-time. In this context, wannarexic is a pejorative term.
wikidoc
null
/index.php/Warfarin_(injection)
128
# Warfarin (injection) Warfarin (injection) is a vitamin K antagonist that is FDA approved for the treatment of venous thrombosis and its extension, pulmonary embolism, and thromboembolic complications associated with atrial fibrillation and/or cardiac valve replacement. There is a Black Box Warning for this drug. Common adverse reactions include fatal and nonfatal hemorrhage from any tissue or organ. Adequate and well-controlled studies with warfarin have not been conducted in any pediatric population, and the optimum dosing, safety, and efficacy in pediatric patients are unknown. Pediatric use of warfarin is based on adult data and recommendations, and on limited available pediatric data from observational studies and patient registries. Pediatric patients administered warfarin should avoid any activity or sport that may result in traumatic injury.
wikidoc
null
/index.php/Warfarin_(oral)
87
# Warfarin (oral) Warfarin (oral) is an anticoagulant that is FDA approved for the treatment of venous thromboembolism, pulmonary embolism, thromboembolic complications associated with atrial fibrillation, cardiac valve replacement, and/or myocardial infarction. There is a Black Box Warning for this drug. Common adverse reactions include hemorrhage, necrosis of skin and other tissues, systemic atheroemboli, and cholesterol microemboli. Other reported adverse reactions include nausea, vomiting, diarrhea, taste alteration, abdominal pain, flatulence, bloating, hepatitis, and elevated liver enzymes. Cholestatic hepatitis has been associated with concomitant administration of Coumadin and ticlopidine.
wikidoc
null
/index.php/Warm-blooded
934
# Warm-blooded Warm-blooded animals maintain thermal homeostasis; that is, they keep their body temperature at a constant level. This involves the ability to cool down or produce more body heat. Warm-blooded animals mainly control their body temperature by regulating their metabolic rates (e.g. increasing their metabolic rate as the surrounding temperature begins to decrease). Both the terms "warm-blooded" and "cold-blooded" have fallen out of favor with scientists, because of the vagueness of the terms, and due to an increased understanding in this field. Body temperature types do not fall into simple either/or categories. Each term may be replaced with one or more variants (see Definitions of warm-bloodedness). Body temperature maintenance incorporates a wide range of different techniques that result in a body temperature continuum, with the traditional ideals of warm-blooded and cold-blooded being at opposite ends of the spectrum. A large proportion of the creatures traditionally called "warm-blooded" (mammals and birds) fit all three of the categories commonly used to define warm-bloodedness: endothermy, homeothermy, and tachymetabolism. However, over the past 30 years, studies in the field of animal thermophysiology have revealed many species belonging to these two groups that do not fit all these criteria. For example, many bats and small birds are poikilothermic and bradymetabolic when they sleep for the night, or day. For these creatures, another term was coined: heterothermy. Further studies on animals that were traditionally assumed to be cold-blooded have shown that most creatures incorporate different variations of the three terms named above, along with their counterparts (ectothermy, poikilothermy and bradymetabolism), thus creating a broad spectrum of body temperature types (see temperature control in cold-blooded animals). The creatures traditionally regarded as warm-blooded have a larger number of mitochondria per cell, which enables them to generate heat by increasing the rate at which they "burn" fats and sugars. This requires a much greater quantity of food than is needed by cold-blooded animals in order to replace the fat and sugar reserves. Many endothermic animals supplement these reserves by shivering in cold conditions, since muscular activity also converts fats and sugars into heat. In winter, there may not be enough food to enable an endotherm to keep its metabolic rate stable all day, so some organisms go into a controlled state of hypothermia called hibernation, or torpor. This conserves energy by lowering the body temperature. Many birds and small mammals (e.g. tenrecs) also allow their body temperatures to drop at night to reduce the energy cost of maintaining body temperature. Humans also slow down their metabolism slightly during sleep. Heat loss is a major threat to smaller creatures, as they have a larger ratio of surface area to volume. Most small warm-blooded animals have insulation in the form of fur or feathers. Aquatic warm-blooded animals generally use deep layers of fat under the skin for insulation, since fur or feathers would spoil their streamlining. Penguins use both feathers and fat, since their need for streamlining limits the degree of insulation which feathers alone can give them. Birds, especially waders, have blood vessels in their lower legs which act as heat exchangers: veins lie right next to arteries and thus extract heat from the arteries and carry it back into the trunk. Many warm-blooded animals blanch (become paler) to reduce heat loss by reducing the blood flow to the skin. 
In equatorial climates and during temperate summers, over-heating is as great a threat as cold. In hot conditions, many warm-blooded animals increase heat loss by panting and/or flushing (increasing the blood flow to the skin). Hairless and short-haired mammals also sweat, since the evaporation of sweat removes a large amount of heat. Elephants keep cool by using their huge ears rather like the radiators in automobiles: they flap their ears to increase the airflow over them. The overall speed of an animal's metabolism increases by a factor of about 2 for every 10 °C rise in temperature (limited by the need to avoid hyperthermia); a short worked example of this scaling appears at the end of this entry. Warm-bloodedness does not provide greater speed than cold-bloodedness: cold-blooded animals can move as fast as warm-blooded animals of the same size and build. But warm-blooded animals have much greater stamina than cold-blooded creatures of the same size and build, because their faster metabolisms quickly regenerate energy supplies (especially ATP) and break down muscular waste products (especially lactate). This enables warm-blooded predators to run down prey, warm-blooded prey to outrun cold-blooded predators (provided they avoid the initial charge or ambush), and warm-blooded animals to be much more successful foragers. Enzymes have strong temperature preferences, and their efficiency is much reduced outside their preferred ranges. A creature with a fairly constant body temperature can therefore use enzymes which are efficient at that temperature. Another advantage of a homeothermic animal is its ability to maintain a constant body temperature even in freezing cold weather. A poikilotherm must either operate well below optimum efficiency most of the time or spend extra resources making a wider range of enzymes to cover the wider range of body temperatures. Because warm-blooded animals use enzymes which are specialised for a narrow range of body temperatures, over-cooling rapidly leads to torpor and then death. Also, the energy required to maintain the homeothermic temperature comes from food; this results in homeothermic animals needing to eat much more food than poikilothermic animals. Scientific understanding of thermal regulation regimes has advanced greatly since the original distinction was made between warm- and cold-blooded animals, and the issue has been studied much more extensively.
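The "factor of about 2 for every 10 °C" relationship quoted above is commonly expressed with a temperature coefficient known as Q10. The following is a minimal illustrative sketch, assuming Q10 = 2; the function name and the example temperatures are chosen for illustration and are not taken from this entry.

```python
# Minimal sketch of the Q10 rule of thumb described above: metabolic rate
# changes by a factor of about 2 for every 10 degree Celsius change in
# temperature. Q10 = 2 and the example temperatures are assumptions.

def metabolic_rate_ratio(t1_celsius, t2_celsius, q10=2.0):
    """Factor by which metabolic rate changes when temperature moves from t1 to t2."""
    return q10 ** ((t2_celsius - t1_celsius) / 10.0)

print(metabolic_rate_ratio(27.0, 37.0))  # 2.0 -> a 10 degree rise roughly doubles the rate
print(metabolic_rate_ratio(37.0, 27.0))  # 0.5 -> a 10 degree drop roughly halves it
```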
wikidoc
null
/index.php/Warming_up
343
# Warming up ## Benefits A warm-up is usually performed before participating in technical sports or exercising. A warm-up generally consists of a gradual increase in the intensity of physical activity (a pulse raiser), joint mobility exercises, stretching, and a sport-related activity; for example, before running or playing an intense sport one might slowly jog to warm the muscles and increase the heart rate. Warm-ups should be specific to the exercise that will follow: they should prepare the muscles to be used and activate the energy systems required for that particular activity. The risks and benefits of combining stretching with warming up are mixed and in some cases disputed. Warming up prepares the body both mentally and physically: athletes warm up not only to prepare their bodies physically for training or competition but also to prepare themselves mentally. Warm-ups are a crucial part of performance; completed correctly, they enable the body to perform at its peak ability at that time. A typical warm-up includes a gradual increase in physical activity to raise the pulse (e.g. cycling), joint mobility exercises, stretching, and a sport-related activity (e.g. dribbling for basketball), and it should be specific to the task to be performed in order to activate the correct energy systems and prepare the correct muscles. Beneficial effects of warming up include:
- Increased heart rate. This enables oxygen in the blood to reach the muscles faster, so the muscles fatigue more slowly; more synovial fluid is produced to reduce friction in the joints; and the capillaries dilate, allowing more oxygen to be carried in the blood.
- Higher temperature in the muscles. This reduces the thickness of the blood, letting oxygen travel to different parts of the body more quickly; it also decreases the viscosity within the muscle, helps remove lactic acid, gives the muscle fibres greater extensibility and elasticity, and increases the force of muscle contraction.
wikidoc
null
/index.php/Warthin%27s_tumor_overview
614
# Warthin's tumor overview Warthin's tumor is a type of benign tumor of the salivary glands. It is also known as benign papillary cystadenoma lymphomatosum. Its etiology is unknown, but there is a strong association with cigarette smoking: smokers are at 8 times greater risk of developing Warthin's tumor than the general population. Warthin's tumor arises from the salivary gland epithelium, the secretory cells of the salivary gland. On gross pathology, a cystic and multicentric appearance is characteristic; on microscopic histopathological analysis, papillae, a fibrous capsule, and cystic spaces are characteristic findings. Warthin's tumor must be differentiated from salivary gland cysts, salivary gland lymphoma, and salivary gland cancer. An estimated 2,000 to 2,500 cases occur annually. Warthin's tumor commonly affects elderly patients older than 60 years, and males are more commonly affected than females, with a male to female ratio ranging from 2.6:1 to 10:1. The most potent risk factor for the development of Warthin's tumor is smoking; other risk factors include irradiation, Epstein-Barr virus, and alcohol. If left untreated, a few patients with Warthin's tumor may progress to develop facial paralysis. Common complications include squamous cell carcinoma and facial paralysis, and the prognosis is generally good. The hallmark of Warthin's tumor is swelling of the jaw, cheek, mouth, or neck, and a positive history of a swollen salivary gland and jaw pain is suggestive of the diagnosis. Common symptoms include a swollen salivary gland, a lump near the back of the lower jaw, jaw pain, a sensation of pressure, facial nerve paralysis, tinnitus, impaired hearing, earache, and blood in the saliva. Patients usually appear well, and physical examination is usually remarkable for a firm, solitary, mobile, nontender mass that is normal in color and appearance. X-ray, computed tomography (CT), and magnetic resonance imaging (MRI) may help with diagnosis. Neck CT findings suggestive of Warthin's tumor include a cystic lesion posteriorly within the parotid with a focal tumor nodule and an absence of calcifications. Neck ultrasound findings suggestive of Warthin's tumor include a well-defined, ovoid, hyperechoic mass with internal cystic areas and hypervascularity. On biopsy, Warthin's tumor is characterized by cystic spaces surrounded by two uniform rows of cells with centrally placed pyknotic nuclei, papillae with two rows of pink epithelial cells, and lymphoid stroma. Surgery is the mainstay of treatment, and as a benign tumor, the prognosis of Warthin's tumor is good.
wikidoc
null
/index.php/Warwick_Medical_School
1,606
# Warwick Medical School The Warwick Medical School is based at one of the UK's leading research universities. The University of Warwick is consistently in the top 10 Times University ratings. The School was opened in 2000 as part of a government initiative to train more doctors in Britain. Originally linked with Leicester Medical School, Warwick has enjoyed rapid growth and in 2007 it was granted independent degree-awarding status by the Privy Council on the recommendation of the General Medical Council of the United Kingdom. Warwick Medical School is the only solely graduate-entry school in the UK. The School comprises three institutes: the Institute of Clinical Education (ICE) which co-ordinates undergraduate and postgraduate teaching, the Health Sciences Research Institute (HSRI) and the Clinical Sciences Research Institute (CSRI). Warwick offers a four-year Bachelor of Medicine and Bachelor of Surgery (MB ChB) to biological, natural and physical science graduates. Applicants must have a good (upper second and above) degree or equivalent. The course features early involvement with patients and focuses on developing both clinical and communication skills from the beginning. The MB ChB is divided into two phases: Phase I lasts for 18 months and accounts for the bulk of academic learning. As well as attending lectures, students work in small learning groups, guided by clinicians or academic staff. Phase 2 is largely based around 11 clinical blocks of 8 weeks duration and an 8-week elective attachment. The majority of the clinical placements are in three hospitals, the new University Hospital Coventry at Walsgrave (UHCW), Warwick Hospital and George Eliot Hospital. Placements are also provided in primary and community care settings, ranging from GP practices to outreach projects and mental health services in the local area. The admissions procedure for the MB ChB course at the Medical School begins with an application through UCAS. Prospective students are then invited to take the UK Clinical Aptitude Test. A proportion of applicants are invited to a selection centre which involves an interview and some team and written exercises. Successful applicants are then selected based on their performance at the selection centre. In 2005, 900 applications were received of which 300 were interviewed for 164 places. Continuing Professional Development (CPD) Warwick Medical School has more than 2,000 postgraduate taught students enrolled on CPD courses. It offers Postgraduate Awards, Postgraduate Certificates, Postgraduate Diplomas, Masters Degree, Short Courses and Undergraduate Level and Non-accredited Courses. Subject areas include Child Health, Chronic Disease Management, Dentistry, Diabetes, Public Health and Occupational Health. Warwick also offers a taught Postgraduate programme in Medical Education designed to provide health care professionals involved in the delivery of teaching and training in the health care environment with appropriate pedagogic skills. Research Degrees The School offers a three year full-time or five year part-time PhD or a two year full time or three year part time MPhil. Postgraduate students can also choose to study for an MSc by Research, one year full time or two years part time, or an MD (Doctor of Medicine) which is a two year full time or three year part time course of study. All students have a team of two or more supervisors. The supervisory team will meet on a regular basis with the student to ensure adequate monitoring and supervision of the student. 
The School works in close collaboration with a number of departments within the University. This collaboration enables students to have supervisors from different departments with different expertise. The HSRI specialises in community focused academic research with links to a number of NHS trusts and is the co-ordinating centre for the Warwick and West Midlands Primary Care Research Network. The Institute covers epidemiology, clinical trials, biostatistics, health economics, modelling, social sciences and psychology. Key research areas include are public mental health, emergency care and rehabilitation, cancer and primary care. There are also developing areas such as health care systems improvement. The Warwick Clinical Trials Unit was set up in 2005 within the HSRI. It is an academic clinical trials unit with expertise in the design and conduct of trials, particularly of complex health states and interventions. The trials unit has four major strands of work: musculoskeletal conditions including injury prevention and management; cancer; clinical trials methodology; and systematic reviews. The unit has grown dramatically since 2005 and a new Clinical Trials Building is planned for the Gibbet Hill campus. The Wolfson Foundation is supporting this new building with a £1 million grant. The Clinical Sciences Research Institute (CSRI) is based in a modern, purpose-built facility on the site of the major regional University Hospital Coventry and Warwickshire (UHCW) and was opened in 2005. It specialises in translational medicine, epidemiology and clinical effectiveness. There are 36 senior research academics based at the Institute, which has state-of-the-art equipment for molecular, cellular, proteomic, transcriptomic and functional studies. The Institute's research themes are consistent with the strategic areas of development at University Hospital Coventry and Warwickshire. The School was established as a collaborative venture with the University of Leicester, the Leicester-Warwick Medical School. Professor Ian Lauder was appointed Dean of the new joint School. The first students to study at Warwick arrived in September 2000. The School had temporary headquarters on the main University of Warwick campus until the Medical Teaching Centre was completed in August 2001 and was formally opened by the Secretary of State for Health in 2002. In 2003 the current Dean of Warwick Medical School Professor Yvonne Carter was appointed as Vice-Dean, before taking on the role of Dean of Warwick Medical School the following year. The first cohort of MB ChB students graduated in 2004, the same year that the old Mathematics and Statistics building at Gibbet Hill was refurbished and renamed the Medical School Building. The Medical School Building is now home to the Dean's Office, the Warwick Clinical Trials Unit and HSRI. The Clinical Sciences Research Institute was opened on the site of University Hospital Coventry and Warwickshire in 2005, by Sir Graeme Catto, President of the General Medical Council. In 2006, the School opened a Biomedical Learning Grid for students. This study resource is equipped with up-to-date IT equipment, interactive white boards, plasma screens and PCs as well as more traditional learning materials such as reference texts and anatomical models. Following an intensive period of assessment in 2006 by the General Medical Council, Warwick was formally recommended to receive independent degree-awarding status. 
This was enacted on 2 May 2007, when the Medical Act was amended by Her Majesty the Queen in the Privy Council. Independent degree-awarding status came into effect on 6 June 2007, and MB ChB graduates in the summer of 2007 were the first to receive University of Warwick medical degrees. In 2005 a student-led project, Medics Without A Paddle, was initiated to write an e-book containing free revision notes for all of the 70-odd pages of objectives that students were expected to cover during their Phase 2 clinical attachments. The material is produced entirely by the student body, and two student editors oversee entries. The project was conceived to promote co-operation between students in spreading the knowledge and experience gained from a variety of sources. The site URL is http://uk.geocities.com/medicswithoutapaddle/index.html. The medics' community is brought together under the leadership of the University of Warwick Medsoc, which organises social events, medics' sports (scheduled so as not to clash with their busy timetables) and the yearly Revue, whose function is mainly to poke fun at the Medical School and the NHS. As reported in the March 2007 issue of StudentBMJ News, a third of the 185 students who sat the second-year End of Phase 1 exam were unsuccessful. Following a retake examination, 35 students were unable to continue with the course and faced termination of their studies. The high failure rate was investigated by Warwick University (see below; see also the Senate Minutes of March 2007). Following a thorough appeals process, 9 of the 35 students had their course terminated; the remaining students were permitted to re-sit the second year of the course. Warwick Medical School conducted an internal investigation in an attempt to understand why 30% of students failed its 'End of Phase 1' examination that year. According to a statement sent to Wikipedia, the Medical School administration gave "deep consideration to the unexpected results of the recent Leicester-Warwick Medical Schools Phase 1 examination and are unable to explain why so many students failed this assessment, including some students with previously strong academic records".[citation needed] External examiners commented that the assessment was fair and was conducted in exemplary fashion.[citation needed] The School conducted a significant event review to see if it could shed light on this, as there were no obvious reasons, and stated that it could not attribute the increased failure rate to changes in the student curriculum or to changes in the assessment process. There appeared to have been no single area of the assessment which caused difficulty across the student cohort. The Significant Event Review Report was made available to all students in May 2007. Warwick Medical School has taken steps to implement those recommendations of the report which have not already been implemented in the changes to the new Warwick curriculum. The School claims that whilst it is not in the Medical School's interest to have a large number of students fail any progression test, it had to act responsibly in accordance with student performance; if all had met the required standard, they would all have passed.
wikidoc
null
/index.php/Washington_Hospital_Center
821
# Washington Hospital Center A member of MedStar Health, the not-for-profit Hospital Center is licensed for 926 beds and, on average, operates near capacity. Health services in primary, secondary and tertiary care are offered to adult and neonatal patients. The Hospital Center occupies a 47-acre campus in Northwest Washington it shares with three other medical facilities. Immediately adjacent to the Washington Hospital Center are the National Rehabilitation Hospital and Children's National Medical Center (although Children's has satellite centers scattered across the city). The Washington Heart program is a national leader in the research, diagnosis and treatment of cardiovascular disease; its angioplasty or cardiac catheterization laboratory is recognized by the DuPont Foundation as the busiest in the nation.[citation needed] One of the Washington area's first heart transplants was done at the Hospital Center on May 22,1987. In addition to its cardiac care specialties, the Hospital Center is respected as a top facility in other areas including cancer, neurosciences, gastrointestinal disorders, endocrinology, women's services, transplantation and burn. Washington Hospital Center's neurosciences program offers the full range of surgical and minimally invasive treatment and operates the only JCAHO-accredited Primary Stroke Center in the District.[citation needed] The adult burn center is the most advanced in the area.[citation needed] The Washington Cancer Institute (WCI) is the District's largest cancer care provider, treating more cancer patients than any other program in the nation's capital. The Cancer Institute diagnosed more than 2,580 new cases during fiscal year 2006. There were more than 84,299 outpatient visits and more than 2,423 inpatient admissions during that period. WCI provides comprehensive, interdisciplinary care including surgical, radiation and medical oncology services as well as counseling for patients and families, cancer education, community outreach program and clinical research trials. The Center for Breast Health saw an estimated 15,358 patients during fiscal year 2006. The Hospital Center's transplantation program ranks among the top five percent in the nation for patient outcomes and consistently exceeds the national average.[citation needed] The program for kidney, pancreas and heart is one of the busiest on the East Coast.[citation needed] Perhaps the Hospital Center's most wide-reaching presence is its MedSTAR Transport air ambulance service, which, as of 2007 had carried 40,293 patients since its inception in 1983. The American College of Surgeons consistently recognizes the MedSTAR Trauma program as one of the nation's best Level I shock/trauma units.[citation needed] The Washington Hospital Center was founded in March 1958 when three specialty hospitals merged into one. On May 7, 1998, Medlantic Healthcare Group, the Hospital Center's not-for-profit parent company, merged with Helix Health, a group of four Baltimore, MD-based hospitals, making the combined company the largest health care provider in the mid-Atlantic region. Helix/Medlantic was renamed MedStar Health on February 1, 1999. Template:Refimprovesect In fiscal year 2006, 46,155 inpatients were served --including 4,409 births-- and 366,248 outpatients. The Hospital Center has a medical/dental staff of 1,584. A total of 19,054 cardiac catheterizations were performed during FY 2006. There were 1,803 open-heart surgeries and six heart transplants performed during the fiscal year. 
There were 111 kidney transplants, four combination kidney/pancreas transplants and three pancreas transplants performed during fiscal year 2006. There were 3,791 helicopter transports and 1,007 trauma unit visits in FY 2006. There were 74,025 Emergency Department visits, including 18,776 admissions. The Hospital Center provided over $55.3 million in charity care during FY 2006. In 2007, the Washington Hospital Center was named among "America's Best Hospitals" for heart and heart surgery and kidney disease by U.S.News & World Report in the magazine's 18th annual survey of 5,462 health care facilities. The Hospital Center was the only Washington hospital to be ranked in the areas of heart and heart surgery and geriatrics and was the top ranked Washington area hospital in kidney disease and transplantation. Only 173 medical centers in the U.S. were ranked in one or more of 16 specialties designated in U.S.News & World Report's survey. A sampling of board-certified doctors in each specialty was randomly selected from the American Medical Association's master list of 850,000 physicians nationwide. U.S.News randomly polled 200 doctors in each specialty who were asked to list the five hospitals they considered best in their specialty for difficult cases. The mail survey also asked that the decision be made without consideration of cost or location. An estimated 46 percent of the 3,200 doctors contacted responded to the 2006 survey. Hospitals were judged on three equally weighted factors - reputation, mortality rate, and patient care-related factors including nursing and patient services. The 50 hospitals with the highest scores in each specialty made the list.
wikidoc
null
/index.php/Water
5,681
# Water Water is a common chemical substance that is essential for the survival of all known forms of life. In typical usage, water refers only to its liquid form or state, but the substance also has a solid state, ice, and a gaseous state, water vapor. About 1.460 petatonnes (Pt) of water covers 71% of the Earth's surface, mostly in oceans and other large water bodies, with 1.6% of water below ground in aquifers and 0.001% in the air as vapor, clouds (formed of solid and liquid water particles suspended in air), and precipitation. Some of the Earth's water is contained within man-made and natural objects near the Earth's surface such as water towers, animal and plant bodies, manufactured products, and food stores. Saltwater oceans hold 97% of surface water, glaciers and polar ice caps 2.4%, and other land surface water such as rivers, lakes and ponds 0.6%. Water moves continually through a cycle of evaporation or transpiration (evapotranspiration), precipitation, and runoff, usually reaching the sea. Winds carry water vapor over land at the same rate as runoff into the sea, about 36 Tt per year. Over land, evaporation and transpiration contribute another 71 Tt per year to the precipitation of 107 Tt per year over land. Some water is trapped for varying periods in ice caps, glaciers, aquifers, or in lakes, sometimes providing fresh water for life on land. Clean, fresh water is essential to human and other life. However, in many parts of the world, especially developing countries, it is in short supply. Water is a solvent for a wide variety of chemical substances. Water can appear in three phases. Water takes many different forms on Earth: water vapor and clouds in the sky; seawater and rarely icebergs in the ocean; glaciers and rivers in the mountains; and aquifers in the ground. Water can dissolve many different substances, giving it different tastes and odors. In fact, humans and other animals have developed senses to be able to evaluate the potability of water: animals generally dislike the taste of salty sea water and the water of putrid swamps and favor the purer water of a mountain spring or aquifer. Humans also tend to prefer cold water rather than lukewarm, as cold water is likely to contain fewer microbes. The taste advertised in spring water or mineral water derives from the minerals dissolved in it, as pure H2O is tasteless. As such, purity in spring and mineral water refers to purity from toxins, pollutants, and microbes. Much of the universe's water may be produced as a byproduct of star formation. When stars are born, their birth is accompanied by a strong outward wind of gas and dust. When this outflow of material eventually impacts the surrounding gas, the shock waves that are created compress and heat the gas. The water observed is quickly produced in this warm dense gas. Water has been detected in interstellar clouds within our galaxy, the Milky Way. It is believed that water exists in abundance in other galaxies too, because its components, hydrogen and oxygen, are among the most abundant elements in the universe. Interstellar clouds eventually condense into solar nebulae and solar systems, such as ours. Water ice may also be present on the Moon (as lunar ice), on the dwarf planet Ceres, and on Saturn's moon Tethys. Ice is probably part of the internal structure of Uranus, Neptune, and Pluto, and is found on comets. The existence of liquid water, and to a lesser extent its gaseous and solid forms, on Earth is vital to the existence of life as we know it. 
The Earth is located in the habitable zone of the solar system; if it were slightly closer to or further from the Sun (about 5%, or 8 million kilometres or so), the conditions which allow the three forms to be present simultaneously would be far less likely to exist. Earth's mass allows gravity to hold an atmosphere. Water vapor and carbon dioxide in the atmosphere provide a greenhouse effect which helps maintain a relatively steady surface temperature. If Earth were smaller, a thinner atmosphere would cause temperature extremes, preventing the accumulation of water except in polar ice caps (as on Mars). It has been proposed that life itself may maintain the conditions that have allowed its continued existence. The surface temperature of Earth has been relatively constant through geologic time despite varying levels of incoming solar radiation (insolation), indicating that a dynamic process governs Earth's temperature via a combination of greenhouse gases and surface or atmospheric albedo. This proposal is known as the Gaia hypothesis. The state of water also depends on a planet's gravity. If a planet is sufficiently massive, the water on it may be solid even at high temperatures, because of the high pressure caused by gravity. Hydrology is the study of the movement, distribution, and quality of water throughout the Earth. The study of the distribution of water is hydrography. The study of the distribution and movement of groundwater is hydrogeology, of glaciers is glaciology, of inland waters is limnology, and the study of the distribution of oceans is oceanography. Ecological processes involving hydrology are the focus of ecohydrology. The collective mass of water found on, under, and over the surface of a planet is called the hydrosphere. Earth's approximate water volume (the total water supply of the world) is 1 360 000 000 km³ (326 000 000 mi³). Liquid water is found in bodies of water, such as an ocean, sea, lake, river, stream, canal, pond, or puddle. The majority of water on Earth is sea water. Water is also present in the atmosphere in solid, liquid, and vapor phases. It also exists as groundwater in aquifers. The most important geological processes caused by water are chemical weathering, water erosion, water sediment transport and sedimentation, mudflows, and ice erosion and sedimentation by glaciers. The water cycle (known scientifically as the hydrologic cycle) refers to the continuous exchange of water within the hydrosphere, between the atmosphere, soil water, surface water, groundwater, and plants. Most water vapor over the oceans returns to the oceans, but winds carry water vapor over land at the same rate as runoff into the sea, about 36 Tt per year. Over land, evaporation and transpiration contribute another 71 Tt per year. Precipitation, at a rate of 107 Tt per year over land, has several forms: most commonly rain, snow, and hail, with some contribution from fog and dew; a short arithmetic check of these figures appears below. Condensed water in the air may also refract sunlight to produce rainbows. Runoff often collects in watersheds that flow into rivers. A mathematical model used to simulate river or stream flow and calculate water quality parameters is the hydrological transport model. Some of this water is diverted for agricultural irrigation. Rivers and seas offer opportunity for travel and commerce. Through erosion, runoff shapes the environment, creating river valleys and deltas which provide rich soil and level ground for the establishment of population centers. 
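As a minimal sketch of the arithmetic behind the transport figures quoted above, the following check uses only the numbers given in the text; the variable names are illustrative assumptions, not standard hydrological notation.

```python
# Simple mass-balance check of the land branch of the water cycle, using the
# figures quoted in the text (teratonnes of water per year).

vapor_advected_onto_land = 36      # ocean-to-land atmospheric transport, Tt/yr
evapotranspiration_over_land = 71  # evaporation plus transpiration over land, Tt/yr

precipitation_over_land = vapor_advected_onto_land + evapotranspiration_over_land
runoff_back_to_sea = precipitation_over_land - evapotranspiration_over_land

print(precipitation_over_land)  # 107, matching the quoted precipitation figure
print(runoff_back_to_sea)       # 36, matching the quoted runoff figure
```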
A flood occurs when an area of land, usually low-lying, is covered with water. It is when a river overflows its banks or flood from the sea. A drought is an extended period of months or years when a region notes a deficiency in its water supply. This occurs when a region receives consistently below average precipitation. Some runoff water is trapped for periods, for example in lakes. At high altitude, during winter, and in the far north and south, snow collects in ice caps, snow pack and glaciers. Water also infiltrates the ground and goes into aquifers. This groundwater later flows back to the surface in springs, or more spectacularly in hot springs and geysers. Groundwater is also extracted artificially in wells. This water storage is important, since clean, fresh water is essential to human and other land-based life. In many parts of the world, it is in short supply. Tides are the cyclic rising and falling of Earth's ocean surface caused by the tidal forces of the Moon and the Sun acting on the oceans. Tides cause changes in the depth of the marine and estuarine water bodies and produce oscillating currents known as tidal streams. The changing tide produced at a given location is the result of the changing positions of the Moon and Sun relative to the Earth coupled with the effects of Earth rotation and the local bathymetry. The strip of seashore that is submerged at high tide and exposed at low tide, the intertidal zone, is an important ecological product of ocean tides. From a biological standpoint, water has many distinct properties that are critical for the proliferation of life that set it apart from other substances. It carries out this role by allowing organic compounds to react in ways that ultimately allow replication. All known forms of life depend on water. Water is vital both as a solvent in which many of the body's solutes dissolve and as an essential part of many metabolic processes within the body. Metabolism is the sum total of anabolism and catabolism. In anabolism, water is removed from molecules (through energy requiring enzymatic chemical reactions) in order to grow larger molecules (e.g. starches, triglycerides and proteins for storage of fuels and information). In catabolism, water is used to break bonds in order to generate smaller molecules (e.g. glucose, fatty acids and amino acids to be used for fuels for energy use or other purposes). Water is thus essential and central to these metabolic processes. Therefore, without water, these metabolic processes would cease to exist, leaving us to muse about what processes would be in its place, such as gas absorption, dust collection, etc. Water is also central to photosynthesis and respiration. Photosynthetic cells use the sun's energy to split off water's hydrogen from oxygen. Hydrogen is combined with CO2 (absorbed from air or water) to form glucose and release oxygen. All living cells use such fuels and oxidize the hydrogen and carbon to capture the sun's energy and reform water and CO2 in the process (cellular respiration). Water is also central to acid-base neutrality and enzyme function. An acid, a hydrogen ion (H+, that is, a proton) donor, can be neutralized by a base, a proton acceptor such as hydroxide ion (OH−) to form water. Water is considered to be neutral, with a pH (the negative log of the hydrogen ion concentration) of 7. Acids have pH values less than 7 while bases have values greater than 7. Stomach acid (HCl) is useful to digestion. 
However, its corrosive effect on the esophagus during reflux can temporarily be neutralized by ingestion of a base such as aluminum hydroxide to produce the neutral molecules water and the salt aluminum chloride. Human biochemistry that involves enzymes usually performs optimally around a biologically neutral pH of 7.4. For example a cell of Escherichia coli contains 70% of water, a human body 60-70%, plant body up to 90% and the body of an adult jellyfish is made up of 94–98% water. Earth's waters are filled with life. The earliest life forms appeared in water; nearly all fish live exclusively in water, and there are many types of marine mammals, such as dolphins and whales that also live in the water. Some kinds of animals, such as amphibians, spend portions of their lives in water and portions on land. Plants such as kelp and algae grow in the water and are the basis for some underwater ecosystems. Plankton is generally the foundation of the ocean food chain. Aquatic animals must obtain oxygen to survive, and they do so in various ways. Fish have gills instead of lungs, although some species of fish, such as the lungfish, have both. Marine mammals, such as dolphins, whales, otters, and seals need to surface periodically to breathe air. Smaller life forms are able to absorb oxygen through their skin. Civilization has historically flourished around rivers and major waterways; Mesopotamia, the so-called cradle of civilization, was situated between the major rivers Tigris and Euphrates; the ancient society of the Egyptians depended entirely upon the Nile. Large metropolises like Rotterdam, London, Montreal, Paris, New York City, Shanghai, Tokyo, Chicago, and Hong Kong owe their success in part to their easy accessibility via water and the resultant expansion of trade. Islands with safe water ports, like Singapore, have flourished for the same reason. In places such as North Africa and the Middle East, where water is more scarce, access to clean drinking water was and is a major factor in human development. Water fit for human consumption is called drinking water or potable water. Water that is not potable can be made potable by filtration or distillation (heating it until it becomes water vapor, and then capturing the vapor without any of the impurities it leaves behind), or by other methods (chemical or heat treatment that kills bacteria). Sometimes the term safe water is applied to potable water of a lower quality threshold (i.e., it is used effectively for nutrition in humans that have weak access to water cleaning processes, and does more good than harm). Water that is not fit for drinking but is not harmful for humans when used for swimming or bathing is called by various names other than potable or drinking water, and is sometimes called safe water, or "safe for bathing". Chlorine is a skin and mucous membrane irritant that is used to make water safe for bathing or drinking. Its use is highly technical and is usually monitored by government regulations (typically 1 part per million (ppm) for drinking water, and 1-2 ppm of chlorine not yet reacted with impurities for bathing water). This natural resource is becoming scarcer in certain places, and its availability is a major social and economic concern. Currently, about 1 billion people around the world routinely drink unhealthy water. Most countries accepted the goal of halving by 2015 the number of people worldwide who do not have access to safe water and sanitation during the 2003 G8 Evian summit. 
Even if this difficult goal is met, it will still leave more than an estimated half a billion people without access to safe drinking water and over 1 billion without access to adequate sanitation. Poor water quality and bad sanitation are deadly; some 5 million deaths a year are caused by polluted drinking water. Water, however, is not a finite resource, but rather re-circulated as potable water in precipitation in quantities many degrees of magnitude higher than human consumption. Therefore, it is the relatively small quantity of water in reserve in the earth (about 1% of our drinking water supply, which is replenished in aquifers around every 1 to 10 years), that is a non-renewable resource, and it is, rather, the distribution of potable and irrigation water which is scarce, rather than the actual amount of it that exists on the earth. Water-poor countries use importation of goods as the primary method of importing water (to leave enough for local human consumption), since the manufacturing process uses around 10 to 100 times products' masses in water. In the developing world, 90% of all wastewater still goes untreated into local rivers and streams. Some 50 countries, with roughly a third of the world's population, also suffer from medium or high water stress, and 17 of these extract more water annually than is recharged through their natural water cycles. The strain not only affects surface freshwater bodies like rivers and lakes, but it also degrades groundwater resources. The most important use of water in agriculture is for an irrigation and irrigation is key component to produce enough food. Irrigation takes up to 90% of water withdrawn in some developing countries. On 7 April 1795, the gram was defined in France to be equal to "the absolute weight of a volume of pure water equal to a cube of one hundredth of a meter, and to the temperature of the melting ice." For practical purposes though, a metallic reference standard was required, one thousand times more massive, the kilogram. Work was therefore commissioned to determine precisely how massive one liter of water was. In spite of the fact that the decreed definition of the gram specified water at 0 °C—a highly stable temperature point—the scientists chose to redefine the standard and to perform their measurements at the most stable density point: the temperature at which water reaches maximum density, which was measured at the time as 4 °C. The Kelvin temperature scale of the SI system is based on the triple point of water. The scale is a more accurate development of the Celsius temperature scale, which is defined by the boiling point (100 °C) and melting point (0 °C) of water. Natural water consists mainly of the isotopes hydrogen-1 and oxygen-16, but there is also small quantity of heavier hydrogen-2 (deuterium). The amount of deuterium oxides or heavy water is very small, but it still affects the properties of water. Water from rivers and lakes tends to contain less deuterium than seawater. Therefore, a standard water called Vienna Standard Mean Ocean Water is defined as the standard water. The human body is anywhere from 55% to 78% water depending on body size. To function properly, the body requires between one and seven liters of water per day to avoid dehydration; the precise amount depends on the level of activity, temperature, humidity, and other factors. Most of this is ingested through foods or beverages other than drinking straight water. 
It is not clear how much water intake is needed by healthy people, though most advocates agree that 6–7 glasses of water (approximately 2 litres) daily is the minimum to maintain proper hydration. Medical literature favors a lower consumption, typically 1 liter of water for an average male, excluding extra requirements due to fluid loss from exercise or warm weather. For those who have healthy kidneys, it is rather difficult to drink too much water, but (especially in warm humid weather and while exercising) it is dangerous to drink too little. People can drink far more water than necessary while exercising, however, putting them at risk of water intoxication (hyperhydration), which can be fatal. The "fact" that a person should consume eight glasses of water per day cannot be traced back to a scientific source. There are other myths such as the effect of water on weight loss and constipation that have been dispelled. An original recommendation for water intake in 1945 by the Food and Nutrition Board of the National Research Council read: "An ordinary standard for diverse persons is 1 milliliter for each calorie of food. Most of this quantity is contained in prepared foods." The latest dietary reference intake report by the United States National Research Council in general recommended (including food sources): 2.7 liters of water total for women and 3.7 liters for men. Specifically, pregnant and breastfeeding women need additional fluids to stay hydrated. According to the Institute of Medicine—who recommend that, on average, women consume 2.2 litres and men 3.0 litres—this is recommended to be 2.4 litres (approx. 9 cups) for pregnant women and 3 litres (approx. 12.5 cups) for breastfeeding women since an especially large amount of fluid is lost during nursing. Also noted is that normally, about 20 percent of water intake comes from food, while the rest comes from drinking water and beverages (caffeinated included). Water is excreted from the body in multiple forms; through urine and feces, through sweating, and by exhalation of water vapor in the breath. With physical exertion and heat exposure, water loss will increase and daily fluid needs may increase as well. Humans require water that does not contain too many impurities. Common impurities include metal salts and/or harmful bacteria, such as Vibrio. Some solutes are acceptable and even desirable for taste enhancement and to provide needed electrolytes. The single largest freshwater resource suitable for drinking is Lake Baikal in Siberia, which has a very low salt and calcium content and is very clean. Dissolving (or suspending) is used to wash everyday items such as the human body, clothes, floors, cars, food, and pets. Also, human wastes are carried by water in the sewage system. Its use as a cleaning solvent consumes most of water in industrialized countries. Water can facilitate the chemical processing of wastewater. An aqueous environment can be favourable to the breakdown of pollutants, due to the ability to gain an homogenous solution that is pumpable and flexible to treat. Aerobic treatment can be used by applying oxygen or air to a solution reduce the reactivity of substances within it. Water also facilitates biological processing of waste that have been dissolved within it. Microorganisms that live within water can access dissolved wastes and can feed upon them breaking them down into less polluting substances. 
Reedbeds and anaerobic digesters are both examples of biological systems that are particularly suited to the treatment of effluents. Typically from both chemical and biological treatment of wastes, there is often a solid residue or cake that is left over from the treatment process. Depending upon its constituent parts, this 'cake' may be dried and spread on land as a fertilizer if it has beneficial properties, or alternatively disposed of in landfill or incinerated. Water and steam are used as heat transfer fluids in diverse heat exchange systems, due to its availability and high heat capacity, both as a coolant and for heating. Cool water may even be naturally available from a lake or the sea. Condensing steam is a particularly efficient heating fluid because of the large heat of vaporization. A disadvantage is that water and steam are somewhat corrosive. In almost all electric power plants, water is the coolant, which vaporizes and drives steam turbines to drive generators. In the nuclear industry, water can also be used as a neutron moderator. In a pressurized water reactor, water is both a coolant and a moderator. This provides a passive safety measure, as removing the water from the reactor also slows the nuclear reaction down. Water has a high heat of vaporization and is relatively inert, which makes it a good fire extinguishing fluid. The evaporation of water carries heat away from the fire. However, water cannot be used to fight fires of electric equipment, because impure water is electrically conductive, or of oils and organic solvents, because they float on water and the explosive boiling of water tends to spread the burning liquid. Decomposition of water may have played a role in the Chernobyl disaster. Initially, cooling of the incandescent reactor was attempted, but the result was an explosion, when the extreme heat caused water to flash into steam, thus leading to a steam explosion; it may also have decomposed water into hydrogen and oxygen, which subsequently exploded. Organic reactions are usually quenched with water or a water solution of a suitable acid, base or buffer. Water is generally effective in removing inorganic salts. In inorganic reactions, water is a common solvent. In organic reactions, it is usually not used as a reaction solvent, because it does not dissolve the reactants well and is amphoteric (acidic and basic) and nucleophilic. Nevertheless, these properties are sometimes desirable. Also, acceleration of Diels-Alder reactions by water has been observed. Supercritical water has recently been a topic of research. Oxygen-saturated supercritical water combusts organic pollutants efficiently. Humans use water for many recreational purposes, as well as for exercising and for sports. Some of these include swimming, waterskiing, boating, and diving. In addition, some sports, like ice hockey and ice skating, are played on ice. Lakesides, beaches and waterparks are popular places for people to go to relax and enjoy recreation. Many find the sound of flowing water to be calming, too. Some keep fish and other life in aquariums or ponds for show, fun, and companionship. Humans also use water for snow sports i.e. skiing or snowboarding, which requires the water to be frozen. People may also use water for play fighting such as with snowballs, water guns or water balloons. They may also make fountains and use water in their public or private decorations. 
Water supply facilities include, for example, water wells, cisterns for rainwater harvesting, water supply networks, water purification facilities, water tanks, water towers and water pipes, including old aqueducts. Atmospheric water generators are in development. Drinking water is often collected at springs or extracted from artificial borings in the ground, known as wells. Building more wells in adequate places is thus a possible way to produce more water, assuming the aquifers can supply an adequate flow. Other water sources are rainwater and river or lake water. This surface water, however, must be purified for human consumption. This may involve removal of undissolved substances, dissolved substances and harmful microbes. Popular methods are filtering with sand, which removes only undissolved material, and chlorination and boiling, which kill harmful microbes. Distillation performs all three functions. More advanced techniques exist, such as reverse osmosis. Desalination of abundant seawater is a more expensive solution used in coastal arid climates. The distribution of drinking water is done through municipal water systems or as bottled water. Governments in many countries have programs to distribute water to the needy at no charge. Others argue that the market mechanism and free enterprise are best suited to manage this scarce resource and to finance the boring of wells or the construction of dams and reservoirs. Reducing waste by using drinking water only for human consumption is another option. In some cities such as Hong Kong, sea water is extensively used for flushing toilets citywide in order to conserve fresh water resources. Polluting water may be the biggest single misuse of water; to the extent that a pollutant limits other uses of the water, it becomes a waste of the resource, regardless of benefits to the polluter. Like other types of pollution, this does not enter standard accounting of market costs, being treated as an externality for which the market cannot account. Thus other people pay the price of water pollution, while the private firms' profits are not redistributed to the local population that is harmed by this pollution. Pharmaceuticals consumed by humans often end up in the waterways and can have detrimental effects on aquatic life if they bioaccumulate and if they are not biodegradable. Water is used in power generation. Hydroelectricity is electricity obtained from hydropower. Hydroelectric power comes from water driving a water turbine connected to a generator. Hydroelectricity is a low-cost, non-polluting, renewable energy source. The energy is supplied by the sun: heat from the sun evaporates water, which condenses as rain at higher altitudes and then flows downhill. Pressurized water is used in water blasting and water jet cutters, and very high pressure water guns are used for precise cutting. The technique works well, is relatively safe, and is not harmful to the environment. Water is also used to cool machinery, for example to prevent saw blades from overheating. Water is also used in many industrial processes and machines, such as the steam turbine and heat exchanger, in addition to its use as a chemical solvent. Discharge of untreated water from industrial uses is pollution. Pollution includes discharged solutes (chemical pollution) and discharged coolant water (thermal pollution). Industry requires pure water for many applications and utilizes a variety of purification techniques both in water supply and discharge.
Water plays many critical roles within the field of food science. It is important for a food scientist to understand the roles that water plays within food processing to ensure the success of their products. Solutes such as salts and sugars found in water affect the physical properties of water. The boiling and freezing points of water are affected by solutes. One mole of sucrose (sugar) per kilogram of water raises the boiling point by 0.52 °C, and one mole of salt per kilogram raises the boiling point by 1.04 °C, while the freezing point of water is lowered in a similar way (a short worked sketch of these figures appears at the end of this section). Solutes in water also affect water activity, which affects many chemical reactions and the growth of microbes in food. Water activity can be described as the ratio of the vapor pressure of water in a solution to the vapor pressure of pure water. Solutes in water lower water activity. This is important to know because most bacterial growth ceases at low levels of water activity. Microbial growth affects not only the safety of food but also its preservation and shelf life. Water hardness is also a critical factor in food processing. It can dramatically affect the quality of a product as well as play a role in sanitation. Water hardness is classified based on the amount of removable calcium carbonate salt the water contains per gallon. Water hardness is measured in grains; 0.064 g of calcium carbonate is equivalent to one grain of hardness. Water is classified as soft if it contains 1 to 4 grains per gallon, medium if it contains 5 to 10 grains and hard if it contains 11 to 20 grains. The hardness of water may be altered or treated by using a chemical ion exchange system. The hardness of water also affects its pH balance, which plays a critical role in food processing. For example, hard water prevents successful production of clear beverages. Water hardness also affects sanitation; with increasing hardness, there is a loss of effectiveness in its use as a sanitizer. Boiling, steaming, and simmering are popular cooking methods that often require immersing food in water or its gaseous state, steam. Besides cooking, water is also used for dishwashing. Water politics is politics affected by water and water resources. Because of overpopulation, mass consumption, misuse, and water pollution, the availability of drinking water per capita is inadequate and shrinking as of the year 2006. For this reason, water is a strategic resource across the globe and an important element in many political conflicts. Its scarcity causes health impacts and damage to biodiversity. The serious worldwide water situation is referred to as the water crisis. UNESCO's World Water Development Report (WWDR, 2003) from its World Water Assessment Program indicates that, in the next 20 years, the quantity of water available to everyone is predicted to decrease by 30%. 40% of the world's inhabitants currently have insufficient fresh water for minimal hygiene. More than 2.2 million people died in 2000 from waterborne diseases (related to the consumption of contaminated water) or drought. In 2004, the UK charity WaterAid reported that a child dies every 15 seconds from easily preventable water-related diseases, often owing to a lack of sewage disposal. Fresh water, now more precious than ever in our history for its extensive use in agriculture, high-tech manufacturing, and energy production, is increasingly receiving attention as a resource requiring better water management and sustainable use.
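The food-science figures quoted above translate directly into simple arithmetic. The sketch below is an illustrative simplification, not a food-engineering tool: it treats boiling-point elevation as linear in solute concentration, uses the per-mole values quoted in the text, and classifies hardness with the grain-per-gallon ranges given above; the vapor-pressure numbers in the example are hypothetical.

```python
def boiling_point_elevation_c(moles_per_kg_water, degrees_per_mole):
    """Approximate boiling-point rise, assuming a linear colligative response."""
    return moles_per_kg_water * degrees_per_mole

def water_activity(vapor_pressure_solution, vapor_pressure_pure_water):
    """Water activity: vapor pressure of water in the solution over that of pure water."""
    return vapor_pressure_solution / vapor_pressure_pure_water

def classify_hardness(grains_per_gallon):
    """Classify water hardness using the grain-per-gallon ranges quoted in the text."""
    if grains_per_gallon <= 4:
        return "soft"
    if grains_per_gallon <= 10:
        return "medium"
    if grains_per_gallon <= 20:
        return "hard"
    return "above the quoted ranges"

print(boiling_point_elevation_c(1.0, 0.52))  # one mole of sucrose per kg of water -> ~0.52 C rise
print(boiling_point_elevation_c(1.0, 1.04))  # one mole of salt per kg of water   -> ~1.04 C rise
print(water_activity(2.9, 3.2))              # hypothetical vapor pressures in kPa -> a_w of about 0.91
print(classify_hardness(7))                  # -> "medium"
```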
Organizations concerned with water protection include the International Water Association (IWA), WaterAid, Water 1st and the American Water Resources Association. Water-related conventions include the United Nations Convention to Combat Desertification (UNCCD), the International Convention for the Prevention of Pollution from Ships, the United Nations Convention on the Law of the Sea and the Ramsar Convention. World Day for Water takes place on March 22 and World Ocean Day on June 8. Water is considered a purifier in most religions. Major faiths that incorporate ritual washing (ablution) include Christianity, Hinduism, Rastafarianism, Islam, Shinto, Taoism, and Judaism. Immersion (or aspersion or affusion) of a person in water is a central sacrament of Christianity (where it is called baptism); it is also a part of the practice of other religions, including Judaism (mikvah) and Sikhism (Amrit Sanskar). In addition, a ritual bath in pure water is performed for the dead in many religions, including Judaism and Islam. In Islam, the five daily prayers can be done in most cases after washing certain parts of the body using clean water (wudu). In Shinto, water is used in almost all rituals to cleanse a person or an area (e.g., in the ritual of misogi). Water is mentioned in the Bible 442 times in the New International Version and 363 times in the King James Version: 2 Peter 3:5(b) states, "The earth was formed out of water and by water" (NIV). Some faiths use water especially prepared for religious purposes (holy water in some Christian denominations, Amrita in Sikhism and Hinduism). Many religions also consider particular sources or bodies of water to be sacred or at least auspicious; examples include Lourdes in Roman Catholicism, the Jordan River (at least symbolically) in some Christian churches, the Zamzam Well in Islam and the River Ganges (among many others) in Hinduism. Water is often believed to have spiritual powers. In Celtic mythology, Sulis is the local goddess of thermal springs; in Hinduism, the Ganges is also personified as a goddess, while Saraswati is referred to as a goddess in the Vedas. Water is also one of the panch tatvas (the five basic elements, the others being fire, earth, space and air). Alternatively, gods can be patrons of particular springs, rivers, or lakes: for example, in Greek and Roman mythology, Peneus was a river god, one of the three thousand sons of Oceanus and Tethys known as the Potamoi. In Islam, not only does water give life, but every life is itself made of water: "We made from water every living thing". The Ancient Greek philosopher Empedocles held that water is one of the four classical elements along with fire, earth and air; Thales of Miletus had earlier regarded it as the ylem, or basic substance of the universe. Water was considered cold and moist. In the theory of the four bodily humors, water was associated with phlegm. Water was also one of the five elements in traditional Chinese philosophy, along with earth, fire, wood, and metal. Water also plays an important role in literature as a symbol of purification. Examples include the critical importance of a river in As I Lay Dying by William Faulkner and the drowning of Ophelia in Hamlet.
wikidoc
null
/index.php/Water_Sampling_Stations
103
# Water Sampling Stations To enhance water quality monitoring in a drinking water network, sampling stations are installed along the route of the network. Water sampling stations are connected to the nearest water main and have a small sink. Water samples are analyzed for bacteria, chlorine levels, pH, inorganic and organic pollutants, turbidity, odor, and many other water quality indicators. New York City has over 800 sampling stations that are distributed based on population density, water pressure zones, proximity to water mains, and accessibility. The stations rise about 4 1/2 feet above the ground and are made of heavy cast iron.
wikidoc
null
/index.php/Water_crisis
1,674
# Water crisis The water crisis is the status of the world's water resources relative to human demand, from the 1970s to the current time. The term "water crisis" has been applied to the worldwide water situation by the United Nations and other world organizations. The major aspects of the water crisis are overall scarcity of usable water and water pollution. The Earth has a finite supply of fresh water, stored in aquifers, surface waters and the atmosphere. Sometimes oceans are mistaken for available water, but the amount of energy needed to convert saline water to potable water is prohibitive today, explaining why only a very small fraction of the world's water supply derives from desalination. Waterborne diseases and the absence of sanitary domestic water are the leading cause of death worldwide and may account for up to 80 percent of human sickness. Historically, the manifestations of the water crisis have been less pronounced, but 20th century levels of human overpopulation have revealed the limited quantity of fresh water. Drought dramatizes the underlying tenuous balance of safe water supply, but it is the imprudent actions of humans that have rendered the human population vulnerable to the devastation of major droughts. Not only are there 1.1 billion people without adequate drinking water, but the United Nations acknowledges 2.6 billion people are without adequate water for sanitation (e.g. wastewater disposal). The issues are coupled, since, without water for sewage disposal, cross-contamination of drinking water by untreated sewage is the chief adverse outcome of inadequate safe water supply. Consequently, disease and significant deaths arise from people using contaminated water supplies; these effects are particularly pronounced for children in underdeveloped countries, where 3,900 children per day die of diarrhea alone. While these deaths are generally considered preventable, the situation is considerably more complex, since the Earth is beyond its carrying capacity with respect to available fresh water. Often technology is advanced as a panacea, but the costs of technology presently exclude a number of countries from availing themselves of these solutions. If less developed countries acquire more wealth, partial mitigation will occur, but sustainable solutions must involve each region in balancing population to water resources and in managing water resources more optimally. In any case the finite nature of the water resource must be acknowledged if the world is to achieve a better balance. Vegetation and wildlife are fundamentally dependent upon adequate freshwater resources. Marshes, bogs and riparian zones are more obviously dependent upon sustainable water supply, but forests and other upland ecosystems are equally at risk of significant productivity changes as water availability is diminished. In the case of wetlands, considerable area has been simply taken from wildlife use to feed and house the expanding human population. But other areas have suffered reduced productivity from gradual diminishing of freshwater inflow, as upstream sources are diverted for human use. In seven U.S. states, over 80 percent of all historic wetlands were filled by the 1980s, when Congress acted to create a "no net loss" policy for wetlands. In Europe, extensive loss of wetlands has also occurred, with a resulting loss of biodiversity. For example, many bogs in Scotland have been drained or developed through human population expansion.
One example is the Portlethen Moss in Aberdeenshire, more than half of which has been lost; a number of species that inhabited this moss, such as the great crested newt, are no longer present. On Madagascar's central highland plateau, a massive transformation occurred that eliminated virtually all the heavily forested vegetation in the period 1970 to 2000. The slash-and-burn agriculture eliminated about ten percent of the country's total native biomass and converted it to a barren wasteland. These changes were driven by overpopulation and the need to feed poor indigenous peoples, but the adverse effects included widespread gully erosion, which in turn produced heavily silted rivers that "run red" decades after the deforestation. This eliminated a large amount of usable fresh water and also destroyed much of the riverine ecosystems of several large west-flowing rivers. Several fish species have been driven to the edge of extinction and some coral reef formations in the Indian Ocean are effectively lost. There are approximately 260 different river systems worldwide where conflicts over water cross national boundaries. While the Helsinki Rules help to interpret intrinsic water rights among countries, there are some conflicts so bitter or so related to basic survival that strife and even warfare are inevitable. In many cases water use disputes are merely an added dimension to underlying border tensions founded on other bases. The Tigris-Euphrates river system is one example where differing national interests and withdrawal rights have been in conflict. Turkey, Syria and Iraq each present valid claims to certain water uses, but the total demands on the riverine system surpass the physical constraints of water availability. As early as 1974 Iraq massed troops on the Syrian border and threatened to destroy Syria's al-Thawra dam on the Euphrates. In 1992 Hungary and Czechoslovakia took a dispute over Danube River water diversions and dam construction to the International Court of Justice. This case represents a minority of disputes where logic and jurisprudence may be the path of dispute resolution. Other conflicts, involving North and South Korea, Israel and Palestine, and Egypt and Ethiopia, may prove more difficult tests of negotiation. Many other countries of the world are severely impacted with regard to human health and inadequate drinking water, with significant populations whose only available water is contaminated. According to the California Department of Water Resources, if more supplies aren't found by 2020, the region will face a shortfall nearly as great as the amount consumed today. Los Angeles is a coastal desert able to support at most 1 million people on its own water; the Los Angeles basin is now the core of a megacity that spans 220 miles (350 km) from Santa Barbara to the Mexican border. The region's population is expected to reach 22 million by 2020. The population of California continues to grow by more than a half million a year and is expected to reach 48 million in 2030. But water shortages are likely to surface well before then. Water deficits, which are already spurring heavy grain imports in numerous smaller countries, may soon do the same in larger countries, such as China or India. Water tables are falling in scores of countries (including northern China, the US, and India) due to widespread overpumping using powerful diesel and electric pumps.
Other countries affected include Pakistan, Iran, and Mexico. This will eventually lead to water scarcity and cutbacks in the grain harvest. Even with the overpumping of its aquifers, China is developing a grain deficit. When this happens, it will almost certainly drive grain prices upward. Most of the 3 billion people projected to be added worldwide by mid-century will be born in countries already experiencing water shortages. Unless population growth can be slowed quickly by investing heavily in female literacy and family planning services, there may not be a humane solution to the emerging world water shortage. After China and India, there is a second tier of smaller countries with large water deficits: Algeria, Egypt, Iran, Mexico, and Pakistan. Four of these already import a large share of their grain. Only Pakistan remains self-sufficient. But with a population expanding by 4 million a year, it will also likely soon turn to the world market for grain. According to a UN climate report, the Himalayan glaciers that are the sources of Asia's biggest rivers (the Ganges, Indus, Brahmaputra, Yangtze, Mekong, Salween and Yellow) could disappear by 2035 as temperatures rise. Approximately 2.4 billion people live in the drainage basin of the Himalayan rivers. India, China, Pakistan, Bangladesh, Nepal and Myanmar could experience floods followed by droughts in coming decades. In India alone, the Ganges provides water for drinking and farming for more than 500 million people. Forecasts for the year 2025 state that two thirds of the world's population will be without safe drinking water and basic sanitation services. Construction of wastewater treatment plants and reduction of groundwater overdrafting appear to be obvious solutions to the worldwide problem; however, a deeper look reveals more fundamental issues in play. Wastewater treatment is highly capital intensive, restricting access to this technology in some regions; furthermore, the rapid increase in population of many countries makes this a race that is difficult to win. As if those factors were not daunting enough, one must consider the enormous costs and skill sets required to maintain wastewater treatment plants even if they are successfully built. Reduction in groundwater overdrafting is usually politically very unpopular and has major economic impacts on farmers; moreover, this strategy will necessarily reduce crop output, which is something the world can ill afford, given the population level at present. At more realistic levels, developing countries can strive to achieve primary wastewater treatment or secure septic systems, and carefully analyse wastewater outfall design to minimise impacts on drinking water and ecosystems. Developed countries can share technology better, including cost-effective wastewater and water treatment systems as well as expertise in hydrological transport modeling. At the individual level, people in developed countries can look inward and reduce overconsumption, which further strains worldwide water supplies. Both developed and developing countries can increase protection of ecosystems, especially wetlands and riparian zones. These measures will not only conserve biota but also make more effective the natural flushing and transport of the water cycle that keep water systems healthy for humans. As new technological innovations continue to reduce the capital cost of desalination, more countries are building desalination plants as a small element in addressing their water crises.
Cheaper alternatives to million-dollar desalination projects are water filters, such as reverse osmosis water processors and vapaires. These are often more viable when a shortage of funds is an obstacle.
wikidoc
null
/index.php/Water_fluoridation
1,659
# Water fluoridation Water fluoridation is the practice of adding fluoride compounds to water with the intended purpose of reducing tooth decay in the general population. Many North American and Australian municipalities fluoridate their water supplies in the belief that this practice will reduce tooth decay at a low cost. Currently 66% of United States residents on public water supplies have fluoridated water. Water purveyors typically add a fluoride in the form of sodium hexafluorosilicate or hexafluorosilicic acid, at a level between 0.7 and 1.2 ppm. These compounds originate as side products from the processing ("defluorination") of phosphate ores to prepare fertilizer, food additive etc. Fluorides such as sodium fluoride (NaF), sodium monofluorophosphate ("SMFP" or "MFP", Na2FPO3), tin(II) fluoride ("Stannous fluoride", SnF2), and amine fluorides are common ingredients in toothpaste. While the use of fluorides for prevention of dental caries was discussed in the 19th century in Europe, community water fluoridation in the United States owes its origin in part to the research of Dr. Frederick McKay, who pressed the dental community for an investigation into what was then known as "Colorado brown stains." In 1909, of the 2,945 children seen by Dr. McKay, 87.5% had some degree of stain or mottling. All the affected children were from the Pikes Peak region. Despite having a negative impact on the physical appearance of their teeth, the children with stained or mottled teeth also had fewer cavities than other children. McKay brought the problem to the attention of Dr. G.V. Black, and Black's interest into the Colorado stain led to greater interest throughout the dental profession. Initial hypotheses for the staining included poor nutrition, overconsumption of pork or milk, radium exposure, childhood diseases, or a calcium deficiency in the local drinking water. In 1931, researchers finally concluded that the cause of the Colorado stain was a high concentration of fluoride ions in the region's drinking water (Fluoride levels ranging 2-13.7 ppm) and areas with lower concentrations had no staining (1 ppm or less). Pikes Peak's rock formations contained the mineral cryolite, one of whose constituents is fluorine. As the rain and snow fell, the resulting runoff water dissolved fluoride which made its way into the water supply. Dental research then moved toward determining a safe level for fluoride in water supplies. The research had two goals: (1) to warn communities with a high concentration of fluoride of the danger, initiating a reduction of the fluoride levels in order to prevent the Colorado stain, currently known as dental fluorosis, and (2) to encourage communities with a low concentration of fluoride in drinking water to increase the fluoride levels in order to help prevent tooth decay. The classic epidemiological study to attempt to determine the optimal level of fluoride in water was led by Dr. H. Trendley Dean, a dental officer of the U.S. Public Health Service, in 1934. His research on the fluoride - dental caries relationship, published in 1942, included 7,000 children from 21 cities in Colorado, Illinois, Indiana, and Ohio. The study concluded that the optimal level of fluoride which minimized the risk of severe fluorosis but had positive benefits for tooth decay was 1 part per million (ppm). In 1939, Dr. Gerald J. 
Cox conducted laboratory tests on fluoride and suggested adding fluoride to drinking water (or other media such as milk or bottled water) in order to improve oral health. In 1937, dentists Henry Klein and Carroll E. Palmer had considered the possibility of fluoridation to prevent cavities after their evaluation of data gathered by a Public Health Service team at dental examinations of American Indian children. In a series of papers published afterwards (1937-1941), yet disregarded by his colleagues within the U.S.P.H.S., Klein summarized his findings on tooth development in children and related problems in epidemiological investigations on caries prevalence. In the mid 1940s, four widely-cited studies were conducted. The researchers investigated cities that had both fluoridated and unfluoridated water. The first pair was Muskegon, Michigan and Grand Rapids, Michigan, making Grand Rapids the first community in the world to modify its fluoride levels in drinking water to benefit dental health on January 25, 1945. Kingston, New York was paired with Newburgh, New York. Oak Park, Illinois was paired with Evanston, Illinois. Sarnia, Ontario was paired with Brantford, Ontario, Canada. The research found a decrease in the incidence of tooth decay in cities which had added fluoride to water supplies. Water fluoridation by public authorities has provoked controversy. Advocates of water fluoridation say that fluoridation is similar to fortifying salt with iodine, milk with vitamin D and orange juice with vitamin C and say it is an effective way to prevent tooth decay. Those opposed to public fluoridation of drinking water contend that water fluoridation can have harmful health effects such as dental fluorosis and bone cancer. Some opponents claim that fluoridation takes away individual choice as to the substances a person ingests and that it amounts to mass medication. Currently, there is some concern among dental professionals that the growing use of bottled water may decrease the amount of fluoride exposure people will receive. Some bottlers such as Dannon have begun adding fluoride to their water. Most bottlers, however, do not add fluoride, and fluoride concentrations are not usually labeled on the bottle. As a result, people who have fluoridated water supplies may receive less than the amounts of fluoride that fluoride proponents recommend if they choose bottled water over tap water. However, if consumers are merely choosing bottled water over other packaged drinks, such as orange juice or soda (when the latter is produced using water which has not been fluoridated), the effects may be absent, especially because consumers will still turn to tap water for cooking (i.e. preparing pasta or making bread). Water fluoridation equipment has, on occasion, malfunctioned in the United States. Perhaps the worst incident in the United States occurred in Hooper Bay, Alaska in 1992. When fluoridation equipment failed, a large amount of fluoride was released into the drinking water supply and 296 people were poisoned; 1 person died, marking the first reported death due to fluoride toxicity caused by drinking water from a community water system. As of May 2000, 42 of the 50 largest U.S. cities have water fluoridation. According to a 2002 study, 67% of Americans are living in communities with fluoridated water. As of 2001, 19 states have at least 75% of their population receiving fluoridated water. There is a CDC database for researching the water fluoridation status of neighborhood water. 
In 1998, 70% of people polled in a survey conducted by the American Dental Association believed community water should be fluoridated, with 18% disagreeing and the rest undecided. The issue of whether or not to fluoridate water supplies occasionally arises in local governments. For example, on November 8, 2005, citizens of Mt. Pleasant, Michigan voted 63% to 37% in favor of reinstating fluoridation in public drinking water after a 2004 ballot initiative ceased water fluoridation in the city. At the same time, voters in Xenia, Ohio; Springfield, Ohio; Bellingham, Washington; and Tooele City, Utah all rejected water fluoridation. The cost of fluoridating water supplies in the United States has been researched. In cities with a population of over 50,000 people, fluoridation costs 31 cents per person per year. The cost rises to $2.12 per person in cities with a population below 10,000. Implementation of fluoridation usually lies with provincial or city governments. Brantford, Ontario became the first city in Canada to fluoridate its water supplies in 1945. In 1955, Toronto approved water fluoridation, but delayed implementation of the program until 1963 due to a campaign against fluoridation by broadcaster Gordon Sinclair. The city continues to fluoridate its water today. There have, however, been some recent decreases in the fluoride level used, from 1 mg per litre to between 0.6 and 0.8 mg per litre. Historically, British Columbia has been the province with the lowest percentage of its population receiving fluoridated water. Montreal may be the last major city in Canada which does not fluoridate its water supplies. France does not fluoridate its water supply: "[F]luoride chemicals are not included in the list [of 'chemicals for drinking water treatment']. This is due to ethical as well as medical considerations" (Directeur de la Protection de l'Environnement, August 25, 2000). However, fluoridated salt is widely available. In Switzerland, two fluoridation programmes had operated in tandem since 1962: water fluoridation in the City of Basel, and salt fluoridation in the rest of Switzerland (around 83% of domestic salt sold had fluoride added). However, it became increasingly difficult to keep the two programmes separate. As a result, some of the population of Basel were assumed to use both fluoridated salt and fluoridated water. In order to correct that situation, in April 2003 the State Parliament agreed to cease water fluoridation and officially expand salt fluoridation to Basel. Australia has fluoridation in all but one state, Queensland, in which water fluoridation is under local government control. The City of Geelong, west of Melbourne, does not fluoridate its water supplies, despite the fact that all of Melbourne's water is fluoridated. Many regional centres in Queensland do fluoridate their water supply; however, Brisbane, the state capital, currently does not add fluoride to its drinking water. The first town to fluoridate its water supply in Australia was Beaconsfield, Tasmania, in 1953. New Zealand has fluoridated nearly all water supplies except those in remote areas. Water fluoridation in New Zealand first began in Hastings in 1954. A Commission of Inquiry was held in 1957, and use then expanded rapidly in the mid-1960s. In Brazil, about 45% of cities have a fluoridated water supply. Government studies reported a decrease in cavities in the affected population of between 40% and 80%.
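To put the concentrations mentioned in this article into perspective, 1 ppm of fluoride in water corresponds to 1 mg per litre, so the fluoride a person ingests from tap water is simply the concentration multiplied by the volume drunk. The sketch below runs that arithmetic for a few illustrative concentrations; the daily drinking volumes are assumptions chosen for illustration, not recommendations.

```python
def daily_fluoride_mg(concentration_mg_per_l, litres_per_day):
    """Daily fluoride intake from drinking water alone, in milligrams.

    1 ppm of fluoride in water is equivalent to 1 mg per litre, so the
    intake is just concentration multiplied by the volume consumed.
    """
    return concentration_mg_per_l * litres_per_day

# Concentrations spanning the 0.6-1.2 mg/L range discussed above; volumes are illustrative.
for concentration in (0.7, 1.0, 1.2):
    for litres in (1.0, 2.0):
        intake = daily_fluoride_mg(concentration, litres)
        print(f"{concentration} mg/L x {litres} L/day -> {intake:.2f} mg of fluoride per day")
```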
wikidoc
null
/index.php/Water_fluoridation_controversy
4,794
# Water fluoridation controversy Water fluoridation controversy refers to debate surrounding the addition of fluoride to public water supplies. Calcium fluoride (CaF2) occurs naturally in the ground water in certain areas of the world. When water is artificially fluoridated, sodium fluoride (NaF), sodium silicofluoride (SFS) or fluosilicic acid (FSA) is added to raise the fluoride level to a range between .7 and 1.2 parts per million (ppm). Despite the current stance of the medical and dental research associations, opponents charge that releasing fluoride compounds into municipal water supplies infringes on individual choice as to what a person ingests. They say water fluoridation is a scheme to dispose of industrial waste and this amounts to a mass dissemination of toxic substances. Advocates of water fluoridation say that fluoridation is similar to fortifying salt with iodine, milk with vitamin D and orange juice with vitamin C and say it is an effective way to prevent tooth decay and improve oral health over a lifetime, for both children and adults. Opponents counter that there is no U.S. RDA (recommended dietary allowance) for fluoride and claim industrial fluorosilicates are serious neurotoxic and mutagenic health hazards, even at the 1 part per million level deemed "optimal" by pro-fluoridation groups. The EPA Headquarters Union of Scientists say recent, peer-reviewed studies that document bone fractures, pediatric osteosarcoma, genetic mutations, lowered IQ, decreased kidney, thyroid and pineal gland functions, arthritis, osteoporosis and dental fluorosis requires an immediate halt to using the nation's drinking water reservoirs as toxic waste disposal sites. Sodium fluoride is also one of the main ingredients in fungicides, rat and possum poison (sodium fluoroacetate, also known as "1080") and Sarin nerve gas (isopropyl-methyl-phosphoryl-fluoride). Section 3: Hazards Identification: DEVELOPMENTAL TOXICITY: The substance may be toxic to kidneys, lungs, the nervous system, heart, gastrointestinal tract, cardiovascular system, bones, teeth. Repeated or prolonged exposure to the substance can produce target organs damage. Repeated exposure to a highly toxic material may produce general deterioration of health by an accumulation in one or many human organs. Over 90% of the fluoride compounds added to water are fluorosilicic acid (FSA) or sodium fluorosilicate (SFS), which are industrial byproducts of fluorine gas concentrates that are captured in pollution scrubbers during the manufacture of phosphate fertilizer. The phosphate ores used to make fertilizer are a natural source of radiation. During the manufacturing process, trace amounts of heavy metals, including lead, arsenic, mercury and radioactive nuclides such as uranium and radium are captured in the pollution scrubber and carried into the fluorosilicic acid concentrate. The radionuclides radon-222 and polonium-210 readily combine with fluorine, which is the most reactive element. As little as 0.03 microcuries (6.8 trillionths of a gram) of polonium-210 can be carcinogenic to humans. In addition, lead is absorbed like calcium in the body, where it can be stored in the bones for years before decaying and triggering a release of alpha radiation. Unless tests for specific isotopes are performed, no one would know that a transmutation had occurred from lead-214 to bismuth-214 and then to the highly radioactive polonium-214. 
Despite pro-fluoridation group's claims that fluorosilicic acid is comparable to a "vitamin" or "nutrient", the manufacturer's data sheet describes FSA as a DOT Class 6.1 Poisonous/Toxic substance, subject to hazmat regulations, with a maximum content of .020% heavy metals (as lead, or 1 part in 5000). In contrast, the EPA has a zero-level goal for lead in drinking water, with a new MCL (maximum contaminant level) of five parts per billion. In 2005, eleven EPA employee unions representing over 7000 environmental and public health professionals of the Civil Service called for a moratorium on drinking water fluoridation programs across the country and asked EPA management to recognize fluoride as posing a serious risk of causing cancer in people. The unions acted following revelations of an apparent cover-up of evidence from Harvard School of Dental Medicine linking fluoridation with an elevated risk of osteosarcoma in boys, a rare but fatal bone cancer. In addition, over 1000 health industry professionals, including doctors, dentists, scientists and researchers from a variety of disciplines are calling for an end to water fluoridation in an online petition to Congress. Their petition highlights eight recent events that they say mandates a moratorium on water fluoridation, including a 500-page review of fluoride's toxicology that was published in 2006 by a distinguished panel appointed by the National Research Council of the National Academies. While the NRC report did not specifically examine artificially fluoridated water, it concluded that the US Environmental Protection Agency's safe drinking water standard of 4 parts per million (ppm) for fluoride is unsafe and should be lowered. Despite over 60 years of fluoridation without a single double-blind study of fluoride's effectiveness and many basic research questions that have never been addressed, the panel reviewed a large body of literature in which fluoride has a statistically significant association with a wide range of adverse effects. Several prominent dental researchers and government advisors who were leaders of the pro-fluoridation movement have announced reversals of their former positions after they concluded that water fluoridation is not an effective means of reducing dental caries and that it poses serious risks to human health. The late Dr. John Colquhoun was Principal Dental Officer of Auckland, New Zealand. In an article titled, "Why I changed my mind about water fluoridation", he published his reasons for changing sides. Dr. Hardy Limeback, BSc, PhD, DDS was one of the 12 scientists who served on the National Academy of Sciences panel that issued the aforementioned report, Fluoride in Drinking Water: A Scientific Review of the EPA's Standards. Dr. Limeback is an associate professor of dentistry and head of the preventive dentistry program at the University of Toronto. He detailed his concerns in an April 2000 letter titled, "Why I am now officially opposed to adding fluoride to drinking water". In a presentation to the California Assembly Committee of Environmental Safety and Toxic Materials, Dr. Richard Foulkes, B.A., M.D., former special consultant to the Minister of Health of British Columbia, revealed: "The [water fluoridation] studies that were presented to me were selected and showed only positive results. Studies that were in existence at that time that did not fit the concept that they were "selling," were either omitted or declared to be "bad science." 
The endorsements had been won by coercion and the self-interest of professional elites. Some of the basic "facts" presented to me were, I found out later, of dubious validity. We are brought up to respect these persons in whom we have placed our trust to safeguard the public interest. It is difficult for each of us to accept that these may be misplaced." Despite various concerns by private citizens, government agencies such as the CDC and WHO continue to support water fluoridation as being a safe and effective means of reducing dental decay. Some groups consider fluoride neither a vitamin nor an essential nutrient and claim that this is also the view held by the medical community. Fluoridation opponents believe that excellent dental health can be maintained through alternative methods, such as modifying diet by consuming less sugar, chewing xylitol gum as is done in Europe, and good dental hygiene through flossing and brushing the teeth, even with fluoride toothpaste. They argue that since the effect of fluoride is primarily topical, there is no need to actually consume it. Furthermore, they contend that the benefits of fluoridated water do not outweigh the costs of systemic harm to the body, and consequently that there is no need to fluoridate community drinking water. Despite these claims, dental research has shown that fluoride has a positive effect on dental health. During tooth development, fluoride binds to the hydroxyapatite crystals present in enamel and makes the enamel more resistant to demineralization by acids. As a result, some organizations, such as the American Dental Hygiene Association, classify fluoride as a nutrient necessary for proper tooth development. In addition, organizations including the CDC and WHO promote increasing the accessibility of fluoridated water. Water fluoridation is prevalent in the United States, whereas most other developed nations previously fluoridated their water but have since stopped or banned the practice. In spite of this, the prevalence of dental decay has decreased in both Western Europe and the United States. Some countries had water fluoridation but then abruptly stopped the practice. These countries, including the former East Germany, Cuba, and Finland, have continued to see drops in the incidence of tooth decay. Based on this evidence, opponents conclude that water fluoridation is unnecessary. Though water fluoridation is promoted by many health organizations and is considered the least costly method of dispersing fluoride, other methods of dispersal are possible. In areas with complex water sources, water fluoridation is more difficult and more costly, so other methods of fluoride delivery are supported in those cases. The World Health Organization is currently assessing the effects of affordable fluoridated toothpaste, milk fluoridation and salt fluoridation in Africa, Asia, and Europe. Moreover, a major concern of health organizations is the incidence of dental fluorosis, a sign of overexposure to fluoride. In many instances, natural fluoride levels in water are much higher than desired. These areas do not need fluoride added to water supplies, and health organizations instead endorse providing alternate water sources or adjusting the fluoride levels to deliver the proper amount for dental health.
Frequently, opponents point to a study by the National Institute of Dental Research showing little difference in tooth decay rates among children in fluoridated and non-fluoridated communities. In the study's results, the difference between children exposed to water fluoridation and those who were not was very small, between 0.12 and 0.30 DMFS (Decayed, Missing and Filled Surfaces). Opponents also argue that in the instances where fluoride prevents tooth decay, the effects are merely topical. Therefore, they conclude, fluoridating water is unnecessary and ineffective. Instead, they argue, direct application of fluoride to the teeth, as done in dental offices and with fluoridated toothpastes, should be the recommended method. Opponents point out that dental decay continues to exist in water-fluoridated communities. They reason that if fluoride were effective, there would be no more tooth decay. While, in theory, the poorest members of society would be aided the most by fluoridated water, baby bottle tooth decay (BBTD) and tooth decay in general are still prevalent in those social groups. Opponents conclude that, in light of the continuing dental health problem, water fluoridation is unable to successfully increase health standards and thus should not be used. Finally, opponents argue that the general decline of tooth decay is the result of factors besides water fluoridation, including fluoride toothpaste, improved diets, and improved general and dental health overall. The Centers for Disease Control and Prevention published in its Morbidity and Mortality Weekly Report (MMWR) that starting or continuing water fluoridation decreased the incidence of tooth decay by 29%, and that stopping water fluoridation increased the incidence of tooth decay in some communities. Other organizations also see a clear link between desired fluoride levels in water and a decrease in tooth decay. In addition, since oral health is affected by many factors, fluoride alone would not be able, nor would it be expected, to eradicate the disease. The social groups most likely to benefit from water fluoridation are those living in poorer conditions, and water fluoridation programs may be an important factor in decreasing dental health disparities. Nonetheless, it is understood that these communities suffer from various problems which impede oral health, such as lack of access to dental care and poorer oral hygiene education. Water fluoridation is only a single factor in improving dental health. There are some opponents of fluoridation who believe fluoride is a poison that can lead to death or, more commonly, dental fluorosis in instances of overdose. They argue that having a lethal chemical in the water is reckless and leads to many health problems in the general public. These persons point to research which they say supports the notion that fluoride causes chromosomal damage and interferes with DNA repair. They point to animal studies that they say demonstrate that rats fed for one year with 1 ppm fluoride in their water had detrimental changes to their kidneys and brains, an increased uptake of aluminum in the brain, and the formation of beta amyloid deposits, a characteristic of Alzheimer's disease. In animal studies, fluoride has been shown to inhibit melatonin production and promote precocious puberty. Fluoride may have an analogous inhibitory effect on human melatonin production, as fluoride accumulates readily in the human pineal gland, the brain organ responsible for melatonin synthesis.
Further, it is argued by some opponents that fluoride can weaken the immune system, leaving people vulnerable to the development of cancer and AIDS. These groups further emphasize that an overdose of fluoride is associated with liver damage, impaired kidney function, and fluorosis in children. At high doses, fluoride has many side effects. Animal studies demonstrate that fluoride can damage the male reproductive system in various species. Consequently, fluoride is considered dangerous by these groups. Advocates of water fluoridation agree that fluoride in high concentrations produces harmful effects in the body. Nonetheless, they argue that almost any substance can be harmful, because toxicity depends on the amount of exposure. In defending water fluoridation, the American Dental Association points out that vitamin A, vitamin D, iron, iodine, aspirin, and water are potentially harmful if given in certain amounts. As is true for all vitamins and minerals, recommended dosages for fluoride represent levels which maximize health benefits and minimize adverse effects. The greatest concern with fluoride overexposure is dental fluorosis. Fluorosis is undesirable because, in severe cases, it discolors teeth, causes surface changes to the enamel, and makes oral hygiene more difficult. Government agencies, such as the Centers for Disease Control and Prevention, keep records on the prevalence of fluorosis in the general public. Also of concern, skeletal fluorosis is a disease in which fluoride is deposited in bone, causing joint stiffness, joint pain, and sometimes changes in bone shape. For skeletal fluorosis to occur, chronic, high-level exposure to fluoride is required. A mild form of skeletal fluorosis, osteosclerosis, is seen when levels of fluoride reach 5 parts per million (ppm) and the time of exposure lasts for 10 years. In order to best prevent fluorosis, health organizations have created guidelines restricting the amount of fluoride exposure. The United States Environmental Protection Agency limits the maximum amount of fluoride in drinking water to 4.0 milligrams per liter of water and recommends that water supplies contain between 0.7 and 1.2 milligrams of fluoride per liter. The World Health Organization cautions that fluoride levels above 1.5 milligrams per liter carry a risk of fluorosis. When fluoride levels in water are low (usually below 0.6 ppm), fluoride supplements are sometimes prescribed to encourage healthy dental development. There are accepted guidelines on the amount of fluoride to prescribe, which depend on the fluoride levels in the drinking water and on the age of the child. Moreover, health organizations have affirmed the currently accepted belief that recommended levels of fluoride do not contribute to the many diseases that water fluoridation detractors accuse fluoride of causing. The Centers for Disease Control and Prevention and the National Cancer Institute have both issued statements that water fluoridation is not believed to cause osteosarcomas. Cancer in general is not believed to be caused by water fluoridation. There is no clear link between Alzheimer's disease and water fluoridation. A study in 1998 suggested a possible relationship between fluoride exposure and Alzheimer's disease. Research groups point out that the study contained methodological limitations, which prevent a definitive conclusion on the subject.
As a result, research and health agencies currently believe fluoride is not a risk factor for Alzheimer's disease, and instead age and family history are the most important risk factors. Moreover, there is some research that suggests Alzheimer's disease can be prevented with water fluoridation because of the competition between aluminum and fluoride absorption. Nonetheless, this research is also limited by design and no definitive conclusion of this effect can be made. Other health concerns, such as kidney disease, Down syndrome, lead poisoning, heart disease, decreased fertility rates, and inhibition of biologic enzymes, are not believed to be attributed to water fluoridation. Many fluoridation opponents rely on experts opposing water fluoridation to validate their argument of the dangers of fluoride. People, such as scientists and Nobel prize winners, are exemplified as a large knowledgeable group that have stated their opposition against water fluoridation. In response, scientific and health organizations criticize opponents of water fluoridation for trying to engage in "polling practices" with research. When a group opposing water fluoridation claims an award-winning researcher or dental expert agrees with them, the argument is supposed to be more convincing to the general public. Researchers emphasize that voting or polling is not how scientific progress is made. Thorough review of methodology and design of multiple studies over time lead researchers to conclusions. Even in the critical analysis of these studies, content is the focus, rather than the researcher who led the study. Another criticism of water fluoridation opponents given is their reference to research seeming to support their view. Generally, those studies are criticized by the majority of scientific researchers on basic principles, such as the methodology used. More problematic is the accusation that some anti-fluoridation research is published in journals, such as "Fluoride", that are deceptively made to appear peer-reviewed. One aspect of opposition to water fluoridation regards the social or political implications of adding fluoride to public water supplies. Setting aside the claim that water fluoridation may improve dental health, such an act would violate an individual's choice to pursue free choice of, or form of, medical treatment and it is argued that water fluoridation is "compulsory mass medication" because it does not allow proper consent. It is also argued that, because of the negative health effects of fluoride exposure, mandatory fluoridation of public water supplies is a "breach of ethics" and a "human rights violation." Litigation, both pro and con, has been a frequent outcome of forced water fluoridation. Many advocates of fluoridation do not consider it a violation of people's right to consent to medical treatment. They usually argue that fluoridation is not a form of mass medication because fluoride is naturally present in all water systems. Opponents argue that the form of fluoride found in naturally fluoridated water supplies is not the same as the form used to artificially fluoridate water. Likewise, opponents argue that the pharmacy grade fluoride used in many studies to support fluoride as a tooth decay preventative is not the grade used to fluoridate water. Frequently, those who promote water fluoridation make the comparison to the fortification of other types of foods, such as adding vitamins to breakfast cereals and baby foods. 
In addition, proponents propose that preventing broad, easy access to fluoride is unethical. Since the populations which benefit most from water fluoridation are children and those in poorer communities, fluoridation is considered an avenue to relieve some of the health disparities between socio-economic groups. Fluoridation is defended further by its relative low cost. The Canadian Task Force On Preventive Health Care describes water fluoridation as "the single most effective, equitable and efficient means of preventing coronal and root dental caries." In the United States, the cost can be as low as 31 cents per person, per year. As a result, many health organizations defend fluoridation and do not consider it a violation of ethical principles. Some opponents point to a government conspiracy that has modified scientific research to further its own political goal. The particular conspiracy involves the secret development of the atomic bomb during World War II. The argument usually involves characterizing research as flawed or edited for the public in order to avoid public concern over military research. As some have put it, "The science of fluoridating public drinking water systems has been, from day one, shoddy at best . . . . the basis of that science was rooted in protecting the U.S. Atomic bomb program from litigation." Other conspiracy theories involve large industrial companies wanting to rid themselves of fluorine "waste products". Some argue that fluoride is a waste product that is unusable and expensive to dispose of properly. Because of this expense, industrial companies desiring to protect their profits release "millions of tons of waste fluoride into the environment." As a result, these opponents of water fluoridation say, "it is now clear that the one utterly relentless force behind fluoridation is American 'big industry' ". In spite of this, a large majority of government agencies and medical organizations support water fluoridation in locations needing fluoride supplementation and agree that it is a safe practice. (See Medical approval for a list of health organizations.) The Centers for Disease Control and Prevention (CDC) has listed water fluoridation as one of the ten greatest achievements in public health of the 20th century. In 2000, a report by the Surgeon General of the United States titled "Oral Health in America" stated, "Community water fluoridation remains one of the great achievements of public health in the twentieth century." Various international groups, including the World Health Organization (WHO) and the International Association for Dental Research (IADR) support water fluoridation as a safe and effective method to fight tooth decay. Fluoridation has spawned many court cases. Anti-fluoride activists have sued municipalities, claiming that their rights to consent to medical treatment, privacy, and due process are infringed by mandatory water fluoridation. Individuals have sued municipalities for a number of illnesses that they blamed on fluoridation of the city's water supply. A substantial majority of courts have held in favor of cities in such cases, finding no or only a tenuous connection between health problems and widespread water fluoridation. To date, no federal appellate court or state court of last resort (i.e., state supreme court) has found water fluoridation to be unlawful. A flurry of cases were heard in numerous state courts in the 1950s during the early years of water fluoridation. 
State courts consistently held in favor of allowing fluoridation to continue, analogizing fluoridation to mandatory vaccination and the use of other chemicals to clean the public water supply, both of which had a long-standing history of acceptance by courts. In 1952, a Federal Regulation was adopted that stated in part, "The Federal Security Agency will regard water supplies containing fluorine, within the limitations recommended by the Public Health Service, as not actionable under the Federal Food, Drug, and Cosmetic Act." The Supreme Court of Oklahoma analogized water fluoridation to mandatory vaccination in a 1954 case. The court noted, "we think the weight of well-reasoned modern precedent sustains the right of municipalities to adopt such reasonable and undiscriminating measures to improve their water supplies as are necessary to protect and improve the public health, even though no epidemic is imminent and no contagious disease or virus is directly involved . . . . To us it seems ridiculous and of no consequence in considering the public health phase of the case that the substance to be added to the water may be classed as a mineral rather than a drug, antiseptic or germ killer; just as it is of little, if any, consequence whether fluoridation accomplishes its beneficial result to the public health by killing germs in the water, or by hardening the teeth or building up immunity in them to the bacteria that causes caries or tooth decay. If the latter, there can be no distinction on principle between it and compulsory vaccination or inoculation, which, for many years, has been well-established as a valid exercise of police power." In the 1955 case Froncek v. City of Milwaukee, the Wisconsin Supreme Court affirmed the ruling of a circuit court which held that "the fluoridation is not the practice of medicine, dentistry, or pharmacy, by the City" and that "the legislation is a public health measure, bearing a real, substantial, and reasonable relation to the health of the city." The Supreme Court of Ohio, in 1955's Kraus v. City of Cleveland, said, "Plaintiff's argument that fluoridation constitutes mass medication, the unlawful practice of medicine and adulteration may be answered as a whole. Clearly, the addition of fluorides to the water supply does not violate such principles any more than the chlorination of water, which has been held valid many times." As cases continued to be brought in state courts, a general consensus developed that fluoridation, at least from a legal standpoint, was acceptable. In 1973's Beck v. City Council of Beverly Hills, the California Court of Appeal, Second District, said, "Courts through the United States have uniformly held that fluoridation of water is a reasonable and proper exercise of the police power in the interest of public health. The matter is no longer an open question." (citations omitted) Though courts have consistently rejected arguments against fluoridation, advocates continue to challenge the spread of fluoridation. For instance, in 2002, the city of Watsonville, California chose to disregard a California law mandating fluoridation of water systems with 10,000 or more hookups, and the dispute between the city and the state ended up in court. The trial court and the intermediate appellate court ruled in favor of the state and its fluoridation mandate, however, and the Supreme Court of California declined to hear the case in February of 2006. Since 2000, courts in Washington, Maryland, and Texas have reached similar conclusions. 
In Ryan v. Attorney General (1965), the Irish Supreme Court held that water fluoridation did not infringe the plaintiff's right to bodily integrity. However, the court found that such a right to bodily integrity did exist, despite the fact that it was not explicitly mentioned in the Constitution of Ireland, thus establishing the doctrine of unenumerated rights in Irish constitutional law. Although Japan, Germany, Sweden, the Netherlands, Czechoslovakia, Cuba, and the former Soviet Union have stopped water fluoridation, and some reputable scientific organizations oppose water fluoridation, more than 100 national and international health service agencies and professional organizations see benefits in community water fluoridation as a means of preventing dental decay. They include: Only one full systematic review has been carried out to date. In the United Kingdom, the Department of Health funded a systematic review in 1999, which examined all of the evidence published so far on the efficacy and safety of adding fluoride to drinking water. This work was carried out at the University of York. They concluded: The current confused state can be better understood in light of the finding of the House of Commons Health Committee's inquiry into 'The Influence of the Pharmaceutical Industry' that medical students, both in the UK and abroad, are seldom taught how to evaluate a research paper properly. The committee concluded: "This implies a major deficiency in the education of healthcare professionals." We are concerned about the continuing misinterpretations of the evidence and think it is important that decision makers are aware of what the review really found. As such, we urge interested parties to read the review conclusions in full at: In order to fully prove and understand the nature of fluoridation risks (including the range of doses that can cause the effects, and how these doses vary based on environmental, genetic, and dietary factors), more research is required. But is it ethical to continue exposing entire populations to fluoride in their water while additional long-term studies are carried out to clarify the risks? That is the crux of the question posed by an analysis published in the March 2006 issue of the Journal of Evidence Based Dental Practice. The analysis, written by Joel Tickner and Melissa Coffin, examines the water fluoridation controversy in the context of the "precautionary principle." The precautionary principle has become a core guiding principle of environmental health regulations in Europe and is based on the notion that "if there is uncertainty, yet credible scientific evidence or concern of threats to health, precautionary measures should be taken. In other words, preventive action should be taken on early warnings even though the nature and magnitude of the risk are not fully understood." Without expressing any personal opinions about water fluoridation, Tickner and Coffin note that "The need for precaution arises because the costs of inaction in the face of uncertainty can be high, and paid at the expense of sound public health."
wikidoc
null
/index.php/Water_model
1,028
# Water model In computational chemistry, classical water models are used for the simulation of water clusters, liquid water, and aqueous solutions with explicit solvent. These models use the approximations of molecular mechanics. Many different models have been proposed; they can be classified by the number of points used to define the model (atoms plus dummy sites), whether the structure is rigid or flexible, and whether the model includes polarization effects. The simplest water models treat the water molecule as rigid and rely only on non-bonded interactions. The electrostatic interaction is modeled using Coulomb's law and the dispersion and repulsion forces using the Lennard-Jones potential. The potential for models such as TIP3P and TIP4P is represented by <math>E_{ab} = \sum_{i}^{\text{on } a} \sum_{j}^{\text{on } b} \frac{k_C q_i q_j}{r_{ij}} + \frac{A}{r_{\text{OO}}^{12}} - \frac{B}{r_{\text{OO}}^{6}}</math>, where kC, the electrostatic constant, has a value of 332.1 Å·kcal/mol in the units commonly used in molecular modeling; qi are the partial charges relative to the charge of the electron; rij is the distance between two atoms or charged sites; and A and B are the Lennard-Jones parameters. The charged sites may be on the atoms or on dummy sites (such as lone pairs). In most water models, the Lennard-Jones term applies only to the interaction between the oxygen atoms. The figure below shows the general shape of the 3- to 6-site water models. The exact geometric parameters (the OH distance and the HOH angle) vary depending on the model. The simplest models have three interaction sites, corresponding to the three atoms of the water molecule. Each atom gets assigned a point charge, and the oxygen atom also gets the Lennard-Jones parameters. The 3-site models are very popular for molecular dynamics simulations because of their simplicity and computational efficiency. Most models use a rigid geometry matching the known geometry of the water molecule. An exception is the SPC model, which assumes an ideal tetrahedral shape (HOH angle of 109.47°) instead of the observed angle of 104.5°. The SPC/E model adds an average polarization correction to the potential energy function, <math>E_{pol} = \tfrac{1}{2} \sum_i \frac{(\mu - \mu^0)^2}{\alpha_i}</math>, where μ is the dipole of the effectively polarized water molecule (2.35 D for the SPC/E model), μ0 is the dipole moment of an isolated water molecule (1.85 D from experiment), and αi is an isotropic polarizability constant, with a value of 1.608 × 10−40 F·m². Since the charges in the model are constant, this correction just results in adding 1.25 kcal/mol (5.22 kJ/mol) to the total energy. The SPC/E model results in a better density and diffusion constant than the SPC model. The TIP3P model implemented in the CHARMM force field is a slightly modified version of the original. The difference lies in the Lennard-Jones parameters: unlike TIP3P, the CHARMM version of the model places Lennard-Jones parameters on the hydrogen atoms. The charges are not modified. The 4-site models place the negative charge on a dummy atom (labeled M in the figure) placed near the oxygen along the bisector of the HOH angle. This improves the electrostatic distribution around the water molecule. The first model to use this approach was the Bernal-Fowler model published in 1933, which may also be the earliest water model. However, the BF model does not reproduce the bulk properties of water (such as density and heat of vaporization) well, and is therefore only of historical interest. This is a consequence of the parameterization method; newer models, developed after modern computers became available, were parameterized by running Metropolis Monte Carlo or molecular dynamics simulations and adjusting the parameters until the bulk properties are reproduced well enough. 
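As a rough illustration of how a rigid, fixed-charge potential of this kind is evaluated, the following sketch computes the interaction energy of two 3-site water molecules. The partial charges and Lennard-Jones coefficients are approximate TIP3P-like values included only for demonstration; they are assumptions for illustration, not an authoritative parameter set.

```python
import numpy as np

# Approximate TIP3P-like parameters (illustrative assumptions only).
K_C = 332.1            # electrostatic constant, kcal*Angstrom/(mol*e^2)
CHARGE = {"O": -0.834, "H": 0.417}   # partial charges, in units of e
A = 582.0e3            # Lennard-Jones A coefficient, kcal*Angstrom^12/mol
B = 595.0              # Lennard-Jones B coefficient, kcal*Angstrom^6/mol

def pair_energy(water_a, water_b):
    """Interaction energy (kcal/mol) between two rigid 3-site waters.

    Each water is a dict mapping site labels ("O", "H1", "H2") to
    coordinates in Angstroms.
    """
    energy = 0.0
    # Coulomb term: every charged site of one molecule with every site
    # of the other.
    for label_a, pos_a in water_a.items():
        for label_b, pos_b in water_b.items():
            r = np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b))
            energy += K_C * CHARGE[label_a[0]] * CHARGE[label_b[0]] / r
    # Lennard-Jones term: oxygen-oxygen only, as in most water models.
    r_oo = np.linalg.norm(np.asarray(water_a["O"]) - np.asarray(water_b["O"]))
    energy += A / r_oo**12 - B / r_oo**6
    return energy

# Two waters with an O-O separation of about 2.8 Angstroms.
w1 = {"O": [0.00, 0.00, 0.0], "H1": [0.96, 0.00, 0.0], "H2": [-0.24, 0.93, 0.0]}
w2 = {"O": [2.80, 0.00, 0.0], "H1": [3.76, 0.00, 0.0], "H2": [2.56, 0.93, 0.0]}
print(pair_energy(w1, w2))
```

A production molecular dynamics code would evaluate this same expression for every molecule pair within a cutoff, which is why the per-pair cost discussed below matters.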
The TIP4P model, first published in 1983, is widely implemented in computational chemistry software packages and often used for the simulation of biomolecular systems. There have been subsequent reparameterizations of the TIP4P model for specific uses: the TIP4P-Ew model, for use with Ewald summation methods; the TIP4P/Ice, for simulation of solid water ice; and TIP4P/2005, a general parameterization for simulating the entire phase diagram of water. The 5-site models place the negative charge on dummy atoms (labeled L) representing the lone pairs of the oxygen atom, with a tetrahedral-like geometry. An early model of this type was the BNS model of Ben-Naim and Stillinger, proposed in 1971, soon succeeded by the ST2 model of Stillinger and Rahman in 1974. Mainly due to their higher computational cost, five-site models were not developed much until 2000, when the TIP5P model of Mahoney and Jorgensen was published. When compared with earlier models, the TIP5P model results in improvements in the geometry for the water dimer, a more "tetrahedral" water structure that better reproduces the experimental radial distribution functions from neutron diffraction, and the temperature of maximum density of water. The TIP5P-E model is a reparameterization of TIP5P for use with Ewald sums. Note, however, that the BNS and ST2 models do not use Coulomb's law directly for the electrostatic terms, but a modified version that is scaled down at short distances by multiplying it by a switching function S(r), which rises smoothly from zero below a lower cutoff distance to one beyond an upper cutoff distance. A 6-site model that combines all the sites of the 4- and 5-site models was developed by Nada and van der Eerden. It was found to reproduce the structure and melting of ice better than other models. The computational cost of a water simulation increases with the number of interaction sites in the water model. The CPU time is approximately proportional to the number of interatomic distances that need to be computed. For the 3-site model, 9 distances are required for each pair of water molecules (every atom of one molecule against every atom of the other molecule, or 3 × 3). For the 4-site model, 10 distances are required (every charged site with every charged site, plus the O-O interaction, or 3 × 3 + 1). For the 5-site model, 17 distances are required (4 × 4 + 1). Finally, for the 6-site model, 26 distances are required (5 × 5 + 1). When using rigid water models in molecular dynamics, there is an additional cost associated with keeping the structure constrained, using constraint algorithms (although with bond lengths constrained it is often possible to increase the time step).
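The per-pair distance counts quoted above follow a simple pattern: the number of charged sites squared, plus one extra O-O distance whenever the Lennard-Jones oxygen is not itself a charged site. A small sketch of that bookkeeping, using only the site counts given in the text:

```python
def distances_per_pair(charged_sites, separate_lj_oxygen=True):
    """Interatomic distances needed per water-water pair.

    charged_sites: number of charged sites on each molecule.
    separate_lj_oxygen: True when the Lennard-Jones centre (the oxygen)
    carries no charge, adding one extra O-O distance.
    """
    return charged_sites ** 2 + (1 if separate_lj_oxygen else 0)

# 3-site model: the oxygen is both charged and the LJ centre -> 3 * 3 = 9
print(distances_per_pair(3, separate_lj_oxygen=False))  # 9
print(distances_per_pair(3))                            # 4-site: 10
print(distances_per_pair(4))                            # 5-site: 17
print(distances_per_pair(5))                            # 6-site: 26
```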
wikidoc
null
/index.php/Water_of_Life_(Dune)
282
# Water of Life (Dune) Found on the planet Arrakis, the Water of Life is categorized as an illuminating poison, or an awareness spectrum narcotic. Specifically, it is the bilious liquid exhalation of a sandworm produced at the moment of its death from drowning. In its raw state it is toxic to anyone but a Reverend Mother. A Reverend Mother can metabolize the Water within her body, then expel it to transform the raw Water into a form consumable by others. The "changed" Water is the narcotic used in the Fremen Sietch orgy. The Bene Gesserit test their acolytes to see if they can be Reverend Mothers by feeding them the Water of Life in a ritual known as the spice agony. If the adept is qualified, she transmutes the poison safely within and becomes a Reverend Mother. If not, she dies. Paul Atreides uses the Water of Life to fulfill his destiny as the Kwisatz Haderach. Paul also perceives that the Water of Life could be used to catastrophic effect in combination with a pre-spice mass, as the "blow" reaction which formed the melange would transform the Water of Life into the Water of Death, resulting in the destruction of the sandworms and the end of the spice-cycle ecosystem. With the power to destroy all the spice at its source, the Fremen were thus able to exercise a degree of control over both the Bene Gesserit and the Spacing Guild. Since there were no sandworms after the transformation of Dune to Arrakis during and after the reign of Emperor Leto, it appears that the Water of Life was replaced by spice essence in Bene Gesserit and Fremen rituals.
wikidoc
null
/index.php/Water_pipe_percolator
208
# Water pipe percolator A water pipe percolator is a small additional sub-chamber within the main chamber of a water pipe that provides in-line smoke-water interaction via heat exchange and dissolution. A percolator works by utilizing a pressure differential between its bottom and top in/outlets. Reduced pressure at the outlet end is usually provided by the user's lungs. The fluid at the inlet (i.e., a smoke, vapor, and air mixture) is directed to the bottom of a column of water, where the pressure differential causes the inlet fluid to pass through the water in small pockets (bubbles), and then rise to the outlet. Branched percolators utilize between 1 and 4 individual flow paths for the smoke to enter the water. This design is more recent than the cylindrical design. Branched percolators are sometimes referred to as "tree" percolators due to their geometry. Cylindrical percolators instead use a single encircling cylindrical piece to direct smoke to the bottom of the water column. Cylindrical percolators may be essentially interpreted as the branch design with an infinite number of branches compounded together to form one singular flow path. Cylindrical percolators are sometimes referred to as "dome" percolators due to their geometry.
wikidoc
null
/index.php/Water_pollution
1,077
# Water pollution Although natural phenomena such as volcanoes, algae blooms, storms, and earthquakes also cause major changes in water quality and the ecological status of water, these are not deemed to be pollution. Water is generally considered polluted when it is unfit for its intended use. Water pollution has many causes and characteristics. Increases in nutrient loading may lead to eutrophication. Organic wastes such as sewage impose high oxygen demands on the receiving water, leading to oxygen depletion with potentially severe impacts on the whole ecosystem. Industries discharge a variety of pollutants in their wastewater including heavy metals, resin pellets, organic toxins, oils, nutrients, and solids. Discharges can also have thermal effects, especially those from power stations, and these too reduce the available oxygen. Silt-bearing runoff from many activities including construction sites, deforestation and agriculture can inhibit the penetration of sunlight through the water column, restricting photosynthesis and causing blanketing of the lake or river bed, in turn damaging ecological systems. Pollutants in water include a wide spectrum of chemicals, pathogens, and physical chemistry or sensory changes. Many of the chemical substances are toxic. Pathogens can produce waterborne diseases in either human or animal hosts. Alterations of water's physical chemistry include acidity, electrical conductivity, temperature, and eutrophication. Eutrophication is the fertilisation of surface water by nutrients that were previously scarce. Even many of the municipal water supplies in developed countries can present health risks. Water pollution is a major problem in the global context. It has been suggested that it is the leading worldwide cause of deaths and diseases, and that it accounts for the deaths of more than 14,000 people daily. Most water pollutants are eventually carried by rivers into the oceans. In some areas of the world the influence can be traced hundreds of miles from the river mouth by studies using hydrology transport models. Advanced computer models such as SWMM or the DSSAM Model have been used in many locations worldwide to examine the fate of pollutants in aquatic systems. Indicator filter-feeding species such as copepods have also been used to study pollutant fates in the New York Bight, for example. The highest toxin loads are not directly at the mouth of the Hudson River, but 100 kilometers south, since several days are required for incorporation into planktonic tissue. The Hudson discharge flows south along the coast due to the Coriolis force. Further south are areas of oxygen depletion, caused by oxygen-consuming chemicals and by algae blooms; the blooms are caused by excess nutrients, and oxygen is consumed as the algal cells die and decompose. Fish and shellfish kills have been reported, because toxins climb the food chain after small fish consume copepods, then large fish eat smaller fish, etc. Each successive step up the food chain causes a stepwise concentration of pollutants such as heavy metals (e.g. mercury) and persistent organic pollutants such as DDT. This is known as biomagnification, a term occasionally used interchangeably with bioaccumulation. The large ocean gyres trap floating plastic debris. The North Pacific Gyre, for example, has collected the so-called Great Pacific Garbage Patch, now estimated at twice the size of Texas. Many of these long-lasting pieces wind up in the stomachs of marine birds and animals. 
This results in obstruction of digestive pathways, which leads to reduced appetite or even starvation. Many chemicals undergo reactive decay or chemical change, especially over long periods of time in groundwater reservoirs. A noteworthy class of such chemicals is the chlorinated hydrocarbons, such as trichloroethylene (used in industrial metal degreasing and electronics manufacturing) and tetrachloroethylene, used in the dry cleaning industry (note that recent advances in liquid carbon dioxide dry cleaning avoid the use of such chemicals). Both of these chemicals, which are carcinogens themselves, undergo partial decomposition reactions, leading to new hazardous chemicals (including dichloroethylene and vinyl chloride). Groundwater pollution is much more difficult to abate than surface pollution because groundwater can move great distances through unseen aquifers. Non-porous aquifers such as clays partially purify water of bacteria by simple filtration (adsorption and absorption), dilution, and, in some cases, chemical reactions and biological activity; however, in some cases, the pollutants merely transform into soil contaminants. Groundwater that moves through cracks and caverns is not filtered and can be transported as easily as surface water. In fact, this can be aggravated by the human tendency to use natural sinkholes as dumps in areas of karst topography. In the UK there are common law rights (civil rights) to protect the passage of water across land unfettered in either quality or quantity. Criminal laws dating back to the 16th century exercised some control over water pollution, but it was not until the Rivers (Prevention of Pollution) Acts 1951-1961 were enacted that any systematic control over water pollution was established. These laws were strengthened and extended in the Control of Pollution Act 1974, which has since been updated and modified by a series of further acts. It is a criminal offence to pollute a lake, river, groundwater or the sea, or to discharge any liquid into such water bodies without proper authority. In England and Wales such permission can only be issued by the Environment Agency, and in Scotland by SEPA. In the USA, concern over water pollution resulted in the enactment of state anti-pollution laws in the latter half of the 19th century, and federal legislation enacted in 1899. The Refuse Act of the federal Rivers and Harbors Act of 1899 prohibits the disposal of any refuse matter into the nation's navigable rivers, lakes, streams, and other navigable bodies of water, or any tributary to such waters, unless one has first obtained a permit. The Water Pollution Control Act, passed in 1948, gave authority to the Surgeon General to reduce water pollution. Growing public awareness and concern for controlling water pollution led to enactment of the Federal Water Pollution Control Act Amendments of 1972. As amended in 1977, this law became commonly known as the Clean Water Act. The Act established the basic mechanisms for regulating contaminant discharge. It established the authority for the United States Environmental Protection Agency to implement wastewater standards for industry. The Clean Water Act also continued requirements to set water quality standards for all contaminants in surface waters. The Act has since been further amended, including by the Great Lakes Legacy Act of 2002.
wikidoc
null
/index.php/Water_potential
756
# Water potential Water potential is the potential energy of water relative to pure water (e.g. deionized water) in reference conditions. It quantifies the tendency of water to move from one area to another due to osmosis, gravity, mechanical pressure, or matrix effects including surface tension. Water potential is measured in units of pressure and is commonly represented by the Greek letter <math>\Psi</math> (Psi). This concept has proved especially useful in understanding water movement within plants, animals, and soil. Typically, pure water at standard temperature and pressure (or other suitable reference condition) is defined as having a water potential of 0. The addition of solutes to water lowers its potential (makes it more negative), just as an increase in pressure increases its potential (makes it more positive). If possible, water will move from an area of higher water potential to an area that has a lower water potential. One very common example is water that contains a dissolved salt, like sea water or the solution within living cells. These solutions typically have negative water potentials, relative to the pure water reference. If there is no restriction on flow, water molecules will proceed from the locus of pure water to the more negative water potential of the solution. This effect can be used to power an osmotic power plant. Pressure potential is based on mechanical pressure, and is an important component of the total water potential within plant cells. Pressure potential is increased as water enters a cell. As water passes through the cell wall and cell membrane, it increases the total amount of water present inside the cell, which exerts an outward pressure that is retained by the structural rigidity of the cell wall. The pressure potential in a living plant cell is usually positive. In plasmolysed cells, pressure potential is almost zero. Negative pressure potentials occur when water is pulled through an open system such as a plant xylem vessel. Withstanding negative pressure potentials (frequently called tension) is an important adaptation of xylem vessels. Pure water is usually defined as having a solute potential (<math>\Psi_\pi</math>) of zero, and in this case, solute potential can never be positive. The relationship of solute concentration (in molarity) to solute potential is given by the van 't Hoff equation <math>\Psi_\pi = -miRT</math>, where <math>m</math> is the concentration in molarity of the solute, <math>i</math> is the van 't Hoff factor, the ionization constant of the solute (1 for glucose, 2 for NaCl, etc.), <math>R</math> is the ideal gas constant, and <math>T</math> is the absolute temperature. For example, when a solute is dissolved in water, water molecules are less likely to diffuse away via osmosis than when there is no solute. A solution will have a lower and hence more negative water potential than that of pure water. Furthermore, the more solute molecules present, the more negative the solute potential is. Solute potential has important implications for many living organisms. If a living cell with a lower solute concentration is surrounded by a concentrated solution, the cell will tend to lose water to the more negative water potential of the surrounding environment. This is often the case for marine organisms living in sea water and halophytic plants growing in saline environments. In the case of a plant cell, the flow of water out of the cell may eventually cause the plasma membrane to pull away from the cell wall, leading to plasmolysis. 
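As a numerical illustration of the van 't Hoff relation above, the short sketch below evaluates the solute potential of a dilute salt solution. The example concentration and the choice of expressing R in L·MPa·mol⁻¹·K⁻¹ (so that molarity yields megapascals directly) are assumptions for demonstration only.

```python
# Ideal gas constant in L*MPa/(mol*K), so that molarity (mol/L) gives MPa.
R = 0.008314

def solute_potential(molarity, vant_hoff_factor, temperature_k):
    """Solute (osmotic) potential in MPa: psi_pi = -m * i * R * T."""
    return -molarity * vant_hoff_factor * R * temperature_k

# Example: 0.1 M NaCl at 25 degrees C, assuming complete dissociation (i = 2).
print(round(solute_potential(0.1, 2, 298.15), 3))   # roughly -0.50 MPa
```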
When water is in contact with solid particles (e.g., clay or sand particles within soil) adhesive intermolecular forces between the water and the solid can be large and important. The forces between the water molecules and the solid particles in combination with attraction among water molecules promote surface tension and the formation of menisci within the solid matrix. Force is then required to break these menisci. The magnitude of matrix potential depends on the distances between solid particles--the width of the menisci (see also capillary action)--and the chemical composition of the solid matrix. In many cases, matrix potential can be quite large and comparable to the other components of water potential discussed above. It is worth noting that matrix potentials are very important for plant water relations. Strong (very negative) matrix potentials bind water to soil particles within very dry soils. Plants then create even more negative matrix potentials within tiny pores in the cell walls of their leaves to extract water from the soil and allow physiological activity to continue through dry periods.
wikidoc
null
/index.php/Water_quality
1,121
# Water quality Water quality is the physical, chemical and biological characteristics of water, characterized through the methods of hydrometry. The primary bases for such characterization are parameters which relate to drinking water, safety of human contact and for health of ecosystems. The vast majority of surface water on the planet is neither potable nor toxic. This remains true even if sea water in the oceans (which is too salty to drink) isn't counted. Another general perception of water quality is that of a simple property that tells whether water is polluted or not. In fact, water quality is a very complex subject, in part because water is a complex medium intrinsically tied to the ecology of the Earth. Industrial pollution is a major cause of water pollution, as well as runoff from agricultural areas, urban stormwater runoff and discharge of untreated sewage (especially in developing countries). Contaminants that may be in untreated water include microorganisms such as viruses and bacteria; inorganic contaminants such as salts and metals; pesticides and herbicides; organic chemical contaminants from industrial processes and petroleum use; and radioactive contaminants. Water quality depends on the local geology and ecosystem, as well as human uses such as sewage dispersion, industrial pollution, use of water bodies as a heat sink, and overuse (which may lower the level of the water). The Environmental Protection Agency prescribes regulations that limit the amount of certain contaminants in the water provided by public water systems for tap water. Food and Drug Administration (FDA) regulations establish limits for contaminants in bottled water that must provide the same protection for public health. Drinking water, including bottled water, may reasonably be expected to contain at least small amounts of some contaminants. The presence of these contaminants does not necessarily indicate that the water poses a health risk. Some people use water purification technology to remove contaminants from the municipal water supply they get in their homes, or from local pumps or bodies of water. For people who get water from a local stream, lake, or aquifer, their drinking water is not filtered by the local government. Toxic substances and high populations of certain microorganisms can present a health hazard for non-drinking purposes such as irrigation, swimming, fishing, rafting, boating, and industrial uses. These conditions may also impact wildlife which use the water for drinking or as a Habitat. Interest by individuals and volunteer groups in making local water quality observations is high, and an understanding of the basic chemistry of many water quality parameters is an essential first step to making good measurements. Most citizens harbor great concern over the purity of their drinking water, but there is far more to water quality than water treatment for human consumption. Statements to the effect that "uses must be preserved" are included within water quality regulations because they provide for broad interpretation of water quality results, while preserving the ultimate goal of the regulations. Technical measures of water quality—that is, the values obtained when making water quality measurements—are always subject to interpretation from multiple perspectives. Is it reasonable to expect a river to be pristine in a landscape that no longer is? If a river has always carried sediment, is it polluted even if the cause is not man induced? 
Can water quality be maintained when water quantity cannot? The questions that arise from consideration of water quality relative to human uses of the water become more complex when consideration must also be given to conditions required to sustain aquatic biota. Yet inherent in the concept of preserving uses is a mandate that waterways must be much more than conduits for a fluid we might want to drink, fill our swimming pool with, or carry our wastes out of town. The complexity of water quality as a subject is reflected in the many types of measurements of water and wastewater quality indicators. In England and Wales acceptable levels are listed in the Water Supply (Water Quality) Regulations 1989. These measurements include (from simple and basic to more complex): Some of the simple measurements listed above can be made on-site (temperature, pH, dissolved oxygen, conductivity), in direct contact with the water source in question. More complex measurements that must be made in a lab setting require a water sample to be collected, preserved, and analyzed at another location. Making these complex measurements can be expensive. Because direct measurements of water quality can be expensive, ongoing monitoring programs are typically conducted by government agencies. Individuals interested in monitoring water quality who cannot afford or manage lab-scale analysis can also use biological indicators to get a general reading of water quality. Biological monitoring metrics have been developed in many places, and one widely used measure is the presence and abundance of members of the insect orders Ephemeroptera, Plecoptera and Trichoptera (EPT). EPT indexes will naturally vary from region to region, but generally, within a region, the greater the number of taxa from these orders, the better the water quality. A number of websites originating in the United States offer guidance on developing a monitoring program and identifying members of these and other aquatic insect orders. In the United States each governing jurisdiction (states, territories, and covered tribal entities) is required to submit a set of biennial reports on the quality of water in their area. These reports submitted to, and approved by, the Environmental Protection Agency are known as the 303(d), 305(b) and 314 reports. In coming years it is expected that the governing jurisdictions will submit all three reports as a single document, called the Integrated Report. The 305(b) report is a general report on water quality throughout the state, providing overall information about the number of miles of streams and rivers and their aggregate condition. The 314 report provides similar information for lakes. Under the Clean Water Act, states are required to adopt water quality standards for each of the possible designated uses that they assign to their waters. Should evidence exist to suggest or document that a stream, river or lake has failed to meet the water quality criteria for one or more of its designated uses, it is placed on the 303(d) list, or the list of impaired waters. Once on the 303(d) list, states are required to develop management plans establishing Total Maximum Daily Loads for the pollutant impairing the use of the water. These TMDLs establish what reductions in pollutants are needed to allow the water to regain its status as fully supporting the designated uses assigned to it. 
These reports are completed by the governing jurisdiction, typically a Department of Environmental Quality or similar state agency, and are available on the web.
wikidoc
null
/index.php/Water_softener
812
# Water softener These "hardness ions" cause three major kinds of undesired effects. Most visibly, metal ions react with soaps and calcium-sensitive detergents, hindering their ability to lather and forming a precipitate—the familiar "bathtub ring". Presence of "hardness ions" also inhibits the cleaning effect of detergent formulations. Secondly, calcium and magnesium carbonates tend to adhere to the surfaces of pipes and heat exchanger surfaces. The resulting build-up of scale can restrict water flow in pipes. In boilers, the deposits act as an insulation that impairs the flow of heat into water, reducing the heating efficiency and allowing the metal boiler components to overheat. In a pressurized system, this can lead to failure of the boiler. Thirdly, the presence of ions in an electrolyte, in this case, hard water, can also lead to galvanic corrosion, in which one metal will preferentially corrode when in contact with another type of metal, when both are in contact with an electrolyte. Conventional water-softening devices intended for household use depend on an ion-exchange resin in which "hardness" ions trade places with sodium ions that are electrostatically bound to the anionic functional groups of the polymeric resin. A class of minerals called zeolites also exhibits ion-exchange properties; these minerals were widely used in earlier water softeners. Water softeners are typically required[citation needed] when the source of water is a well, whether municipal or private. Chelators are used in chemical analysis, as water softeners, and are ingredients in many commercial products such as shampoos and food preservatives. Citric acid is used to soften water in soaps and laundry detergents. A commonly used synthetic chelator is EDTA. The water to be treated passes through a bed of the resin. Negatively-charged resins absorb and bind metal ions, which are positively charged. The resins initially contain univalent hydrogen, sodium or potassium ions, which exchange with divalent calcium and magnesium ions in the water. This exchange eliminates precipitation and soap scum formation. As the water passes through both kinds of resin, the hardness ions replace the hydrogen, sodium or potassium ions which are released into the water. The "harder" the water, the more hydrogen, sodium or potassium ions are released from the resin and into the water. As these resins become loaded with hardness ions they gradually lose their effectiveness and must be regenerated by passing a concentrated brine, usually of sodium chloride or potassium chloride, or hydrochloric acid solution through them. Most of the salts used for regeneration gets flushed out of the system and may be released into the soil or sewer. These processes can be damaging to the environment, especially in arid regions.[citation needed] Some jurisdictions prohibit such release and require users to dispose of the spent brine at an approved site or to use a commercial service company. Most water softener manufacturers provide metered control valves to minimize the frequency of regeneration. It is also possible on most units to adjust the amount of salt used for each regeneration. Both of these steps are recommended to minimize the impact of water softeners on the environment and conserve on salt use.[citation needed] Using acid to regenerate lowers the pH of the regeneration waste. In industrial scale water softening plants, the effluent flow from re-generation process can be very significant. 
Under certain conditions, such as when the effluent is discharged in admixture with domestic sewage, the calcium and magnesium salts may precipitate out as hardness scale on the inside of the discharge pipe. This can build up to such an extent so as to block the pipe as happened to a major chlor-alkali plant on the south Wales coast in the 1980s.[citation needed] For people on a low-sodium diet, the increase in sodium levels (for systems releasing sodium) in the water can be significant, especially when treating very hard water. A paper by Kansas State University gives an example: "A person who drinks two liters (2L) of softened, extremely hard water (assume 30 gpg) will consume about 480 mg more sodium (2L x 30 gpg x 8 mg/L/gpg = 480 mg), than if unsoftened water is consumed." This is a significant amount, as they state: "The American Heart Association (AHA) suggests that the 3 percent of the population who must follow a severe, salt-restricted diet should not consume more than 400 mg of sodium a day. AHA suggests that no more than 10 percent of this sodium intake should come from water. The EPA's draft guideline of 20 mg/L for water protects people who are most susceptible." Most people who are concerned with the added sodium in the water generally have one faucet in the house that bypasses the softener, or have a reverse osmosis unit installed for the drinking water and cooking water, which was designed for desalinisation of sea water.
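The Kansas State University example above is a simple unit conversion: litres consumed, multiplied by water hardness in grains per gallon, multiplied by roughly 8 mg of sodium added per litre for each grain per gallon of hardness exchanged. A minimal sketch of that arithmetic, using only the figures quoted in the text:

```python
SODIUM_MG_PER_L_PER_GPG = 8.0   # rule-of-thumb conversion quoted above

def added_sodium_mg(litres_consumed, hardness_gpg):
    """Approximate extra sodium (mg) ingested from drinking water softened
    by sodium-form ion exchange, given the hardness removed."""
    return litres_consumed * hardness_gpg * SODIUM_MG_PER_L_PER_GPG

# The worked example from the text: 2 L of softened, 30 gpg water.
print(added_sodium_mg(2, 30))   # 480.0 mg of additional sodium
```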
wikidoc
null
/index.php/Water_softening
909
# Water softening A water softener reduces the dissolved calcium, magnesium, and to some degree manganese and ferrous iron ion concentration in hard water. (A common water softener is sodium carbonate; formula Template:SodiumTemplate:Carbonate.) These "hardness ions" cause three major kinds of undesired effects. Most visibly, metal ions react with soaps and calcium-sensitive detergents, hindering their ability to lather and forming a precipitate—the familiar "bathtub ring". Presence of "hardness ions" also inhibits the cleaning effect of detergent formulations. Second, calcium and magnesium carbonates tend to precipitate out as hard deposits to the surfaces of pipes and heat exchanger surfaces. This is principally caused by thermal decomposition of bi-carbonate ions but also happens to some extent even in the absence of such ions. The resulting build-up of scale can restrict water flow in pipes. In boilers, the deposits act as an insulation that impairs the flow of heat into water, reducing the heating efficiency and allowing the metal boiler components to overheat. In a pressurized system, this can lead to failure of the boiler. Third, the presence of ions in an electrolyte, in this case, hard water, can also lead to galvanic corrosion, in which one metal will preferentially corrode when in contact with another type of metal, when both are in contact with an electrolyte. However the sodium (or potassium) ions released during conventional water softening are much more electrolytically active than the calcium or magnesium ions that they replace and galvanic corrosion would be expected to be substantially increased by water softening and not decreased. Similarly if any lead plumbing is in use, softened water is likely to be substantially more plumbo-solvent than hard water. Conventional water-softening devices intended for household use depend on an ion-exchange resin in which "hardness" ions trade places with sodium ions that are electrostatically bound to the anionic functional groups of the polymeric resin. A class of minerals called zeolites also exhibits ion-exchange properties; these minerals were widely used in earlier water softeners. Water softeners may be desirable when the source of water is a well, whether municipal or private. The water to be treated passes through a bed of the resin. Negatively-charged resins absorb and bind metal ions, which are positively charged. The resins initially contain univalent hydrogen, sodium or potassium ions, which exchange with divalent calcium and magnesium ions in the water. As the water passes through the resin column, the hardness ions replace the hydrogen, sodium or potassium ions which are released into the water. The "harder" the water, the more hydrogen, sodium or potassium ions are released from the resin and into the water. Resins are also available to remove carbonate, bi-carbonate and sulphate ions which are absorbed and hydroxyl ions released from the resin. Both types of resin may be provided in a single water softener. As these resins become loaded with undesirable cations and anions they gradually lose their effectiveness and must be regenerated. If a cationic resin is used (to remove calcium and magnesium ions) then regeneration is usually effected by passing a concentrated brine, usually of sodium chloride or potassium chloride, or hydrochloric acid solution through them. For anionic resins a solution of sodium or potassium hydroxide (lye) is used. 
Most of the salts used for regeneration gets flushed out of the system and may be released into the soil or sewer. These processes can be damaging to the environment, especially in arid regions.[citation needed] Some jurisdictions prohibit such release and require users to dispose of the spent brine at an approved site or to use a commercial service company. Most water softener manufacturers provide metered control valves to minimize the frequency of regeneration. It is also possible on most units to adjust the amount of reagent used for each regeneration. Both of these steps are recommended to minimize the impact of water softeners on the environment and conserve on reagent use.[citation needed] Using acid to regenerate lowers the pH of the regeneration waste. If potassium chloride is used the same exchange process takes place except that potassium is exchanged for the calcium, magnesium and iron instead of sodium. This is a more expensive option and may be unsuited for people on potassium-restricted diets. For people on a low-sodium diet, the increase in sodium levels (for systems releasing sodium) in the water can be significant, especially when treating very hard water. A paper by Kansas State University gives an example: "A person who drinks two litres (2L) of softened, extremely hard water (assume 30 gpg) will consume about 480 mg more sodium (2L x 30 gpg x 8 mg/L/gpg = 480 mg), than if unsoftened water is consumed." This is a significant amount, as they state: "The American Heart Association (AHA) suggests that the 3 percent of the population who must follow a severe, salt-restricted diet should not consume more than 400 mg of sodium a day. AHA suggests that no more than 10 percent of this sodium intake should come from water. The EPA's draft guideline of 20 mg/L for water protects people who are most susceptible." Most people who are concerned with the added sodium in the water generally have one tap (US: faucet) in the house that bypasses the softener, or have a reverse osmosis unit installed for the drinking water and cooking water, which was designed for desalinisation of sea water.
wikidoc
null
/index.php/Water_stagnation
83
# Water stagnation To avoid ground and surface water stagnation, drainage of surface and subsoil is advised. Areas with a shallow water table are more susceptible to ground water stagnation due to the lower availability of natural soil drainage. Pools of stagnant water have historically been used in the processing of hemp and some other fiber crops, as well as of linden bark used for making bast shoes. Several weeks of soaking makes bast easily separable due to bacterial and fermentative processes.
wikidoc
null
/index.php/Water_supply
3,269
# Water supply Water supply is the process of self-provision or provision by third parties of water of various qualities to different users. This article is so far limited to public water supply. It is expected to also cover industrial self-supply of water. Irrigation is covered separately. In 2004 about 3.5 billion people worldwide (54% of the global population) had access to piped water supply through house connections. Another 1.3 billion (20%) had access to safe water through other means than house connections, including standpipes, "water kiosks", protected springs and protected wells. Finally, more than 1 billion people (16%) did not have access to safe water, meaning that they had to resort to unprotected wells or springs, canals, lakes or rivers to fetch water. Both an adequate amount of water and adequate water quality are essential for public health and hygiene. Waterborne diseases are among the leading causes of morbidity and mortality in low- and middle-income countries, frequently called developing countries. For example, an estimated 900 million people suffer (and approximately 2 million die) from water-related diarrhoeal illnesses each year. At least 17 percent of the total burden of human diseases in many developing countries can be attributed to diarrhea and infestations by intestinal worms. The most common waterborne or water-washed diseases are diarrhea, typhoid and cholera. Another example is trachoma, an infectious disease of the eye, which results in many cases of blindness in developing countries, and which is associated with poor water supply, poor sanitation and failure to adequately process human excrement. Sometimes, due to actual or suspected contamination by pathogens, a boil water advisory, known as a boiling water order in the UK, may be invoked. The World Health Organization has defined around 20 liters per capita per day as basic access, which implies high health concerns, and 100 liters per capita per day as optimal access, associated with low health concerns. Water supply systems get water from a variety of locations, including groundwater (aquifers), surface water (lakes and rivers), conservation and the sea through desalination. The water is then, in most cases, purified, disinfected through chlorination and sometimes fluoridated. Treated water then either flows by gravity or is pumped to reservoirs, which can be elevated (such as water towers) or on the ground (for indicators related to the efficiency of drinking water distribution see non-revenue water). Once water is used, wastewater is typically discharged in a sewer system and treated in a wastewater treatment plant before being discharged into a river, lake or the sea, or reused for landscaping, irrigation or industrial use (see also sanitation). Many of the 3.5 billion people with access to piped water receive a poor or very poor quality of service, especially in developing countries where about 80% of the world population lives. Water supply service quality has many dimensions: continuity; water quality; pressure; and the degree of responsiveness of service providers to customer complaints. Continuity of water supply is taken for granted in most developed countries, but is a severe problem in many developing countries, where sometimes water is only provided for a few hours every day or a few days a week. It is estimated that about half of the population of developing countries receives water on an intermittent basis. Drinking water quality has a micro-biological and a physico-chemical dimension. 
There are thousands of parameters of water quality. In public water supply systems water should, at a minimum, be disinfected - previously through chlorination, now using ultraviolet light - or it may need to undergo treatment, especially in the case of surface water. For more details please see the separate entries on water quality, water treatment and drinking water. Water pressures vary in different locations of a distribution system. Water mains below the street may operate at higher pressures, with a pressure reducer located at each point where the water enters a building or a house. In poorly managed systems, water pressure can be so low as to result only in a trickle of water or so high that it leads to damage to plumbing fixtures and waste of water. Pressure in an urban water system is typically maintained either by a pressurized water tank serving an urban area, by pumping the water up into a tower and relying on gravity to maintain a constant pressure in the system or solely by pumps at the water treatment plant and repeater pumping stations. Typical UK pressures are 4-5 bar for an urban supply. However, some people can get over 8 bar. A single iron main pipe may cross a deep valley; it will have the same nominal pressure, but each consumer will get a bit more or less because of the hydrostatic pressure (about 1 bar per 10 m of height). So people at the bottom of a 100-foot hill will get about 3 bar more than those at the top. The effective pressure also varies because of the supply resistance, even for the same static pressure. An urban consumer may have 5 metres of 1/2" lead pipe running from the iron main, so the kitchen tap flow will be fairly unrestricted and hence high. A rural consumer may have a kilometre of rusted and limed 3/4" iron pipe, so their kitchen tap flow will be small. For this reason the traditional UK domestic water system has a header/storage tank in the attic. Water can dribble into this tank through a 1/2" lead pipe, plus ball valve, and then supply the house on 22 or 28 mm pipes. Gravity water has a small pressure (say 1/4 bar in the bathroom) but needs wide pipes to allow higher flows. This is fine for baths and toilets but is frequently inadequate for showers. People install shower booster pumps to increase the pressure. For this reason urban houses are increasingly using mains-pressure boilers (combis) which take a long time to fill a bath but suit the high back pressure of a shower. Comparing the performance of water and sanitation service providers (utilities) is needed, because the sector offers limited scope for direct competition (natural monopoly). Firms operating in competitive markets are under constant pressure to outperform each other. Water utilities are often sheltered from this pressure, and it frequently shows: some utilities are on a sustained improvement track, but many others keep falling further behind best practice. Benchmarking the performance of utilities makes it possible to simulate competition, establish realistic targets for improvement and create pressure to catch up with better utilities. Information on benchmarks for water and sanitation utilities is provided by the International Benchmarking Network for Water and Sanitation Utilities. A great variety of institutions have responsibilities in water supply. A basic distinction is between institutions responsible for policy and regulation on the one hand, and institutions in charge of providing services on the other. 
Water supply policies and regulation are usually defined by one or several Ministries, in consultation with the legislative branch. In the United States the EPA, whose administrator reports directly to the President, is responsible for water and sanitation policy and standard setting within the executive branch. In other countries responsibility for sector policy is entrusted to a Ministry of Environment (such as in Mexico and Colombia), to a Ministry of Health (such as in Panama, Honduras and Uruguay), a Ministry of Public Works (such as in Ecuador and Haiti), a Ministry of Economy (such as in German states) or a Ministry of Energy (such as in Iran). A few countries, such as Jordan and Bolivia, even have a Ministry of Water. Often several Ministries share responsibilities for water supply. In the European Union, important policy functions have been entrusted to the supranational level. Policy and regulatory functions include the setting of tariff rules and the approval of tariff increases; setting, monitoring and enforcing norms for quality of service and environmental protection; benchmarking the performance of service providers; and reforms in the structure of institutions responsible for service provision. The distinction between policy functions and regulatory functions is not always clear-cut. In some countries they are both entrusted to Ministries, but in others regulatory functions are entrusted to agencies that are separate from Ministries. Dozens of countries around the world have established regulatory agencies for infrastructure services, including often water supply and sanitation, in order to better protect consumers and to improve efficiency. Regulatory agencies can be entrusted with a variety of responsibilities, including in particular the approval of tariff increases and the management of sector information systems, including benchmarking systems. Sometimes they also have a mandate to settle complaints by consumers that have not been dealt with satisfactorily by service providers. These specialized entities are expected to be more competent and objective in regulating service providers than departments of government Ministries. Regulatory agencies are supposed to be autonomous from the executive branch of government, but in many countries have often not been able to exercise a great degree of autonomy. In the United States regulatory agencies for utilities have existed for almost a century at the level of states, and in Canada at the level of provinces. In both countries they cover several infrastructure sectors. In many US states they are called Public Utility Commissions. For England and Wales, a regulatory agency for water (OFWAT) was created as part of the privatization of the water industry in 1989. In many developing countries, water regulatory agencies were created during the 1990s in parallel with efforts at increasing private sector participation. (for more details on regulatory agencies in Latin America, for example, please see Water and sanitation in Latin America and the regional association of water regulatory agencies ADERASA [http:/www.aderasa.org]) Many countries do not have regulatory agencies for water. In these countries service providers are regulated directly by local government, or the national government. This is, for example, the case in the countries of continental Europe, in China and India. 
For more information on utility regulation in the water sector see the body of knowledge on utility regulation and the World Bank's knowledge base on the same topic. Water supply service providers, which are often utilities, differ from each other in terms of their geographical coverage relative to administrative boundaries; their sectoral coverage; their ownership structure; and their governance arrangements. Many water utilities provide services in a single city, town or municipality. However, in many countries municipalities have associated in regional or inter-municipal or multi-jurisdictional utilities to benefit from economies of scale. In the United States these can take the form of special-purpose districts which may have independent taxing authority. An example of a multi-jurisdictional water utility in the United States is WASA, a utility serving Washington, DC and various localities in the state of Maryland. Multi-jurisdictional utilities are also common in Germany, where they are known as "Zweckverbaende", in France and in Italy. In some federal countries there are water service providers covering most or all cities and towns in an entire state, such as in all states of Brazil and some states in Mexico (see Water supply and sanitation in Mexico). In England and Wales water supply and sewerage is supplied almost entirely through ten regional companies. Some smaller countries, especially developed countries, have established service providers that cover the entire country or at least most of its cities and major towns. Such national service providers are especially prevalent in West Africa and Central America, but also exist, for example, in Tunisia, Jordan and Uruguay (see also water supply and sanitation in Uruguay). In rural areas, where about half the world population lives, water services are often not provided by utilities, but by community-based organizations which usually cover one or sometimes several villages. Some water utilities provide only water supply services, while sewerage is under the responsibility of a different entity. This is for example the case in Tunisia. However, in most cases water utilities also provide sewer and wastewater treatment services. In some cities or countries utilities also distribute electricity. In a few cases such multi-utilities also collect solid waste and provide local telephone services. An example of such an integrated utility can be found in the Colombian city of Medellín. Utilities that provide water, sanitation and electricity can be found in Frankfurt, Germany (Mainova), in Casablanca, Morocco and in Gabon in West Africa. Multi-utilities provide certain benefits such as common billing and the option to cross-subsidize water services with revenues from electricity sales, if permitted by law. An estimated 10 percent of urban water supply is provided by private or mixed public-private companies, usually under concessions, leases or management contracts. Under these arrangements the public entity that is legally responsible for service provision delegates certain or all aspects of service provision to the private service provider for a period typically ranging from 4 to 30 years. The public entity continues to own the assets. These arrangements are common in France and in Spain. Only in a few parts of the world have water supply systems been completely sold to the private sector (privatization), such as in England and Wales as well as in Chile. 
The largest private water companies in the world are SUEZ and Veolia Environnement from France; Aguas de Barcelona from Spain; and Thames Water from the UK, all of which are engaged internationally (see links to the websites of these companies below). Governance arrangements for both public and private utilities can take many forms. Governance arrangements define the relationship between the service provider, its owners, its customers and regulatory entities. They determine the financial autonomy of the service provider and thus its ability to maintain its assets, expand services, attract and retain qualified staff, and ultimately to provide high-quality services. Key aspects of governance arrangements are the extent to which the entity in charge of providing services is insulated from arbitrary political intervention; and whether there is an explicit mandate and political will to allow the service provider to recover all or at least most of its costs through tariffs and retain these revenues. If water supply is the responsibility of a department that is integrated in the administration of a city, town or municipality, there is a risk that tariff revenues are diverted for other purposes. In some cases, there is also a risk that staff are appointed mainly on political grounds rather than based on their professional credentials. These risks are particularly high in developing countries. Municipal or inter-municipal utilities with a separate legal personality and budget as well as a certain extent of managerial autonomy can mitigate these risks. Almost all service providers in the world charge tariffs to recover part of their costs. According to estimates by the World Bank the average (mean) global water tariff is US$ 0.53 per cubic meter. In developed countries the average tariff is US$ 1.04, while it is only US$ 0.11 in the poorest developing countries. The lowest tariffs in developing countries are found in South Asia (mean of US$ 0.09/m3), while the highest are found in Latin America (US$ 0.41/m3). Few utilities recover all of their costs. According to the same World Bank study only 30% of utilities globally, and only 50% of utilities in developed countries, generate sufficient revenue to cover operation, maintenance and partial capital costs. According to another study undertaken in 2006 by NUS Consulting, the average water and sewerage tariff in 14 mainly OECD countries excluding VAT varied between US$ 0.66 per cubic meter in the United States and the equivalent of US$ 2.25 per cubic meter in Denmark. However, it should be noted that water consumption in the US is much higher than in Europe. Therefore, residential water bills may be very similar, even if the tariff per unit of consumption tends to be higher in Europe than in the US. In developing countries tariffs are usually much further from covering costs. Residential water bills for a typical consumption of 15 cubic meters per month vary between less than US$ 1 and US$ 12 per month. Water and sanitation tariffs, which are almost always billed together, can take many different forms. Where meters are installed, tariffs are typically volumetric (per usage), sometimes combined with a small monthly fixed charge. In the absence of meters, flat or fixed rates - which are independent of actual consumption - are charged. In developed countries, tariffs are usually the same for different categories of users and for different levels of consumption. 
In developing countries, tariffs are often characterized by cross-subsidies intended to make water more affordable for residential low-volume users, who are assumed to be poor. For example, industrial and commercial users are often charged higher tariffs than public or residential users. Also, metered users are often charged higher tariffs for higher levels of consumption (increasing-block tariffs). However, cross-subsidies between residential users do not always reach their objective. Given the overall low level of water tariffs in developing countries even at higher levels of consumption, most consumption subsidies benefit the wealthier segments of society. Also, high industrial and commercial tariffs can provide an incentive for these users to obtain water from sources other than the utility (their own wells, water tankers) and thus actually erode the utility's revenue base. Metering of water supply is usually motivated by one or several of four objectives: First, it provides an incentive to conserve water, which protects water resources (environmental objective). Second, it can postpone costly system expansion and save energy and chemical costs (economic objective). Third, it allows a utility to better locate distribution losses (technical objective). Fourth, it allows charging for water based on use, which is perceived by many as the fairest way to allocate the costs of water supply to users. Metering is considered good practice in water supply and is widespread in developed countries, except for the United Kingdom. In developing countries it is estimated that half of all urban water supply systems are metered and the tendency is increasing. Cities are increasingly installing Automatic Meter Reading (AMR) systems to prevent fraud, to lower ever-increasing labor and liability costs and to improve customer service and satisfaction. The cost of supplying water consists to a very large extent of fixed costs (capital costs and personnel costs) and only to a small extent of variable costs that depend on the amount of water consumed (mainly energy and chemicals). The full cost of supplying water in urban areas in developed countries is about US$1-2 per cubic meter depending on local costs and local water consumption levels. The cost of sanitation (sewerage and wastewater treatment) is another US$1-2 per cubic meter. These costs are somewhat lower in developing countries. Throughout the world, only part of these costs is usually billed to consumers, the remainder being financed through direct or indirect subsidies from local, regional or national governments (see section on tariffs). Besides subsidies, water supply investments are financed through internally generated revenues as well as through debt. Debt financing can take the form of credits from commercial banks, credits from international financial institutions such as the World Bank and regional development banks (in the case of developing countries), and bonds (in the case of some developed countries and some upper middle-income countries). Throughout history people have devised systems to make getting and using water more convenient. Early Rome had indoor plumbing, meaning a system of aqueducts and pipes that terminated in homes and at public wells and fountains for people to use.
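The tariff structures described above (a small fixed monthly charge plus volumetric rates, sometimes in increasing blocks) can be made concrete with a short calculation. The following Python sketch is purely illustrative: the block sizes, rates and fixed charge are invented for the example and are not taken from any particular utility.

```python
def monthly_bill(consumption_m3, fixed_charge, blocks):
    """Compute a water bill under an increasing-block tariff.

    blocks is a list of (block_size_m3, rate_per_m3) pairs; the last
    block size may be None, meaning "all remaining consumption".
    """
    bill = fixed_charge
    remaining = consumption_m3
    for size, rate in blocks:
        billed = remaining if size is None else min(remaining, size)
        bill += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return bill

# Hypothetical tariff: US$1.00/month fixed charge, first 10 m3 at $0.20/m3,
# next 10 m3 at $0.50/m3, anything above 20 m3 at $1.00/m3.
example_blocks = [(10, 0.20), (10, 0.50), (None, 1.00)]
print(monthly_bill(15, 1.00, example_blocks))  # 1.00 + 2.00 + 2.50 = US$5.50
```

For this hypothetical tariff, a household consuming 15 cubic meters in a month would be billed US$ 5.50, which falls within the US$ 1-12 range of typical residential bills cited above.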
wikidoc
null
/index.php/Water_therapy
126
# Water therapy According to alternative medicine advocates, one form of water therapy is drinking a large quantity of water upon waking in order to "cleanse the bowel". A litre to a litre and a half is the amount commonly ingested. This water therapy, also known as Indian or Chinese Water Therapy, is claimed to have a wide range of health benefits, or at least no adverse effects. While ingesting about a litre and a half of water is usually harmless, this approaches the level that can lead to water intoxication, an urgent and dangerous medical condition. Advocates of water therapy claim that, at first, the practice will cause multiple bowel movements until the body adjusts to the increased amount of fluid.
wikidoc
null
/index.php/Water_tower
864
# Water tower A water tower or elevated water tank is a very large tank constructed for the purpose of holding a supply of water at a height sufficient to pressurize a water supply system. Many water towers were constructed during the industrial revolution and some of these are now considered architectural landmarks and monuments and may not be demolished. Some are converted to apartments or exclusive penthouses. A typical water tower is constructed of steel, reinforced or prestressed concrete or bricks. It is usually spherical or cylindrical and is approximately 50 feet (16 metres) in diameter. It typically has a height of approximately 120 feet (40 metres). The users of the water supply (a town, factory, or just a building) need to have water pressure to maintain the safety of the water supply. If a water supply is not pressurized sufficiently, several things can happen: The height of the tower provides the hydrostatic pressure for the water supply system, and it may be supplemented with a pump. The volume of the reservoir and diameter of the piping provide and sustain flow rate. However, relying on a pump to provide pressure is expensive; to keep up with varying demand, the pump would have to provide a constantly varying output pressure (and thus need an expensive control system) and it would have to be sized sufficiently to give the same pressure at high flow rates. Very high flow rates are needed when fighting fires. With a water tower present, pumps can be sized for average demand, not peak demand; the tower can provide water pressure during the day and the pumps can refill the water tower at night when demand is very low. Water towers can be surrounded by ornate coverings including fancy brickwork, a large ivy-covered trellis or they can be simply painted. Some city water towers had the name of the city painted in large letters on the roof, as a navigational aid to aviators. Sometimes the decoration can be humorous, as Granger, Iowa has two water towers, labeled HOT and COLD. The House in the Clouds in Thorpeness was built to resemble a house in order to disguise the eyesore, whilst the lower floors were used for accommodation. When the town was connected to the mains water supply, the tank was dismantled and converted to additional living space. Sapp Bros. truck stops uses a water tower with a handle and spout -- looking like a coffee pot -- as the company logo. Many of their facilities have thus-decorated actual water towers (presumably non-functional) on-site. The first and original "Mushroom" -- Svampen in Swedish -- was built in Örebro in Sweden in the early 1950s and later copies were built around the world including Saudi Arabia and Kuwait.[citation needed] Water towers are very common in India, where the electricity supply is erratic in most places. Water tanks are used atop houses and multi-story buildings to store water from erratic supplies. In many countries, water towers have been taken out of the water supply system and replaced by pumps alone. Water towers are often regarded as monuments of civil engineering. Some are rejuvenated and converted to serve modern purposes. A good example of the latter is Wieża Ciśnień in Wroclaw, Poland.[citation needed] All railways making use of steam locomotives require a means of replenishing the locomotive's water tank. This is most commonly achieved by means of a water tower feeding one or more water cranes, usually located at stations and locomotive sheds. Some water towers are also used as observation towers.
There are even water towers with restaurants, such as the Goldbergturm in Sindelfingen, Germany. It is also common to use water towers as locations for low-power transmission equipment in the UHF range, for instance for closed rural broadcasting service, portable radio, or cellular telephone service. In the 1800s, New York City required that all buildings higher than 6 stories be equipped with a rooftop water tower. This was necessary to prevent the need for excessively high pressures at lower elevations, which could burst pipes. In modern times, the towers have become fashionable in some circles. As of 2006, the neighborhood of Tribeca requires water towers on all buildings, whether or not they are being used. Two companies in New York build water towers, both of which are family businesses that have been in operation since the 1800s. The original water tower builders were barrel makers who expanded their craft to meet a modern need as buildings in the city grew taller. Even today, no sealant is used to hold the water in. Tank walls are held together with cables but leak through every gap when first filled. As the wood swells, the gaps close and become impermeable. The rooftop tanks store 5,000 to 10,000 gallons of water until it is needed in the building below. The upper portion of water is skimmed off the top for everyday use while the water in the bottom of the tank is held in reserve to fight fire. When the water drops below a certain level, a pump is triggered and the tank is refilled.
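As a rough illustration of the hydrostatic relationship described earlier (the height of the tank sets the supply pressure), the following Python sketch applies p = ρgh to a tank about 40 metres above the taps it serves, matching the typical tower height quoted above; the figures are for illustration only.

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # standard gravity, m/s^2

def hydrostatic_pressure_pa(height_m):
    """Gauge pressure at the base of a water column of the given height."""
    return RHO_WATER * G * height_m

# A tank roughly 40 m above the taps it serves (a typical tower height):
p = hydrostatic_pressure_pa(40)
print(f"{p/1000:.0f} kPa  (~{p/6894.76:.0f} psi)")  # about 392 kPa, roughly 57 psi
```

The result, roughly 392 kPa (about 57 psi), is in the range of ordinary mains pressure, which helps explain why a tank at that height can serve customers directly while pumps only need to be sized to refill it.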
wikidoc
null
/index.php/Water_urticaria
59
# Water urticaria Water urticaria, also known as aquagenic urticaria and aquagenous urticaria, is a rare form of physical urticaria. It is sometimes described as an allergy. In affected persons, water on the skin causes hives to appear within 15 minutes and last for up to two hours. Fewer than 30 cases are known world-wide.
wikidoc
null
/index.php/Waterborne_diseases
104
# Waterborne diseases Waterborne diseases are caused by pathogenic microorganisms which are directly transmitted when contaminated drinking water is consumed. Contaminated drinking water used in the preparation of food can be the source of foodborne disease through consumption of the same microorganisms. According to the World Health Organization, diarrheal disease accounts for an estimated 4.1% of the total global burden of disease (measured in DALYs) and is responsible for the deaths of 1.8 million people every year. An estimated 88% of that burden is attributable to unsafe water supply, sanitation and hygiene, and it falls mostly on children in developing countries.
wikidoc
null
/index.php/WebMD
125
# WebMD WebMD is a medical and wellness information service, primarily known for its public internet site, which provides health information, a symptom checklist, pharmacy information, a place to store personal medical information, and an online community with over 140 moderated expert-led and peer-to-peer message boards. The site is reported to receive over 40 million hits each month and, according to comScore Media Metrix, is the leading health portal in the United States. WebMD also offers services to physicians and private clients. For example, they publish WebMD the Magazine, a patient-directed publication distributed bimonthly to 85 percent of physician waiting rooms.[citation needed] Medscape is a professional portal for physicians with 30 medical specialty areas and over 30 physician discussion boards.
wikidoc
null
/index.php/Webbed_toes
390
# Webbed toes Webbed toes is the common name for syndactyly affecting the feet. It is characterised by the fusion of two or more digits of the feet. The scientific name for the condition is syndactyly, although this term covers both webbed fingers and webbed toes. There are various levels of webbing, from partial to complete. Most commonly the second and third toes are webbed or joined by skin and flexible tissue. This can reach either part way up or nearly all the way up the toe. The exact cause of the condition is unknown. In some cases the condition runs in families; in other cases, no other related persons have it. Syndactyly occurs when apoptosis or programmed cell death during gestation is absent or incomplete. Webbed toes occur most commonly in the following circumstances: Smoking: Smoking during pregnancy significantly elevates the risk of having a child with excess, webbed or missing fingers and toes, according to the January issue of Plastic and Reconstructive Surgery, the official medical journal of the American Society of Plastic Surgeons (ASPS). In fact, the study found that smoking just half a pack per day increases the risk of having a child born with a toe or finger defect by 29 percent. Webbed fingers or toes occur in one of every 2,000 to 2,500 live births, and excess fingers or toes occur in one of every 600 live births. Webbed toes are a purely cosmetic condition. This condition does not impair the ability to perform any activity including walking, running, or swimming. There is no evidence that it improves swimming ability. Webbed toes can interfere with the ability to wear toe rings and toe socks. Psychological stress may arise from the fear of negative reactions to this condition from people who do not have webbed toes. This may lead some individuals to become extremely self-conscious about their feet and go to great lengths to hide them. They may avoid open-toed footwear and activities such as swimming where their feet may be seen. In reality, other people rarely notice this condition unless the person makes a deliberate effort to point it out. This condition is normally discovered at birth. If other symptoms are present, a specific syndrome may be indicated. Diagnosis of a specific syndrome is based on a family history, medical history, and a physical exam.
wikidoc
null
/index.php/Weber%E2%80%93Fechner_law
727
# Weber–Fechner law The Weber–Fechner law attempts to describe the relationship between the physical magnitudes of stimuli and the perceived intensity of the stimuli. Ernst Heinrich Weber (1795–1878) was one of the first people to approach the study of the human response to a physical stimulus in a quantitative fashion. Gustav Theodor Fechner (1801–1887) later offered an elaborate theoretical interpretation of Weber's findings, which he called simply Weber's law, though his admirers made the law's name a hyphenate. Stevens' power law is sometimes considered more accurate and general, although both make assumptions about the measurement of perceived intensity. The Weber–Fechner law assumes that just noticeable differences are additive. L. L. Thurstone uses this assumption for the concept of discriminal dispersion in the Law of comparative judgment. Fechner believed that Weber had discovered the fundamental principle of mind-body interaction, a mathematical analog of the function René Descartes once assigned to the pineal gland. In one of his classic experiments, Weber gradually increased the weight that a blindfolded man was holding and asked him to respond when he first felt the increase. Weber found that the smallest noticeable difference in weight (the least difference that the test person can still perceive as a difference) was proportional to the starting value of the weight. That is to say, if the weight is 1 kg, an increase of a few grams will not be noticed. Rather, when the mass is increased by a certain factor, an increase in weight is perceived. If the mass is doubled, the threshold called the smallest noticeable difference also doubles. This kind of relationship can be described by the differential equation <math>dp = k\frac{dS}{S}</math>, where dp is the differential change in perception, dS is the differential increase in the stimulus and S is the stimulus at the instant. A constant factor k is to be determined experimentally. Integrating this equation gives <math>p = k \ln \frac{S}{S_0}</math>, where <math>S_0</math> is the threshold stimulus below which nothing is perceived. The relationship between stimulus and perception is therefore logarithmic. This logarithmic relationship means that if a stimulus varies as a geometric progression (i.e. multiplied by a fixed factor), the corresponding perception is altered in an arithmetic progression (i.e. in additive constant amounts). For example, if a stimulus is tripled in strength (i.e., 3 x 1), the corresponding perception may be two times as strong as its original value (i.e., 1 + 1). If the stimulus is again tripled in strength (i.e., 3 x 3 x 1), the corresponding perception will be three times as strong as its original value (i.e., 1 + 1 + 1). Hence, for multiplications in stimulus strength, the strength of perception only adds. In addition, the mathematical derivations of the torques on a simple beam balance produce a description that is strictly compatible with Weber's law. The eye senses brightness logarithmically. Hence stellar magnitude is measured on a logarithmic scale. This magnitude scale was invented by the ancient Greek astronomer Hipparchus in about 150 B.C. He ranked the stars he could see in terms of their brightness, with 1 representing the brightest down to 6 representing the faintest, though now the scale has been extended beyond these limits. An increase of 5 magnitudes corresponds to a decrease in brightness by a factor of 100. Still another logarithmic scale is the decibel scale of sound intensity. And yet another is pitch, which, however, differs from the other cases in that the physical quantity involved is not a "strength".
In the case of perception of pitch, humans hear pitch in a logarithmic or geometric ratio-based fashion: For notes spaced equally apart to the human ear, the frequencies are related by a multiplicative factor. For instance, the frequencies of corresponding notes of adjacent octaves differ by a factor of 2. Similarly, the perceived difference in pitch between 100 Hz and 150 Hz is the same as between 1000 Hz and 1500 Hz. Musical scales are always based on geometric relationships for this reason. Notation and theory about music often refers to pitch intervals in an additive way, which makes sense if one considers the logarithms of the frequencies, as <math>\log(a \times b)=\log a+\log b</math>. Loudness: Weber's law does not quite hold for loudness. It is a good approximation at higher amplitudes, but not at lower amplitudes. This is usually referred to as the "near miss" to Weber's law.
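To make the logarithmic relationship above concrete, the following Python sketch uses the integrated form of Weber's relation, p = k·ln(S/S0), and shows that stimuli growing in a geometric progression produce perceptions growing in an arithmetic progression; the values of k and S0 are arbitrary placeholders chosen only for illustration.

```python
import math

def perceived_intensity(S, k=1.0, S0=1.0):
    """Fechner's law: perception p = k * ln(S / S0), valid for S >= S0.

    k is the experimentally determined constant, S0 the threshold stimulus.
    Both values here are arbitrary placeholders.
    """
    return k * math.log(S / S0)

# Stimuli growing geometrically (each 3x the previous one) ...
for S in (1, 3, 9, 27):
    # ... yield perceptions growing arithmetically, in equal steps of k*ln(3).
    print(S, round(perceived_intensity(S), 3))
# Output: 0.0, 1.099, 2.197, 3.296 -- equal increments of ln(3) ≈ 1.099
```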
wikidoc
null
/index.php/Weet-Bix
893
# Weet-Bix Weet-Bix is the name of a high-fibre breakfast cereal biscuit manufactured in Australia, New Zealand, and South Africa by Sanitarium Health Food Company. The name is probably a derivative of wheat bricks or wheat biscuits, and as such the plural of "Weet-Bix" is generally "Weet-Bix". Sanitarium's wheat biscuits originated in the form of a product called Granose, which was created as early as the 1900s. In the 1920s a company called Grain Products created a new sweetened biscuit by the name of Weet-Bix. In 1930, Sanitarium acquired Grain Products, which like Sanitarium had ties with the Seventh-day Adventist Church, making Weet-Bix a Sanitarium product. Weet-Bix are seen in Australia as an iconic Australian foodstuff. The product was marketed in Australia using the tagline "Aussie Kids are Weet-Bix kids". This slogan was adapted for the New Zealand market as "Kiwi Kids are Weet-Bix kids". A closely related product is Weetabix, manufactured in England by Weetabix Limited of Kettering, Northamptonshire. The two products are nearly identical; the chief differences are that Weetabix are smaller, sweeter, and more brick-like in appearance than Weet-Bix. In South Africa Weet-Bix is manufactured by Bolandse Kooperatiewe Molenaars (Bokomo) in Malmesbury. Weet-Bix was invented by Bennison Osborne in NSW, Australia in the mid 1920s. Benn set out to make a product more palatable than "Granose." He tried his new product on his little nieces and nephews until he had it perfected, and in 1928 he registered the tradename "Weetbix" and production started at 659 Parramatta Road, Leichhardt, NSW with the financial backing of Mr. Arthur Shannon. Benn's friend Malcolm Ian "Mac" Macfarlane from N.Z. joined him and proved a brilliant marketer. The product was so successful that in October 1928, Mr. Shannon sold the rights in the product to the Sanitarium Health Food Company, at which point Mac suggested that they take the product to N.Z. The product proved so successful in N.Z. that it quickly became apparent that it would be difficult to adequately supply the market from Australia. Again, with the financial assistance of Mr. Arthur Shannon, factories were established in both Auckland and Christchurch. The enterprise was such a great success that Mr. Shannon again sold out (in 1930) to the Sanitarium Health Food Company. Benn and Mac then exported the product to South Africa, where they obtained other financial backing and installed a factory in Cape Town, forming the "British & African Cereal Company Pty. Ltd.," which was registered in London with Benn as the Managing Director. For the purpose of differentiating between the various countries, it was decided that the product, when introduced into England, should be known as "Weetabix." In England, Benn and Mac became the Joint Managing Directors, with Benn controlling production and Mac controlling marketing. Thirty-three potential sites for the factory were examined, with Burton Latimer eventually being chosen, due in part to the offer of a disused flour mill by a Mr. George, who was allotted shares in the company. For records, see the 1932 and 1933 papers ("Kettering Leader & Guardian" and "Northamptonshire Advisor") and also the May 19, 1933, "Town and Country News." When the business was firmly established, Mr. Shannon offered to finance an expansion of the business. However, cash flow was such that additional financing was not necessary. Mr. Shannon, however, did suggest investigating the Canadian market.
At this point, Mac left the business to go overseas and Benn became the sole Managing Director with Mr. George as Chairman of Directors. A fleet of cars was purchased and salesmen employed throughout England. At the height of its success in 1936, Benn sold his share holding to the Directors and left the Company to go to the U.S.A. Weetabix was unsuccessful in the U.S.A. (Clinton, Mass.) and Benn eventually became the wartime supervisor of the Army Air Force Base in Zephyr Hills, Florida. After the war, in 1946, he took his wife and three daughters by freighter back to Australia, where he died in 1980. Around 1992, Weetabix successfully entered the U.S.A. market from Canada via Clinton, Mass., the site of the unsuccessful U.S. factory. Weet-Bix vies neck and neck with Nutri-Grain, manufactured by the market dominant Kellogg's company, as Australia's best selling cereal. The two brands have been known to clash in TV advertising. Kellogg's aired an ad which mocked the purportedly bland taste of Weet-Bix, to which Sanitarium responded with a commercial exposing the high sugar content of Nutri-Grain - a campaign which caused some damage to sales of the latter product. It was rumoured following this campaign that the two companies agreed on a 'truce' in which neither would denigrate the other's product in advertising again.[citation needed] The jingle employed in Australia as of March 2006 contains the line "Hope you've had your Weet-Bix, all-Australian Weet-Bix, for breakfast everyday!". The phrase "I hope he's had his Weet-Bix today", or similar, is established in sports commentary in Australia. SBS commentator Simon Hill also said, "Cahill is ecstatic. He did have his Weet-Bix this morning." after Tim Cahill scored Australia's first goal at World Cup 2006 against Japan.
wikidoc
null
/index.php/Weight
1,315
# Weight In the physical sciences, weight is a measurement of the gravitational force acting on an object. Near the surface of the Earth, the acceleration due to gravity is approximately constant; this means that an object's weight is roughly proportional to its mass. The words "weight" and "mass" are therefore often used interchangeably, even though they do not describe the same concept. In modern usage in the field of mechanics, weight and mass are fundamentally different quantities: mass is an intrinsic property of matter, whereas weight is a force that results from the action of gravity on matter. However, the recognition of this difference is, historically, a relatively recent development – and in many everyday situations the word "weight" continues to be used when strictly speaking "mass" is meant. For example, we say that an object "weighs one kilogram", even though the kilogram is actually a unit of mass. This common usage is due to the displacement of earlier force-based measurement systems by the more scientific mass-based SI system. This transition has led to the common and even legal intertwining of "weight" and "mass." The distinction between mass and weight is unimportant for many practical purposes because, to a reasonable approximation, the strength of gravity is the same everywhere on the surface of the Earth. In such a constant gravitational field, the gravitational force exerted on an object (its weight) is directly proportional to its mass. So, if object A weighs, say, 10 times as much as object B, then object A's mass is 10 times that of object B. This means that an object's mass can be measured indirectly by its weight (for conversion formulas see below). For example, when we buy a bag of sugar we can measure its weight (how hard it presses down on the scales) and be sure that this will give a good indication of the quantity that we are actually interested in, which is the mass of sugar in the bag. Nevertheless, slight variations in the Earth's gravitational field do exist (see Earth's gravity), and these must be taken into account in high precision weight measurements. The use of "weight" for "mass" also persists in some scientific terminology – for example, in the chemical terms "atomic weight", "molecular weight", and "formula weight", rather than the preferred "atomic mass" etc. Systems of units of weight (force) and mass have a tangled history, partly because the distinction was not properly understood when many of the units first came into use. In most modern scientific work, physical quantities are measured in SI units. The SI unit of mass is the kilogram. The SI unit of force (and hence weight) is the newton (N) – which can also be expressed in SI base units as kg·m/s² (kilograms times meters per second squared). The kilogram-force is a non-SI unit of force, defined as the force exerted by a one-kilogram mass in standard Earth gravity (equal to about 9.8 newtons). The gravitational force exerted on an object is proportional to the mass of the object, so it is reasonable to think of the strength of gravity as measured in terms of force per unit mass, that is, newtons per kilogram (N/kg). However, the unit N/kg resolves to m/s²; (metres per second per second), which is the SI unit of acceleration, and in practice gravitational strength is usually quoted as an acceleration. In United States customary units, the pound can be either a unit of force or a unit of mass. Related units used in some distinct, separate subsystems of units include the poundal and the slug. 
The poundal is defined as the force necessary to accelerate a one-pound object at 1 ft/s², and is equivalent to about 1/32 of a pound (force). The slug is defined as the amount of mass that accelerates at 1 ft/s² when a pound of force is exerted on it, and is equivalent to about 32 pounds (mass). To convert between weight (force) and mass we use Newton's second law, F = ma (force = mass × acceleration). Here, F is the force due to gravity (i.e. the weight force), m is the mass of the object in question, and a is the acceleration due to gravity (on Earth, approximately 9.8 m/s² or 32 ft/s²). In this context the same equation is often written as W = mg, with W standing for weight, and g for the acceleration due to gravity. The weight force that we actually sense is not the downward force of gravity, but the normal force (an upward contact force) exerted by the surface we stand on, which opposes gravity and prevents us falling to the center of the Earth. This normal force, called the apparent weight, is the one that is measured by a spring scale. For a body supported in a stationary position, the normal force balances the earth's gravitational force, and so apparent weight has the same magnitude as actual weight. (Technically, things are slightly more complicated. For example, an object immersed in water weighs less, according to a spring scale, than the same object in air; this is due to buoyancy, which opposes the weight force and therefore generates a smaller normal force. These and other factors are explained further under apparent weight.) If there is no contact with any surface to provide such an opposing force then there is no sensation of weight (no apparent weight). This happens in free-fall, as experienced by sky-divers (until they approach terminal velocity) and astronauts in orbit, who feel "weightless" even though their bodies are still subject to the force of gravity: they're just no longer resisting it. The experience of having no apparent weight is also known as microgravity. A degree of reduction of apparent weight occurs, for example, in elevators. In an elevator, a spring scale will register a decrease in a person's (apparent) weight as the elevator starts to accelerate downwards. This is because the opposing force of the elevator's floor decreases as it accelerates away underneath one's feet. Weight is commonly measured using one of two methods. A spring scale or hydraulic or pneumatic scale measures weight force (strictly apparent weight force) directly. If the intention is to measure mass rather than weight, then this force must be converted to mass. As explained above, this calculation depends on the strength of gravity. Household and other low precision scales that are calibrated in units of mass (such as kilograms) assume roughly that standard gravity will apply. However, although nearly constant, the apparent or actual strength of gravity does in fact vary very slightly in different places on the earth (see standard gravity, physical geodesy, gravity anomaly and gravity). This means that the same object (the same mass) will exert a slightly different weight force in different places. High precision spring scales intended to measure mass must therefore be calibrated specifically according to their location on earth. Mass may also be measured with a balance, which compares the item in question to others of known mass. This comparison remains valid whatever the local strength of gravity.
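A minimal sketch of the conversions discussed above: weight as W = mg, and the apparent weight registered by a spring scale in an accelerating elevator. The gravity constant is the conventional standard value; the masses and the elevator acceleration are made-up figures used only for illustration.

```python
STANDARD_GRAVITY = 9.80665  # m/s^2, the conventional standard value

def weight_newtons(mass_kg, g=STANDARD_GRAVITY):
    """Weight force W = m * g for a given local gravitational acceleration."""
    return mass_kg * g

def apparent_weight_newtons(mass_kg, elevator_accel=0.0, g=STANDARD_GRAVITY):
    """Normal force read by a spring scale in an elevator.

    elevator_accel is positive for upward acceleration, negative for downward.
    """
    return mass_kg * (g + elevator_accel)

print(weight_newtons(1.0))                  # ~9.81 N: weight of a 1 kg mass
print(apparent_weight_newtons(70.0, -1.5))  # ~581 N for a 70 kg person, vs ~686 N at rest
```

Setting elevator_accel to 0 recovers the ordinary weight; a negative value models the lighter reading seen as the elevator accelerates downwards, as described above.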
If weight force, rather than mass, is required, then this can be calculated by multiplying mass by the acceleration due to gravity – either standard gravity (for everyday work) or the precise local gravity (for precision work). Gross weight is a term that generally is found in commerce or trade applications, and refers to the gross or total weight of a product and its packaging. Conversely, net weight refers to the intrinsic weight of the product itself, discounting the weight of packaging or other materials. The following is a list of the weights of a mass on the surface of some of the bodies in the solar system, relative to its weight on Earth:
wikidoc
null
/index.php/Weight_loss
184
# Weight loss Editor-In-Chief: C. Michael Gibson, M.S., M.D. ; Associate Editor(s)-in-Chief: Javaria Anwer M.D. Cafer Zorkun, M.D., Ph.D. ; M.Umer Tariq ; Synonyms and keywords: weight reduction, elderly, malignancy, infection, dietary supplements, nutrition. Weight loss, in the context of medicine, health, or physical fitness, is a reduction of the total body weight. An individual may lose weight due to a loss of fluid, body fat (adipose tissue), and/or lean mass, namely bone mineral deposits, muscle, tendon, and other connective tissue. Weight loss is a product of negative energy balance and can be unintentional or intentional. It can be a side effect of therapeutic drugs. The most common cause among the elderly is cancer. A thorough history is necessary, including a nutritional assessment, a calorie count, the patient's living conditions, and screening for neurocognitive dysfunction, along with appropriate labs and imaging. Nutritional supplementation should not be delayed while awaiting a diagnosis, in the interest of the patient's health. Treating the underlying cause, regular follow-up, and patient counseling are important components of weight loss management.
wikidoc
null
/index.php/Weight_loss_resident_survival_guide
324
# Weight loss resident survival guide Editor-In-Chief: C. Michael Gibson, M.S., M.D. ; Associate Editor(s)-in-Chief: Javaria Anwer M.D. Synonyms and keywords: weight loss management guide, unintentional weight loss management guide, loss of weight resident survival guide, pathologic weight loss resident survival guide. A loss of >5% of the usual body weight within 6 - 12 months represents pathologic weight loss. Weight loss may be intentional or unintentional. Unintentional weight loss is more common among the elderly. Common causes of weight loss among patients aged >65 years include malignancies (specifically digestive and non-hematologic), dementia, stroke, Parkinson's disease, and polymyalgia rheumatica. In comparison, endocrine disorders, infections, and psychiatric disorders make up the majority of the causes of weight loss among individuals aged <65 years. A thorough history from the patient or a caregiver provides useful insights into the cause. It is important to assess the availability of food and nutritional status first. A detailed physical exam, and observing an elderly patient having a meal in front of the physician, may provide clues to neurocognitive dysfunction. A CBC and CMP provide a general picture of the patient's condition. Follow-up is necessary to completely treat known causes and to identify unknown causes of weight loss. A multidisciplinary approach ensures the optimum management option. Nutritional supplements may be warranted in selected cases but should act as an adjunct to normal meals. Life-threatening causes include conditions that may result in death or permanent disability within 24 hours if left untreated. The life-threatening causes of weight loss include: Abbreviations: GI: Gastrointestinal system; GERD: Gastroesophageal reflux disease; BMI: Body Mass Index; HEENT: Head, Eyes, Ears, Nose, and Throat exam; IM: Infectious Mononucleosis; CBC: Complete blood count; ESR: Erythrocyte sedimentation rate; LDH: Lactate dehydrogenase; CMP: Comprehensive metabolic panel; CRP: C-reactive protein; TSH: Thyroid stimulating hormone; PTH: Parathyroid hormone; COPD: Chronic Obstructive Pulmonary Disease Shown below is an algorithm summarizing the diagnosis of weight loss.
wikidoc
null
/index.php/Weight_loss_resort
178
# Weight loss resort Weight loss resort is a term for various spas, resorts and retreats offering weight loss programs for adults. A pejorative term, perhaps more widely used, is fat farm. Weight loss resorts have existed in the United States in large numbers since the 1950s. Some of these achieved their touted weight loss through forced low-calorie diets and exercise, and were criticized as "quick fixes" that did not result in long-term weight loss. Many modern weight loss resorts, particularly luxury resorts, take an approach geared more towards encouraging a healthy lifestyle and eating behavior than just achieving short-term weight loss. Programs like yoga, meditation and deep-breathing exercises may be offered in addition to traditional exercise. Many offer the chance to consult with medical doctors, physical therapists, nutritionists, personal trainers, and even acupuncturists and life coaches. Other programs may teach attendees how to cook or select healthy meals. Some are more similar to medical centers and take a clinical approach, such as Clinique La Prairie and the Cooper Wellness Program in Dallas, Texas.
wikidoc
null
/index.php/Weight_training
2,777
# Weight training Weight training is a common type of strength training for developing the strength and size of skeletal muscles. It uses the force of gravity (in the form of weighted bars, dumbbells or weight stacks) to oppose the force generated by muscle through concentric or eccentric contraction. Weight training uses a variety of specialized equipment to target specific muscle groups and types of movement. Weight training differs from bodybuilding, weightlifting, powerlifting and strongman, which are sports rather than forms of exercise. Weight training, however, is often part of the athlete's training regimen. Strength training is an inclusive term for all types of exercise devoted towards increasing muscular strength and size (as opposed to muscular endurance, associated with aerobic exercise, or flexibility, associated with stretching exercise like yoga or pilates, though endurance and flexibility can improve as a byproduct of training). Weight training is one type of strength training and the most common, seen by all but specialists as synonymous with strength training. The difference between weight training and other types of strength training is how the opposition to muscular contraction is generated. Resistance training uses elastic or hydraulic forces to oppose muscular contraction and isometric exercise uses structural or intramuscular forces (e.g. doorways or the body's own muscles). Hippocrates explained the principle behind weight training when he wrote "that which is used develops, and that which is not used wastes away." Progressive resistance training dates back at least to Ancient Greece, when legend has it that wrestler Milo of Croton trained by carrying a newborn calf on his back every day until it was fully grown. Another Greek, the physician Galen, described strength training exercises using the halteres (an early form of dumbbell) in the 2nd century. Another early device was the Indian club, which came from ancient Persia where it was called the "meels." It subsequently became popular during the 19th century, and has recently made a comeback in the form of the clubbell. The 1960s saw the gradual introduction of exercise machines into the still-rare strength training gyms of the time. Weight training became increasingly popular in the 1980s, following the release of the bodybuilding movie Pumping Iron and the subsequent popularity of Arnold Schwarzenegger. Since the late 1990s increasing numbers of women have taken up weight training, influenced by programs like Body for Life; currently nearly one in five U.S. women engages in weight training on a regular basis. The basic principles of weight training are essentially identical to those of strength training, and involve a manipulation of the number of repetitions (reps), sets, tempo, exercise types and weight moved to cause desired increases in strength, endurance, size or shape. The specific combinations of reps, sets, exercises and weight depends upon the aims of the individual performing the exercise; sets with fewer reps can be performed with heavier weights, but have a reduced impact on endurance. In addition to the basic principles of strength training, a further consideration added by weight training is the equipment used. Types of equipment include barbells, dumbbells, pulleys and stacks in the form of weight machines or the body's own weight in the case of chin-ups and push-ups. 
Different types of weights will give different types of resistance, and often the same absolute weight can have different relative weights depending on the type of equipment used. For example, lifting 10 kilograms using a dumbbell requires significantly more force than moving 10 kilograms on a weight stack due to the use of pulleys. Weight training also requires the use of 'good form', performing the movements with the appropriate muscle group, and not transferring the weight to different body parts in order to move greater weight (called 'cheating'). Failure to use good form during a training set can result in injury or a failure to meet training goals - since the desired muscle group is not challenged sufficiently, the threshold of overload is never reached and the muscle does not gain in strength. Weight training can be a very effective form of strength training because exercises, weights, sets and reps can be precisely manipulated to challenge each individual muscle group in a way found to be the most effective for the individual. Other strength training exercises or equipment may lack the flexibility and precision that weights offer, and often cannot be safely taken to the point of momentary muscular failure. The benefits of weight training overall are comparable to most other types of strength training - increased muscle, tendon and ligament strength, bone density, flexibility, tone, metabolic rate and postural support. There are benefits and limitations to weight training as compared to other types of strength training. Resistance training involves the use of elastic or hydraulic resistance to contraction rather than gravity. Weight training provides the majority of the resistance at the initial joint angle of the movement, when the muscle must overcome the inertia of the weight's mass. After this point the overall resistance alters depending on the angle of the joint. In comparison, hydraulic resistance provides a fixed amount of resistance throughout the range of motion, depending on the speed of the movement. Elastic resistance provides the greatest resistance at the end of the motion, when the elastic element is stretched to the greatest extent. Isometric exercise provides a fixed amount of resistance based on the force output of the muscle. This strengthens the muscle at the specific joint angle at which the isometric exercise occurs, with some lesser gains in strength also occurring at proximal joint angles. In comparison, weight training strengthens the muscle throughout the range of motion the joint is trained in, causing an increase in physical strength from the initiating through to terminating joint angle. Although weight training is similar to bodybuilding, they have different objectives. Bodybuilders compete in bodybuilding competitions; they train to maximize their muscular size and develop extremely low levels of body fat. In contrast, most weight trainers train to improve their strength and anaerobic endurance while not giving special attention to reducing body fat below normal. Weight trainers tend to focus on compound exercises to build basic strength, whereas bodybuilders often use isolation exercises to visually separate their muscles, and to improve muscular symmetry. However, the bodybuilding community has been the source of many of weight training's principles, techniques, vocabulary, and customs.
Weight training does allow a tremendous flexibility in exercises and weights which can allow bodybuilders to target specific muscles and muscle groups, and attain specific goals. Weight training can be one of the safest forms of exercise, especially when the movements are slow, controlled, and carefully defined. However, as with any form of exercise, improper execution can result in injury. When the exercise becomes difficult towards the end of a set, there is a temptation to cheat, i.e. to use poor form to recruit other muscle groups to assist the effort. This may shift the effort to weaker muscles that cannot handle the weight. For example, the squat and the deadlift are used to exercise the largest muscles in the body—the leg and buttock muscles—so they require substantial weight. Beginners are tempted to round their back while performing these exercises. This causes the weaker lower back muscles to support much of the weight, which can result in serious lower back injuries. To avoid such problems, weight training exercises must be performed correctly. Hence the saying: "train, don't strain". An exercise should be halted if marked or sudden pain is felt, to prevent further injury. However, not all discomfort indicates injury. Weight training exercises are brief but very intense, and many people are unaccustomed to this level of effort. The expression "no pain, no gain" refers to working through the discomfort expected from such vigorous effort, rather than to willfully ignore extreme pain, which may indicate serious soft tissue injuries. Discomfort can arise from other factors. Individuals who perform large numbers of repetitions, sets and exercises for each muscle group may experience lactic acid build-up in their muscles. This is experienced as a burning sensation in the muscle, but it is perfectly harmless. These individuals may also experience a swelling sensation in their muscles from increased blood flow (the "pump"), which is also harmless. True muscle fatigue is experienced as a marked and uncontrollable loss of strength in a muscle, arising from the nervous system (motor unit) rather than from the muscle fibers themselves. Extreme neural fatigue can be experienced as temporary muscle failure. Some weight training programs actively seek temporary muscle failure; evidence to support this type of training is mixed at best. Irrespective of their program, however, most athletes engaged in high-intensity weight training will experience muscle failure from time to time. Beginners are advised to build up slowly to a weight training programme. Untrained individuals may have some muscles that are comparatively stronger than others. An injury can result if, in a particular exercise, the primary muscle is stronger than its stabilising muscles. Building up slowly allows muscles time to develop appropriate strengths relative to each other. This can also help to minimise delayed onset muscle soreness. A sudden start to an intense programme can cause significant muscular soreness. Unexercised muscles contain cross-linkages that are torn during intense exercise. Exercises where a barbell is held above the body, which can result in injury if the weight drops onto the lifter, are normally performed inside a squat cage or in the presence of one or more spotters, who can safely re-rack the barbell if the weight trainer is unable to do so. 
Anyone beginning an intensive physical training programme is typically advised to consult a physician, because of possible undetected heart or other conditions for which such activity is contraindicated. There have been mixed reviews regarding the use of weightlifting belts and other devices, such as lifting straps. Critics claim that they allow the lifter to use more weight than they should. In addition, the stabiliser muscles in the lower back and gripping muscles in the forearms receive less benefit from the exercises. Wrist straps (also known as cow ties or lifting straps) are sometimes used to assist in gripping very heavy weights. The straps wrap around the wrist and tuck around the bar or weight being lifted, transferring the mass of the weight to the wrist rather than the fingers. They are particularly useful for the deadlift. Some lifters avoid using wrist straps in order to develop their grip strength. Wrist straps can allow a lifter initially to use more weight than they might be able to handle safely for an entire set, but can place potentially harmful stress on the bones of the wrist. Wrist curls and reverse wrist curls can be performed as an alternative to straps to improve grip strength. These terms combine the prefix "iso" (meaning "same") with "tonic" (strength) and "plio" (more) with "metric" (distance). In "isotonic" exercises the force applied to the muscle does not change (while the length of the muscle decreases or increases) while in "plyometric" exercises the length of the muscle stretches and contracts rapidly to increase the power output of a muscle. Weight training is primarily an isotonic form of exercise, as the force produced by the muscle to push or pull weighted objects should not change (though in practice the force produced does decrease as muscles fatigue). Any object can be used for weight training, but dumbbells, barbells and other specialised equipment are normally used because they can be adjusted to specific weights and are easily gripped. Many exercises are not strictly isotonic because the force on the muscle varies as the joint moves through its range of motion. Movements can become easier or harder depending on the angle of muscular force relative to gravity - for example, a standard biceps curl becomes easier as the hand approaches the shoulder as more of the load is taken by the structure of the elbow. Certain machines such as the Nautilus involve special adaptations to keep resistance constant irrespective of the joint angle. Plyometric exercises exploit the stretch-shortening cycle of muscles to enhance the myotatic (stretch) reflex. This involves rapid alternation of lengthening and shortening of muscle fibers against a resistance. The resistance involved is often a weighted object such as a medicine ball, but can also be the body itself as in jumping exercises. Plyometrics is used to develop explosive speed, and focuses on maximal power instead of maximal strength by compressing the force of muscular contraction into as short a period as possible, and may be used to improve the effectiveness of a boxer's punch, or to increase the vertical jumping ability of a basketball player. An isolation exercise is one where the movement is restricted to one joint and one muscle group. For example, the leg extension is an isolation exercise for the quadriceps.
Specialized types of equipment are used to ensure that other muscle groups are only minimally involved—they just help the individual maintain a stable posture—and movement occurs only around the knee joint. Most isolation exercises involve machines rather than dumbbells and barbells (free weights), though free weights can be used when combined with special positions and joint bracing. Compound exercises work several muscle groups at once, and include movement around two or more joints. For example, in the leg press movement occurs around the hip, knee and ankle joints. This exercise is primarily used to develop the quadriceps, but it also involves the hamstrings, glutes and calves. Compound exercises are generally similar to the ways that people naturally push, pull and lift objects, whereas isolation exercises often feel a little unnatural. Compound exercises generally involve dumbbells and barbells (free weights), involving more muscles to stabilize the body and joints as well as move the weight. The type of exercise performed also depends on the individual's goals. Those who seek to increase their performance in sports would focus mostly on compound exercises, with isolation exercises being used to strengthen just those muscles that are holding the athlete back. Similarly, a powerlifter would focus on the specific compound exercises that are performed at powerlifting competitions. However, those who seek to improve the look of their body without necessarily maximising their strength gains (including bodybuilders) would put more of an emphasis on isolation exercises. Both types of athletes, however, generally make use of both compound and isolation exercises. Free weights are dumbbells, barbells, and kettlebells. Unlike weight machines, they do not constrain users to specific, fixed movements, and therefore require more effort from the individual's stabilizer muscles. It is often argued that free weight exercises are superior for precisely this reason. As weight machines can go some way toward preventing poor form, they are somewhat safer than free weights for novice trainees. Moreover, since users need not concentrate so much on maintaining good form, they can focus more on the effort they are putting into the exercise. However, most athletes, bodybuilders and serious fitness enthusiasts prefer to use compound free weight exercises to gain functional strength. Some free weight exercises can be performed while sitting or lying on a Swiss ball. This makes it more difficult to maintain good form, which helps to exercise the deep torso muscles that are important for maintaining posture. There are a number of weight machines that are commonly found in neighborhood gyms. The Smith machine is a barbell that is constrained to move only vertically upwards and downwards. The cable machine consists of two weight stacks separated by 2.5 metres, with cables running through adjustable pulleys (that can be fixed at any height) to various types of handles. There are also exercise-specific weight machines such as the leg press. A multigym includes a variety of exercise-specific mechanisms in one apparatus. Weight trainers commonly divide the body's individual muscles into ten major muscle groups. These do not include the hip, neck and forearm muscles, which are rarely trained in isolation. The most common exercises for these muscle groups are listed below. (Videos of these and other exercises are available at exrx.net and from the University of Wisconsin.) 
The sequence shown below is one possible way to order the exercises. The large muscles of the lower body are normally trained before the smaller muscles of the upper body, because these first exercises require more mental and physical energy. The core muscles of the torso are trained before the shoulder and arm muscles that assist them. Exercises often alternate between "pushing" and "pulling" movements to allow their specific supporting muscles time to recover. The stabilising muscles in the waist should be trained last.
wikidoc
null
/index.php/Weissella
30
# Weissella Weissella is a genus of Gram-positive bacteria, placed within the family of Leuconostocaceae. The morphology of weissellas varies from spherical or lenticular cells to irregular rods.
wikidoc
null
/index.php/Weldon_process
31
# Weldon process After reacting hydrochloric acid with manganese dioxide (and related oxides), the waste manganese(II) chloride solution is treated with lime, steam and oxygen, producing calcium manganite (IV):
wikidoc
null
/index.php/Welfare,_Choice_and_Solidarity_in_Transition
45
# Welfare, Choice and Solidarity in Transition In 2007, according to the World Bank's definitions, among these ten countries Albania was a lower-middle-income economy; the Czech Republic and Slovenia were high-income economies; and the others were upper-middle-income economies.
wikidoc
null
/index.php/Well-formed_outcome
430
# Well-formed outcome A Well-formed outcome is a term originating in neuro-linguistic programming for an outcome one wishes to achieve, that meets certain conditions designed to avoid (1) unintended costs or consequences and (2) resistance to achieving the goal resulting from internal conflicting feelings or thoughts about the outcome. Thus, a high quality outcome is more than a vague wish or goal. It is an objective or goal which is integrated with all aspects of one's life (morals, ethics, relationships, finances, health, body, etc.) and has a process of accomplishment that respects and supports the current desirable circumstances in one's life. A high quality outcome is (in a sense) consistent with forward-thinking action as well, or alternatively has been clearly and well enough defined to be prima facie free of common "muddy thinking". By applying all of the well-formedness conditions to a goal or outcome, and adjusting the outcome specifications accordingly in the process, you create a Well-formed outcome. In NLP, a general distinction is made between goals and outcomes. A goal is a lay term, and it often lacks the precision and cognitive clarity needed to be acted upon. For example: An outcome may be small scale (the purpose of asking a specific question or phrasing) or large scale (the meaning of one's life), but NLP teaches that in each case there are some basic conditions that indicate if the outcome is well formed, or whether it needs further clarification and precision to be useful. While the exact details may differ among schools or training providers, generally in NLP a well-formed outcome is one that ideally meets the following basic conditions: There is one well-formedness condition for a well-formed goal that is missing. As Joseph O'Connor and John Seymour describe in their book Introducing Neuro Linguistic Programming (Aquarius 1993), the 4th condition is: contextualize your goal (where, with whom and when would you like to reach your goal). Chapters 21-29 of "Mindworks: NLP Tools for Building a Better Life," by Anne Linden. 1997, 1998 by Berkley Publishing Group, New York. Paperback, ISBN 0-425-16624-4. Chapter 3 (pp. 31-45) and Appendix C of "Neuro-Linguistic Programming for Dummies," by Ready & Burton. 2004 John Wiley & Sons, Ltd. Trade paperback ISBN-10: 0-7645-7028-5, ISBN-13: 978-0-7645-7028-5. Section 2.2 (pp. 106-119) and section 3.7 (pp. 202-203) of "NLP at Work," by Knight. 1995, 1996, 1997 Nicholas Brealey Publishing, London. Trade paperback ISBN 1-85788-070-6.
wikidoc
null
/index.php/Wells_Score
328
# Pulmonary embolism assessment of clinical probability and risk scores The diagnosis of pulmonary embolism (PE) is based primarily on the clinical assessment of the pretest probability of PE combined with diagnostic modalities such as spiral CT, V/Q scan, use of the D-dimer, and lower extremity ultrasound. Clinical prediction rules for PE include: the Wells score, the Geneva score and the PE rule-out criteria (PERC). A clinical prediction rule is a type of medical research study in which the researchers try to identify the best combination of signs, symptoms, and other findings to predict the probability of a specific disease or outcome. It is noteworthy that the use of any clinical prediction rule is associated with a reduction in recurrent thromboembolism. In 2006, the revised Geneva score was introduced with a more standardized and simplified algorithm to help predict the probability that a patient has a pulmonary embolism. A one-point simplified scoring system replaced the previously weighted scores for each parameter. This was done to reduce the likelihood of error when the score is used in clinical settings. The simplified Geneva score does not lead to a decrease in diagnostic utility when compared to the previous Geneva scores. A comparison of the YEARS algorithm to the original Wells score found that the YEARS is more sensitive and less specific, with a very similar Youden's J index (gain in certainty). A cluster-randomized, crossover comparison of the YEARS algorithm to a strategy in which all patients underwent D-dimer testing with the threshold set at the age-adjusted level found similar clinical outcomes but less chest imaging with the YEARS algorithm. The clinical predictive scores of PE are important in the interpretation of the different diagnostic modalities used to diagnose the disease. The combination of the pre-test probability and the test results helps in ruling PE in or out.
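Since the comparisons above are framed in terms of sensitivity, specificity and Youden's J index, the short sketch below shows how these quantities are computed from a 2×2 table of a rule's results against a reference standard; the counts used are entirely hypothetical and are not drawn from the cited studies.

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity and Youden's J for a rule vs. a reference standard."""
    sensitivity = tp / (tp + fn)   # proportion of true PE cases the rule flags
    specificity = tn / (tn + fp)   # proportion of non-PE cases the rule clears
    youden_j = sensitivity + specificity - 1
    return sensitivity, specificity, youden_j

# Entirely hypothetical counts, for illustration only:
sens, spec, j = diagnostic_summary(tp=90, fp=300, fn=10, tn=600)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, Youden's J={j:.2f}")
# sensitivity=0.90, specificity=0.67, Youden's J=0.57
```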
wikidoc
null
/index.php/Wellsoft
457
# Wellsoft Wellsoft Corporation is an Electronic Medical Record software vendor based in Somerset, NJ. It has been incorporated for over eighteen years, and was rated #1 for EDIS (Emergency Department Information Systems) in the KLAS Report as of November 17, 2006. Some competitors include GE Healthcare, Siemens AG, and Cerner. Healthcare information technology is one of the fastest growing fields, which has greatly increased the number of EDIS (Emergency Department Information Systems) vendors. The Wellsoft Suite started out as the DOS-based "HomeEasy" Discharge Instructions. This version is no longer deployed to new sites, although it retains quite a few installations throughout the United States. The current version (Wellsoft v11) is Windows based, with optional web deployment via Citrix. It currently uses the Oracle database (which can be deployed on Windows or Linux). Many EDIS systems offer internet-based access, and there are two basic routes by which this can be accomplished. Web-based applications generally support thin client deployments through Java or HTML, while compiled applications typically use Citrix. Wellsoft currently uses Citrix. There are advantages and disadvantages to each method. Citrix deployments are capable of running both compiled and web-based applications; however, they require that a Citrix client be loaded onto the machines. While this is practical in an Emergency Department, it is less practical for providing access to patient information over the internet. HTML-based applications are almost universally portable. They can be used on any machine, with any operating system, including thin clients. While this portability is ideal for displaying web pages with information, the poor stability of Internet Explorer and the lack of user interface control make HTML-based applications a poor choice for serious applications. Java-based applications are very similar to compiled applications. While Java applications are more portable than compiled applications, they are also significantly slower. This type of application is well suited for remote doctors' offices, but ill suited to time-critical tasks such as those in the Emergency Department. Compiled applications have been optimized for a particular operating system. While this makes compiled applications faster, it also restricts them to running on a particular operating system (for example, Windows). In the Emergency Department this is typically not an issue, since remote access is usually less important than speed. System updates are deployed via the server. The workstations check the server for updates, and installation occurs automatically. This allows update distribution to occur without taking down the system. A Wellsoft update takes approximately five minutes; however, the update time varies dramatically from vendor to vendor, with updates taking anywhere from a minute to several hours. This is especially important in the Emergency Department, and in fact has been a barrier to entry for many other hospital-wide systems.
wikidoc
null
/index.php/Werdnig-Hoffman_disease
320
# Werdnig-Hoffman disease Werdnig-Hoffman disease (also known as "severe infantile spinal muscular atrophy" or "spinal muscular atrophy type I") is an autosomal recessive neuromuscular disease. It is the most severe form of spinal muscular atrophy, which is one of a number of hereditary neuromuscular diseases. It is evident before birth or within the first few months of life. There may be a reduction in fetal movement in the final months of pregnancy. Symptoms include floppiness of the limbs and trunk, feeble movements of the arms and legs, swallowing and feeding difficulties, and impaired breathing. Affected children never sit or stand unassisted and will require respiratory support before the age of 2 to survive. Treatment is symptomatic and supportive and includes treating pneumonia, curvature of the spine and respiratory infections, if present. Physical therapy, orthotic supports, and rehabilitation are also useful. For individuals who survive early childhood, assistive technology can be vital to providing access to work and entertainment. Genetic counseling is imperative. The patient's condition tends to deteriorate over time, depending on the severity of the symptoms. Children with Werdnig-Hoffman disease / SMA type 1 face a difficult battle. They are constantly at risk of respiratory infection and pneumonia. Feeding difficulties make it a real challenge for parents to give their children adequate nutrition, and supplemental feedings may be required; tubes placed through the nose or directly into the stomach may be necessary. Recurrent respiratory problems mean that mechanical support for breathing, usually initially in the form of BiPAP and later often tracheostomy and ventilation, is necessary for the baby to have any chance of long-term survival. Affected children never sit or stand and usually die before the age of 2 if the decision is made not to provide breathing support. However, some individuals have survived to become adults, in which case sexual function is unimpaired.
wikidoc
null
/index.php/Werner_Erhard_and_Associates
481
# Werner Erhard and Associates Werner Erhard and Associates, also known as WE&A or WEA, operated as a commercial entity from February 1981 until early 1991. It replaced Erhard Seminars Training, Inc. as the vehicle for marketing, selling and imparting the content of the est training, and offered what some people refer to as "personal-growth" programs. Initially WE&A marketed and staged the est training (in the form of the est seminars and workshops), but in 1984 it introduced a modified, shortened and less intensive introduction to Werner Erhard's teachings, dubbed "The Forum". In early 1991 WE&A faced notoriety from an impending 60 Minutes television exposé and from investigations by the Church of Scientology — see also Scientology and Werner Erhard. Erhard sold the assets of WE&A to a group of employees, who later formed Landmark Education. According to a "Site by Former Associates committed to providing accurate and reliable information about Werner Erhard", Erhard then retired. He left the United States in 1991. The collective experiences of over 1,400,000 people who have gone through either the est training or the Forum series, and the individual contributions that have resulted from those experiences, have had an impact on America and the world. Werner Erhard, without any formal credentials, degrees or advanced educational training, used his innate intelligence, intuition and insight to promote self-awareness in the United States. In the 1970s he was right for the times, in sync with the fledgling "me generation." By the time he retired the est training in 1984 and left the Forum in 1991, almost 750,000 people had undergone the courses that he had developed and marketed, which provided one of the best and quickest opportunities available to radically alter one's life. The importance of the est training and the Forum series, especially in the United States, lies in individual contributions that have enabled participants to enjoy a new level of spirituality, empowerment, perception and personal enlivenment. A scientific study, commissioned by Werner Erhard and Associates and conducted by a team of psychology professors, concluded that attending the Forum had minimal lasting effects, positive or negative, on participants' self-perception. The research won an American Psychological Association "National Psychological Consultants to Management Award" in 1989. The results of the study appeared in two articles in the Journal of Consulting and Clinical Psychology, in 1989 and 1990, and in a 1990 book titled "Evaluating a Large Group Awareness Training". See also Evaluating a Large Group Awareness Training. The diagram shows international income flows associated with Werner Erhard and Associates (appearing as "WEA"), as detailed for a United States federal tax-case hearing. Compare Margolis scheme.
wikidoc
null
/index.php/Wernicke
236
# Wernicke Carl Wernicke (born 15 May 1848 in Tarnowitz, Upper Silesia, then Prussia, now Tarnowskie Gory, Poland – died 15 June 1905 in Gräfenroda, Germany) was a German physician, anatomist, psychiatrist and neuropathologist. He earned his medical degree at the University of Breslau (1870). He died in Germany of injuries suffered in a bicycle accident. Shortly after Paul Broca published his findings on language deficits caused by damage to what is now referred to as Broca's area, Wernicke began pursuing his own research into the effects of brain disease on speech and language. Wernicke noticed that not all language deficits were the result of damage to Broca's area. Rather, he found that damage to the left posterior, superior temporal gyrus resulted in deficits in language comprehension. In recognition of his discovery, this region is now referred to as Wernicke's area, and the associated syndrome is known as Wernicke's aphasia. The Wernicke-Geschwind model of language that grew out of this work is now obsolete. Nevertheless it has been very useful in directing research and organizing research results, because it is based on the idea that language consists of two basic functions: comprehension, which is a sensory/perceptual function, and speaking, which is a motor function. However, the neural organization of language is more complex than the Wernicke-Geschwind model suggests, and the localization of speech in Broca's area is one of the weakest points of the model.
wikidoc
null
/index.php/Wernicke%27s_encephalopathy
404
# Wernicke's encephalopathy Wernicke encephalopathy is a severe syndrome characterised by ataxia, ophthalmoplegia, confusion and loss of short-term memory. It is linked to damage to the medial thalamic nuclei, mammillary bodies, periaqueductal and periventricular brainstem nuclei, and superior cerebellar vermis. It is the result of inadequate intake or absorption of thiamine (vitamin B1) coupled with continued carbohydrate ingestion. The most common cause is prolonged alcohol consumption sufficient to cause thiamine deficiency; alcoholics are therefore particularly at risk, but the syndrome may also occur due to other causes of malnutrition. Other causes of thiamine deficiency may be found in patients with carcinoma, chronic gastritis, or continuous vomiting. Wernicke's encephalopathy must also be differentiated from other diseases that cause personality changes, altered level of consciousness and hand tremors (asterixis). Wernicke encephalopathy has an acute onset and usually presents with nystagmus, gaze palsies, ophthalmoplegia (especially of the abducens nerve, CN VI), gait ataxia, confusion, and short-term memory loss. The classic triad for this disease is encephalopathy, ophthalmoplegia, and ataxia. Untreated, the condition may progress to Korsakoff's psychosis or coma. Despite its name, Wernicke's encephalopathy is not related to damage of the speech and language interpretation area named Wernicke's area (see Wernicke's aphasia). Instead the pathological changes in Wernicke's encephalopathy are concentrated in the mammillary bodies, cranial nerve nuclei III, IV, VI and VIII, as well as the thalamus, hypothalamus, periaqueductal grey, cerebellar vermis and the dorsal nucleus of the vagus nerve. The ataxia and ophthalmoparesis relate to lesions in the oculomotor (i.e. IIIrd, IVth, and VIth nerve) and vestibular (i.e. VIIIth nerve) nuclei. Treatment includes an intravenous (IV) or intramuscular (IM) injection of thiamine, given prior to the assessment of other central nervous system (CNS) diseases or other metabolic disturbances. Patients are usually dehydrated, so rehydration to restore blood volume should be started. If the condition is treated early, recovery may be rapid and complete. In individuals with sub-clinical thiamine deficiency, a large dose of glucose (either as sweet food or as a glucose infusion) can precipitate the onset of overt encephalopathy. Glucose loading results in metabolic disturbances in the brain that exacerbate the signs and symptoms of encephalopathy, and may trigger cellular processes leading to brain damage. If the patient is hypoglycemic (common in alcoholism), a thiamine injection should always precede the glucose infusion.
wikidoc
null
/index.php/Wernicke-Korsakoff_syndrome_(patient_information)
325
# Wernicke-Korsakoff syndrome (patient information) A lack of vitamin B1 is common in people with alcoholism. It is also common in persons whose bodies do not absorb food properly (malabsorption), such as sometimes occurs after obesity surgery. Korsakoff syndrome, or Korsakoff psychosis, tends to develop as Wernicke's symptoms go away. Wernicke's encephalopathy causes brain damage in lower parts of the brain called the thalamus and hypothalamus. Korsakoff psychosis results from damage to areas of the brain involved with memory. Call your health care provider or go to the emergency room if you have symptoms of Wernicke-Korsakoff syndrome, or if you have been diagnosed with the condition and your symptoms get worse or return. A brain MRI may show changes in the tissue of the brain, but if Wernicke-Korsakoff syndrome is suspected, treatment should start immediately. Usually a brain MRI exam is not needed. The goals of treatment are to control symptoms as much as possible and to prevent the disorder from getting worse. Some people may need to stay in the hospital early in the condition to help control symptoms. Stopping alcohol use can prevent additional loss of brain function and damage to nerves. Eating a well-balanced, nourishing diet can help, but it is not a substitute for stopping alcohol use. Not drinking alcohol or drinking in moderation and getting enough nutrition reduce the risk of developing Wernicke-Korsakoff syndrome. If a heavy drinker will not quit, thiamine supplements and a good diet may reduce the chance of getting this condition, but do not eliminate the risk. Without treatment, Wernicke-Korsakoff syndrome gets steadily worse and can be life threatening. With treatment, you can control symptoms (such as uncoordinated movement and vision difficulties), and slow or stop the disorder from getting worse. In people at risk, Wernicke's encephalopathy may be caused by carbohydrate loading or glucose infusion. Always supplement with thiamine before glucose infusion to prevent this.
wikidoc
null
/index.php/Wernicke-Korsakoff_syndrome_medical_therapy
78
# Wernicke-Korsakoff syndrome medical therapy Treatment consists of reversing the thiamine deficiency by giving supplemental thiamine, usually by starting with an initial intravenous or intramuscular dose followed by supplemental oral doses. It is important to start the thiamine treatment before giving any glucose as the encephalopathy will be worsened by the glucose. (Glucose administration promotes dehydrogenation of pyruvate, a biochemical reaction which consumes thiamine.) By the time amnesia and psychosis have occurred, complete recovery is unlikely.
wikidoc
null
/index.php/Wernicke-Korsakoff_syndrome_overview
117
# Wernicke-Korsakoff syndrome overview Wernicke-Korsakoff syndrome is a degenerative brain disorder caused by thiamine deficiency, usually secondary to alcohol abuse. Although Wernicke's and Korsakoff's may appear to be two different disorders, they are generally considered to be different stages of the same disorder, which is called Wernicke-Korsakoff syndrome (WKS). Wernicke's encephalopathy represents the acute phase of the disorder, and Korsakoff's amnesic syndrome represents the chronic phase. Not drinking alcohol, or drinking in moderation, and getting enough nutrition reduce the risk of developing Wernicke-Korsakoff syndrome. Thiamine supplements and a good diet may reduce the chance of getting this condition, but do not eliminate the risk.
wikidoc
null
/index.php/Wernicke-Korsakoff_syndrome_pathophysiology
284
# Wernicke-Korsakoff syndrome pathophysiology Wernicke-Korsakoff syndrome results from thiamine deficiency. It is generally agreed that Wernicke's encephalopathy results from severe acute deficiency of thiamine (vitamin B1), whilst Korsakoff's psychosis results from chronic deficiency of thiamine. The metabolically active form of thiamine is thiamine diphosphate, which plays a major role as a cofactor in glucose metabolism. The enzymes that are dependent on thiamine diphosphate are associated with the TCA cycle and catalyse the oxidation of pyruvate, alpha-ketoglutarate and branched-chain amino acids. Thus, anything that encourages glucose metabolism will exacerbate an existing clinical or sub-clinical thiamine deficiency. When Wernicke's encephalopathy accompanies Korsakoff's syndrome, the combination is called the Wernicke-Korsakoff syndrome. Korsakoff's is a continuum of Wernicke's encephalopathy, though a recognised episode of Wernicke's is not always obvious. Wernicke-Korsakoff syndrome, especially in alcoholics, is associated with atrophy of specific regions of the brain, especially the mammillary bodies. Other regions include the anterior region of the thalamus (accounting for amnesic symptoms), the medial dorsal thalamus, the basal forebrain, and the median and dorsal raphe nuclei. Korsakoff's involves neuronal loss, that is, damage to neurons; gliosis, which is a result of damage to the supporting cells of the central nervous system; and hemorrhage, or bleeding, in the mammillary bodies. Damage to the dorsomedial nucleus of the thalamus is also associated with this disorder. Frequently, for unknown reasons, patients with Korsakoff's psychosis exhibit marked degeneration of the mammillary bodies. The mechanism of this degeneration is unknown, but it supports current neurological theory that the mammillary bodies play a role in various memory circuits within the brain. An example of a memory circuit is the Papez circuit.
wikidoc
null
/index.php/West_London_Mental_Health_(NHS)_Trust
590
# West London Mental Health (NHS) Trust The headquarters is situated in the St. Bernard's Hospital building. HQ[›] This is on the south side of the Uxbridge Road between the towns of Southall and Hanwell, 8½ miles west of London, in the Southall district of the London Borough of Ealing, Greater London (Middlesex), England. Currently the Trust management is exploring the possibility of becoming an NHS Foundation Trust. This, it believes, will give it more flexibility to better meet the needs of the people who live in the locality that it serves, and to whom it will become directly accountable. The trust HQ occupies some of the original buildings once known as Hanwell Asylum. Here the first superintendent, Dr (later Sir) William Ellis, and his wife Mildred were so impressed with the moral therapy and humane treatment they saw offered to people suffering mental disorders at the Quaker asylum in York that they imposed these methods on the staff at Hanwell. This was the very first large-scale experiment of its kind. The second superintendent brought mechanical restraints back as a form of treatment. The third superintendent, Dr John Conolly, took the example further and, against stiff opposition backed up with much vitriol, did away with all mechanical restraints. To the surprise and disbelief of many he found, like the Ellises before him, that bedlam diminished, behaviour became less defensive, cooperation improved dramatically, and many patients recovered or improved greatly. This, added to his other pioneering work, such as developing proper diets and conditions for his patients and his battles to set up regular training lectures in mental health for doctors, led to him receiving worldwide recognition. Broadmoor high secure hospital: In order to end the isolation of the high secure services from the rest of the NHS, the Health Act 1999 was passed, allowing NHS Trusts to provide for these. sth[›] After a three-month consultation in the early part of the following year it was agreed that the high secure services based at Broadmoor and those provided by the Ealing, Hammersmith and Fulham Mental Health NHS Trust should be combined into one organisation. This created the existing West London Mental Health NHS Trust, which took over governance in 2001. ^ B: By special referral only. ^ ed: The extra posts are for Forensic Divisional Directors and Security Directors. ^ HQ: Technically speaking, it may be said that there is also a Trust HQ at Broadmoor, since a Board Director is based permanently on site. However, Board meetings are usually held at St Bernard's. ^ O-a: Due to open July 2007 – staff being recruited and trained. ^ O-b: Still treats some women patients; waiting for the Orchard Unit at the Ealing site to be commissioned before the remainder are transferred. ^ L: The present trust was established on 1 October 2000. The former Ealing, Hammersmith & Fulham Mental Health NHS Trust, which was created in 1999, was dissolved in 2001, leaving the WLMHT to take over its duties. This was under government order number 20002562. See: Statutory Instrument 1992 No. 2539. The West London Healthcare National Health Service Trust (Establishment) Order 1992. Accessed 2007-05-16. ^ Pad: Part opened, staff being recruited and trained. ^ sth: Some of the reasons why changes needed to come about can be gleaned from reading: Dell, Susanne; Robertson, Graham (1988) Sentenced to Hospital: Offenders in Broadmoor. Oxford University Press, Oxford; New York. ISBN 019712156X. Dewey Class 365/.942294 19. Summary: the authors describe the treatment of some Broadmoor patients, together with their psychiatric and criminal histories.
wikidoc
null
/index.php/West_Nile
229
# West Nile virus WNV is an enveloped positive-sense ssRNA virus with a genome of about 11,000 nucleotides that is considered a member of the Japanese encephalitis serocomplex. It belongs to the genus Flavivirus and the family Flaviviridae. Its RNA encodes structural and non-structural proteins. Although 7 lineages of WNV have been described, only lineages 1 and 2 are clinically significant. The virus infects many species, such as humans, horses, dogs, and cats, but its main natural reservoir is birds. WNV is a member of the Japanese encephalitis serocomplex and belongs to the genus Flavivirus, family Flaviviridae. Other species of this serocomplex include the St Louis encephalitis virus and the Japanese encephalitis virus. The WNV has icosahedral symmetry, with a smooth surface. It is an enveloped virus with a nucleocapsid core built of RNA and capsid proteins. Its genome is a single-stranded RNA of about 11,000 nucleotides. It contains a single open reading frame (ORF), a 5' untranslated region (UTR), and a 3' region that is also untranslated. The ORF encodes a single polyprotein that, following translation and processing, yields 3 structural proteins and 7 non-structural proteins. The WNV may be classified into 7 phylogenetic lineages. Of these, only lineages 1 and 2 have been identified as causative agents of disease in humans and are considered clinically significant.
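As a rough illustration of the genome organization just described (a single ORF flanked by untranslated regions, yielding 3 structural and 7 non-structural proteins), the sketch below lists the canonical flavivirus gene products. The specific protein names (C, prM, E, NS1-NS5) follow the standard flavivirus convention from general references rather than this article.

```python
# Rough sketch of the WNV genome organization described above: a single ORF
# flanked by 5' and 3' untranslated regions, encoding a polyprotein that is
# processed into 3 structural and 7 non-structural proteins. Protein names
# follow the standard flavivirus convention (assumption, not from this text).

GENOME = {
    "5'UTR": None,
    "ORF": {
        "structural": ["C", "prM", "E"],
        "non_structural": ["NS1", "NS2A", "NS2B", "NS3", "NS4A", "NS4B", "NS5"],
    },
    "3'UTR": None,
}

assert len(GENOME["ORF"]["structural"]) == 3
assert len(GENOME["ORF"]["non_structural"]) == 7
print("ORF products:", GENOME["ORF"]["structural"] + GENOME["ORF"]["non_structural"])
```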
wikidoc
null
/index.php/West_nile_virus_infection_overview
1,450
# West Nile virus infection overview ## Overview West Nile virus (WNV) is an enveloped positive-sense ssRNA virus that is considered a member of the Japanese encephalitis serocomplex. It belongs to the family Flaviviridae. It was first isolated in 1937 in Uganda and has since disseminated to become a worldwide infection. The natural reservoir of the virus is mainly birds, but it is usually transmitted by Culex mosquito bites to humans and other animals, and less commonly transmitted from human to human by blood transfusions or tissue transplantation. WNV infection is a spectrum of clinical disease that may have an asymptomatic course, a mild "West Nile fever" characterized by fever and constitutional symptoms, or a more severe "neuroinvasive disease" that includes severe neurological deficits. If left untreated, the infection usually self-resolves among immunocompetent patients, but may follow a complicated course among the elderly, immunosuppressed patients, or those with malignancy, advanced cardiovascular disease, or renal disease. Diagnosis is often made by serological testing, plaque reduction neutralization test (PRNT), reverse transcription polymerase chain reaction (RT-PCR), immunofluorescence, or immunohistochemistry. Management is generally aimed at supportive care only, but antiviral pharmacologic therapy has been frequently administered in neuroinvasive cases. Universal screening is not recommended, but screening donors of blood products and tissue transplants for WNV using nucleic acid testing (NAT) is mandatory. Prognosis is excellent in mild cases, but the disease may cause permanent neurological impairment or even death if neuroinvasive disease develops. ## Historical Perspective WNV was first isolated in 1937 in Uganda from a hospitalized patient who presented with isolated fever. Between 1950 and 1960, small villages in the Mediterranean basin had repeated outbreaks, especially in Israel and Egypt. These outbreaks allowed researchers to study the molecular and clinical features of the disease and further understand its mode of transmission and natural history. Several WNV outbreaks were recorded in the second half of the 20th century in Europe, the Middle East, the Far East, and Africa. It was not until 1999 that the first WNV outbreak was documented in the USA, making WNV a worldwide infection. Perhaps the most severe documented outbreak occurred in 2002 in the USA, recording the highest number of meningoencephalitis cases from a single WNV outbreak. The first description of person-to-person transmission was reported in 2002 among patients with blood transfusions and tissue transplantation. ## Pathophysiology The natural reservoir of WNV is birds, particularly species with high-level viremia. In contrast, viremia is relatively rare among infected humans, who are considered dead-end hosts of the virus. WNV is transmitted by bites of various species of mosquitoes. Following inoculation, replication of the virus occurs in the Langerhans epidermal dendritic cells. Among immunocompetent hosts, the replication process is immediately followed by activation of the immune system, including complement system pathways and humoral and adaptive immune responses that act simultaneously to clear the infection. On the other hand, immunocompromised patients may suffer CNS dissemination and fatal outcomes due to the failure to activate proper immunological pathways.
Finally, the role of genetics in WNV susceptibility is not fully understood, but mouse models and a few human experiments have described genetic mutations that may predispose individuals to worse clinical disease of WNV infection. ## Epidemiology & Demographics WNV is considered a worldwide infective agent. Since most cases are asymptomatic and self-limited, the true incidence and prevalence of West Nile fever are often underestimated. Between the years 1999 and 2013, a total of 39,557 cases were reported by the CDC in the USA alone. The 2002 outbreak in the USA had the highest recorded rate of neuroinvasive disease of any WNV outbreak. Nonetheless, only 1/140 to 1/256 cases of West Nile fever are complicated by encephalitis or meningitis. WNV infection occurs predominantly during the end of summer and the beginning of fall. Females are more likely to develop WNV infection. The prevalence of the disease is not affected by ethnicity or age, but elderly patients are more likely to experience a complicated clinical course. ## Risk Factors Certain factors may increase the risk of infection with WNV by a mosquito bite, such as warm temperatures, extensive outdoor exposure, homelessness, and absence of window screens. Occupational risk factors include in-field occupations, such as agriculture. Severe clinical disease is often associated with advanced age, immunosuppression, malignancy, diabetes mellitus, hypertension, and renal disease. An increased risk of death is observed among immunosuppressed patients and those presenting with altered level of consciousness. Certain conditions such as encephalitis, advanced cardiovascular disease, and hepatitis C virus infection may also carry an increased risk of death among patients infected with WNV. ## Screening Universal screening for WNV is not recommended. As blood and transplant-related transmissions of the virus have been reported, nucleic acid tests (NAT) may be used to screen for WNV among potential blood and solid organ donors. In blood donation, individual screening is not recommended either. Instead, a "minipool" nucleic acid testing program (MP NAT) is implemented. Positive pools warrant further investigation of individuals. Patients with positive NAT may not donate blood or solid organs for at least 120 days. Re-testing after 120 days is indicated. ## Differentiating West Nile Virus from Other Diseases West Nile fever must be differentiated from other diseases that cause fever, skin rash, myalgias, and back pain, such as other viral infections due to rhinovirus, enterovirus D68, coxsackievirus, influenza, and echovirus. Patients with severe WNV infection may present with meningitis, encephalitis, or flaccid paralysis. These presentations must be differentiated from other diseases that cause severe headache, altered mental status, seizures, and paralysis, such as herpes virus encephalitis, enterovirus encephalitis, bacterial encephalitis, metabolic encephalitis, poliomyelitis, and Guillain-Barre syndrome. ## Natural History, Complications, & Prognosis WNV is usually transmitted to humans by the Culex mosquito after feeding on infected birds with high-level viremia. Following an incubation period of 2-14 days, untreated patients can remain asymptomatic or present with West Nile fever or with life-threatening neuroinvasive disease. Common complications of WNV infection include neurological impairment. The prognosis of mild disease is excellent, whereas West Nile meningitis and encephalitis may leave residual neurologic deficits.
## History & Symptoms WNV infection is considered a clinical spectrum. Infection due to WNV may have any of 3 different clinical presentations: asymptomatic infection (~70-80%), a mild febrile syndrome termed West Nile fever (~20%), and neuroinvasive disease termed West Nile meningitis or encephalitis (<1%). Patients who are suspected to have WNV infection should be asked specifically about recent mosquito bites. ## Physical Examination On physical examination, patients with WNV infection may have no specific signs. Physical examination findings may range from an isolated fever to signs of severe neurological impairment, meningeal irritation, stupor, and coma. ## Lab Tests The front-line assay for laboratory diagnosis of WNV infection is the IgM assay. IgM and IgG ELISA tests can cross-react between flaviviruses; therefore, serum samples that are antibody-positive on initial screening should be evaluated by a more specific test. Currently the plaque reduction neutralization test (PRNT) is the recommended test for differentiating between flavivirus infections. Specimens submitted for WNV testing should also be tested by ELISA and PRNT against other arboviruses known to be active or present in the area or region where the patient traveled. Numerous procedures have been developed for detecting viable WNV, WNV antigen or WNV RNA in human diagnostic samples. These procedures vary in their sensitivity, specificity, and the time required to conduct the test. Among the most sensitive procedures for detecting WNV in samples are those using RT-PCR to detect WNV RNA in human CSF, serum, and other tissues. Confirmation of virus isolate identity can be accomplished by indirect immunofluorescence assay (IFA) using virus-specific monoclonal antibodies or by nucleic acid detection. Immunohistochemistry (IHC) using virus-specific MAbs on brain tissue has been very useful in identifying cases of WNV infection. ## Medical Therapy There is currently no specific antiviral pharmacologic therapy indicated for patients with WNV infection, but interferon-alpha-2b and ribavirin have been used. Patients with mild disease may be followed up as outpatients, whereas patients with severe disease require hospitalization and close monitoring. Current management of infected patients is based on supportive care aimed at symptom relief and prevention of complications. ## Primary Prevention Human vaccines are not available for WNV infection. In the absence of a vaccine, prevention of WNV disease depends on community-level mosquito control programs to reduce vector densities, personal protective measures to decrease exposure to infected mosquitoes, and screening of blood and organ donors. ## Future or Investigational Therapies Human vaccines against WNV are under development, and they have shown promising results in phase I and II trials. Ribavirin and interferon alfa-2b are currently being studied for the treatment of WNV CNS infections, as both drugs have demonstrated benefit in in vitro studies.
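The serologic work-up described above (an IgM ELISA screen, with antibody-positive samples confirmed by PRNT because ELISA cross-reacts among flaviviruses) can be summarized as simple decision logic. The sketch below is only a schematic of that sequence; the function and result labels are illustrative, not part of any laboratory standard.

```python
# Schematic of the serologic testing sequence described above:
# IgM ELISA screen first, then PRNT confirmation of antibody-positive samples,
# since ELISA can cross-react among flaviviruses. Labels are illustrative only.
from typing import Optional

def interpret_serology(igm_elisa_positive: bool,
                       prnt_confirms_wnv: Optional[bool] = None) -> str:
    if not igm_elisa_positive:
        return "ELISA screen negative: no serologic support for WNV infection"
    if prnt_confirms_wnv is None:
        return "ELISA screen positive: confirm with PRNT (flavivirus cross-reactivity possible)"
    if prnt_confirms_wnv:
        return "PRNT positive: WNV-specific neutralizing antibody confirmed"
    return "PRNT negative: consider cross-reaction with another flavivirus"

print(interpret_serology(True))                          # screen positive, PRNT pending
print(interpret_serology(True, prnt_confirms_wnv=True))  # confirmed WNV
```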
wikidoc
null
/index.php/West_syndrome
1,296
# West syndrome West syndrome is an uncommon to rare and serious form of epilepsy in infants. The triad of developmental regression, infantile spasms and a pattern of hypsarrhythmia on EEG is termed West syndrome. The syndrome is age-related, generally occurring between the third and the twelfth month and typically manifesting around the fifth month. There are various causes ("polyetiology"). The syndrome is often caused by an organic brain dysfunction whose origins may be prenatal, perinatal (caused during birth) or postnatal. West syndrome was named after the English doctor and surgeon William James West (1793-1848), who lived in Tonbridge. In 1841 he observed this type of epilepsy in his own son, who was approximately four months old at the time. He published his observations from a scientific perspective in an article in The Lancet. He named the seizures "Salaam Tics" at the time. It is still unknown which biochemical mechanisms lead to the occurrence of West syndrome. It is conjectured that it involves a malfunction of neurotransmitter function, or more precisely, a malfunction in the regulation of GABA transmission. Another possibility being researched is hyperproduction of corticotropin-releasing hormone (CRH). It is possible that more than one factor is involved. Both hypotheses are supported by the effect of certain medications used to treat West syndrome. If a cause presents itself, the syndrome is referred to as symptomatic West syndrome, as the attacks manifest as a symptom of another anomaly; several possible causes are under consideration. On average, West syndrome appears in 1 to 5 per 100 children with Down's syndrome as babies. Whereas this form of epilepsy is relatively difficult to treat in children who do not have the chromosomal differences involved in Down's syndrome, the syndrome often affects those who do far more mildly, and they often react better to medication. The German Down Syndrom InfoCenter noted in 2003 that what was normally a serious epilepsy was in such cases often a relatively benign one. EEG records for children with Down's syndrome are often more symmetrical, with fewer unusual findings. Although not all children can become entirely free from attacks with medication, children with Down's syndrome are less likely to go on to develop Lennox-Gastaut syndrome or other forms of epilepsy than those without additional hereditary material on the 21st chromosome. The reason why it is easier to treat children with Down's syndrome is not known. When a direct cause cannot be determined but the child has another neurological disorder, the case is referred to as cryptogenic West syndrome, in which an underlying cause is most likely but cannot be detected even with modern means. Sometimes multiple children within the same family develop West syndrome. In this case it is also referred to as cryptogenic, and genetic and sometimes hereditary influences play a role. There are known cases in which West syndrome appears in successive generations in boys; this has to do with X-chromosomal heredity. In 45 out of every 50 children affected, the spasms appear for the first time between the third and the twelfth month of age. In rarer cases, spasms may occur in the first two months or during the second to fourth year of age. It is not possible to make a generalised prognosis for development, due to the variability of causes mentioned above and the differing types of symptoms and etiology. Each case must be considered individually.
The prognosis for children with idiopathic West syndrome is mostly more positive than for those with the cryptogenic or symptomatic forms. Idiopathic cases are less likely to show signs of developmental problems before the attacks begin, the attacks can often be treated more easily and effectively, and there is a lower relapse rate. Children with this form of the syndrome are less likely to go on to develop other forms of epilepsy; around two in every five children develop at the same rate as healthy children. In other cases, however, treatment of West syndrome is relatively difficult and the results of therapy are often unsatisfactory; for children with symptomatic and cryptogenic West syndrome, the prognosis is generally not positive, especially when they prove resistant to therapy. Statistically, 5 out of every 100 children with West syndrome do not survive beyond five years of age, in some cases due to the cause of the syndrome, in others for reasons related to their medication. Fewer than half of all children can become entirely free from attacks with the help of medication. Statistics show that treatment produces a satisfactory result in around three out of ten cases, with the cognitive and motor development of only one in every 25 children proceeding more or less normally. A large proportion (up to 90%) of children suffer severe physical and cognitive impairments, even when treatment for the attacks is successful. This is not usually because of the epileptic fits, but rather because of the causes behind them (cerebral anomalies, their location, or their degree of severity). Severe, frequent attacks can (further) damage the brain. As many as 6 out of 10 children with West syndrome suffer from epilepsy later in life. Sometimes West syndrome turns into a focal or other generalised epilepsy. Around half of all children develop Lennox-Gastaut syndrome. Infantile spasms are often misdiagnosed as colic. The most useful test in diagnosing the seizures is EEG. MRI and CT scans can be done to rule out organic causes of West syndrome. Hypsarrhythmia, the pathognomonic EEG pattern of West syndrome, is typically characterized by a high-amplitude, arrhythmic and asynchronous pattern. Children with infantile spasms and hypsarrhythmic EEGs had marked abnormalities in coherence and spectral power as compared to normal children. The epileptic seizures which can be observed in infants with West syndrome fall into three categories, and the triad of attack types typically appears together; while the three types usually appear simultaneously, they can also occur independently of each other. Compared with other forms of epilepsy, West syndrome is difficult to treat. To raise the chance of successful treatment and keep down the risk of longer-lasting effects, it is very important that the condition is diagnosed as early as possible and that treatment begins straight away. However, there is no guarantee that therapy will work even in this case. Insufficient research has yet been carried out into whether the form of treatment has an effect upon the long-term prognosis. Based on what is known today, the prognosis depends mainly on the cause of the attacks and the length of time that hypsarrhythmia lasts. In general it can be said that the prognosis is worse when the patient does not react as well to therapy and the epileptic over-activity in the brain continues.
Treatment differs in each individual case and depends on the cause of the West syndrome (etiological classification) and the state of brain development at the time of the damage. Vigabatrin is known to be effective, especially in children with tuberous sclerosis, with few and benign side effects, but because some recent studies have shown visual field constriction (loss of peripheral vision), it is not yet approved in the United States. It is currently debated whether short-term use (6 months or less) of vigabatrin affects vision. Also, considering the effect of frequent seizures on day-to-day life and mental development, some parents prefer to take the risk of some vision loss. When these prove ineffective, other drugs may be used in conjunction or alone: topiramate (Topamax), lamotrigine (Lamictal), levetiracetam (Keppra) and zonisamide (Zonegran) are amongst the most widely used. The ketogenic diet has been tested and shown to be effective, with up to 70% of children having a 50% or greater reduction in seizures.
wikidoc
null
/index.php/Westermark%27s_sign
82
# Westermark sign In chest radiography, the Westermark sign is a focus of oligemia (vasoconstriction) seen distal to a pulmonary embolus (PE). While the chest x-ray is abnormal in the majority of PE cases, the Westermark sign is seen in only 2% of patients. The Westermark sign, like Hampton's hump (a wedge-shaped, pleural-based consolidation associated with pulmonary infarction), has a low sensitivity (11%) and high specificity (92%) for the diagnosis of pulmonary embolus.
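To put the quoted sensitivity and specificity in perspective, the short sketch below converts them into likelihood ratios using the standard formulas (LR+ = sensitivity / (1 - specificity), LR- = (1 - sensitivity) / specificity); this calculation is a generic illustration, not taken from the source above.

```python
# Convert the sensitivity (11%) and specificity (92%) quoted above for the
# Westermark sign into positive and negative likelihood ratios.
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple:
    lr_pos = sensitivity / (1.0 - specificity)   # how much a positive finding raises suspicion
    lr_neg = (1.0 - sensitivity) / specificity   # how much a negative finding lowers it
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.11, 0.92)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")  # LR+ ≈ 1.4, LR- ≈ 0.97
```

Both ratios are close to 1, which is consistent with the sign being too insensitive to rule out PE and only weakly supportive of the diagnosis when present.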
wikidoc
null
/index.php/Western_State_Hospital_(Washington_State)
57
# Western State Hospital (Washington State) Western State Hospital is a mental hospital on the former Fort Steilacoom in Lakewood, Washington. It is administered by Washington State Department of Social and Health Services (DSHS). It opened in 1871, predating statehood by almost twenty years, and is the second oldest state institution after the University of Washington.
wikidoc
null
/index.php/Weston_General_Hospital
787
# Weston General Hospital Weston General Hospital is an NHS district general hospital in the town of Weston-super-Mare, North Somerset, England, operated by Weston Area Health NHS Trust (WAHT). It has an Accident & Emergency department, an intensive care unit, an oncology and haematology day unit, and a day case unit. Weston General has 358 beds and 1,800 staff, and has the largest midwifery-led maternity unit in the country. The hospital also has a 12-bed private unit, The Waterside Suite, wholly owned by the hospital trust, with profits being re-invested into the main hospital. The Healthcare Commission, an independent body which promotes and drives quality healthcare in the United Kingdom, has inspected Weston General Hospital and published its findings. In the 2005/2006 period, the hospital was rated as weak for the quality of the healthcare it provided, on a four-point scale of weak, fair, good and excellent. This placed the hospital in the bottom-performing 9% of trusts in the country. On the same scale the hospital's use of resources was also rated weak, placing it in the bottom 37% of trusts in the country. In the 2006/2007 period, the hospital's quality of healthcare score was upgraded to fair, but its use of resources rating remained weak. The hospital, like others, has had problems with hospital-acquired infections such as MRSA and Clostridium difficile (C. diff). In 2003 the trust had the highest rate of MRSA infections in the country. In August 2007 the hospital was criticised in the local press following the death of a 75-year-old cancer patient from C. diff. In response, the hospital stated that it had reduced infection rates by 25% through 2007. Performance figures released by the trust in September 2007 showed that hospital-acquired infection rates had fallen further, with just one case of MRSA in August and 18 of C. diff, compared with more than 30 just a few months previously. These improvements are attributed to a new "bare below the elbow" initiative to ensure that staff clean their hands and wrists, plus regular steam cleaning of patient beds. On 7 July 2003, the BBC television programme Inside Out broadcast allegations from a whistleblower that senior management within the hospital were putting pressure on employees to manipulate waiting list statistics to make them look more favourable. An independent enquiry in 2004 concluded that this manipulation did take place. In 2006, one of the managers named by Inside Out lost a libel case against the BBC, which had alleged that she was involved in the falsification of waiting lists. Waiting lists are still a problem: in 2006 the hospital was one of only eight in the country that failed to reduce waiting times for treatment. Their Royal Highnesses the Duke and Duchess of York (later King George VI and Queen Elizabeth the Queen Mother) officially opened the Queen Alexandra Memorial Hospital on The Boulevard in 1928. Over the years, equipment was added and updated. Portable and temporary buildings were added to the hospital in an attempt to keep pace with the growing needs of the community. With the growth of the town of Weston, and in particular of the area around Worle, it became evident that the town needed a new hospital. Much debate took place, resulting in a new hospital being built and opened on 16 September 1986 on the edge of Uphill village. In January 2003 the hospital opened a new oncology and haematology day unit, the Jackson Barstow Wing, to treat patients from the surrounding area. The new unit meant that patients could receive treatments, including chemotherapy and blood transfusions, without having to travel to Bristol. Weston General Hospital opened a new paediatric unit, the Seashore Centre, in February 2007. The unit, which features paediatric outpatients and a 10-bed day ward, was needed because the only major children's facilities in the region are located at Bristol Children's Hospital and at Musgrove Park Hospital in Taunton. The hospital is served by a number of voluntary organisations, including an active League of Friends whose volunteers staff the hospital shop and raise money for projects within the hospital; Freewheelers Emergency Voluntary Service, who use motorcycles to provide emergency out-of-hours transport of blood, diagnostic specimens and drugs; and Sunshine Radio, a hospital radio station manned by volunteers. The hospital also works closely with nearby Weston Hospicecare, which provides palliative care for patients with life-threatening conditions such as cancer. The new children's centre was partly funded by an appeal, Weston Super Kids, backed by many in the town, including the Mayor, who made it her chosen charity for her year in office.
wikidoc
null
/index.php/Wet_wipe
130
# Wet wipe A wet wipe, also known as a wet nap or a moist towelette, is a small moistened piece of paper or cloth that often comes folded and individually wrapped in its own wrapper for convenience, much like a packet of sugar or a condom. Such towelettes are used for cleansing or disinfecting. Cleansing towelettes are generally moistened with scented water, while disinfecting towelettes are moistened with alcohol. They are often dispensed in restaurants, at service stations, along with airline meals, in doctors' offices, and in other similar places. They are often included as part of a standard sealed cutlery package. Wet wipes can also be bought in stores for private use. In South East Asia, wet wipes are often sold chilled from refrigerators for a refreshing effect.
wikidoc
null
/index.php/Wheat
1,581
# Wheat Wheat (Triticum spp.) is a grass that is cultivated worldwide. Globally, it is an important human food grain, ranking second in total production as a cereal crop behind maize, with rice third. Wheat grain is a staple food used to make flour for leavened, flat and steamed breads; cookies, cakes, pasta, noodles and couscous; and for fermentation to make beer, alcohol, vodka or biofuel. Wheat is planted to a limited extent as a forage crop for livestock, and the straw can be used as fodder for livestock or as a construction material for roofing thatch. Wheat originated in Southwest Asia in the area known as the Fertile Crescent. The genetic relationships between einkorn and emmer indicate that the most likely site of domestication is near Diyarbakır in Turkey. These wild wheats were domesticated as part of the origins of agriculture in the Fertile Crescent. Cultivation and repeated harvesting and sowing of the grains of wild grasses led to the domestication of wheat through selection of mutant forms with tough ears which remained intact during harvesting, larger grains, and a tendency for the spikelets to stay on the stalk until harvested. Because of the loss of seed dispersal mechanisms, domesticated wheats have limited capacity to propagate in the wild. The cultivation of wheat began to spread beyond the Fertile Crescent during the Neolithic period. By 5,000 years ago, wheat had reached Ethiopia, India, Ireland and Spain. A millennium later it reached China. Three thousand years ago agricultural cultivation with horse-drawn plows increased cereal grain production, as did the use of seed drills to replace broadcast sowing in the 18th century. Yields of wheat continued to increase as new land came under cultivation and with improved agricultural husbandry involving the use of fertilizers, threshing machines and reaping machines (the 'combine harvester'), tractor-drawn cultivators and planters, and better varieties (see green revolution and Norin 10 wheat). With population growth rates falling while yields continue to rise, the area devoted to wheat may now begin to decline for the first time in modern human history. As of 2007, however, wheat stocks have reached their lowest level since 1981, and 2006 was the first year in which the world consumed more wheat than it produced, a gap that is continuously widening as the requirement for wheat increases beyond production. The use of wheat as a biofuel will exacerbate the situation. Wheat genetics is more complicated than that of most other domesticated species. Some wheat species are diploid, with two sets of chromosomes, but many are stable polyploids, with four sets of chromosomes (tetraploid) or six (hexaploid). In traditional agricultural systems wheat is often grown as landraces, informal farmer-maintained populations that often maintain high levels of morphological diversity. Although landraces of wheat are no longer grown in Europe and North America, they continue to be important elsewhere. The origins of formal wheat breeding lie in the nineteenth century, when single-line varieties were created through selection of seed from a single plant noted to have desired properties. Modern wheat breeding developed in the first years of the twentieth century and was closely linked to the development of Mendelian genetics. The standard method of breeding inbred wheat cultivars is by crossing two lines using hand emasculation, then selfing or inbreeding the progeny.
Selections are identified (shown to have the genes responsible for the varietal differences) ten or more generations before release as a variety or cultivar. F1 hybrid wheat cultivars should not be confused with wheat cultivars derived from standard plant breeding. Heterosis or hybrid vigor (as in the familiar F1 hybrids of maize) occurs in common (hexaploid) wheat, but it is difficult to produce seed of hybrid cultivars on a commercial scale as is done with maize, because wheat flowers are complete and normally self-pollinate. Commercial hybrid wheat seed has been produced using chemical hybridizing agents, plant growth regulators that selectively interfere with pollen development, or naturally occurring cytoplasmic male sterility systems. Hybrid wheat has been a limited commercial success in Europe (particularly France), the USA and South Africa. The four wild species of wheat, along with the domesticated varieties einkorn, emmer and spelt, have hulls (in German, Spelzweizen). This more primitive morphology consists of toughened glumes that tightly enclose the grains, and (in domesticated wheats) a semi-brittle rachis that breaks easily on threshing. The result is that when threshed, the wheat ear breaks up into spikelets. To obtain the grain, further processing, such as milling or pounding, is needed to remove the hulls or husks. In contrast, in free-threshing (or naked) forms such as durum wheat and common wheat, the glumes are fragile and the rachis tough. On threshing, the chaff breaks up, releasing the grains. Hulled wheats are often stored as spikelets because the toughened glumes give good protection against pests of stored grain. There are many botanical classification systems used for wheat species, discussed in a separate article on Wheat taxonomy. The name of a wheat species from one information source may not be the name of a wheat species in another. Within a species, wheat cultivars are further classified by wheat breeders and farmers in terms of growing season, such as winter wheat vs. spring wheat; by gluten content, such as hard wheat (high protein content) vs. soft wheat (high starch content); or by grain color (red, white or amber). Harvested wheat grain that enters trade is classified according to grain properties (see below) for the purposes of the commodities market. Wheat buyers use the classifications to help determine which wheat to purchase, as each class has special uses. Wheat producers determine which classes of wheat are the most profitable to cultivate with this system. Wheat is widely cultivated as a cash crop because it produces a good yield per unit area, grows well in a temperate climate even with a moderately short growing season, and yields a versatile, high-quality flour that is widely used in baking. Most breads are made with wheat flour, including many breads named for the other grains they contain, such as most rye and oat breads. Many other popular foods are made from wheat flour as well, resulting in a large demand for the grain even in economies with a significant food surplus. In 2007 there was a dramatic rise in the price of wheat due to freezes and flooding in the northern hemisphere and a drought in Australia. Wheat futures in September 2007 for December and March delivery had risen above $9.00 a bushel, prices never seen before. There were complaints in Italy about the high price of pasta.
While winter wheat lies dormant during a winter freeze, wheat normally requires between 110 and 130 days between planting and harvest, depending upon climate, seed type, and soil conditions. Crop management decisions require knowledge of the stage of development of the crop. In particular, spring fertilizer applications, herbicides, fungicides and growth regulators are typically applied at specific stages of plant development. For example, current recommendations often indicate that the second application of nitrogen be done when the ear (not visible at this stage) is about 1 cm in size (Z31 on the Zadoks scale). Knowledge of stages also helps in identifying periods of higher risk in terms of climate. For example, the meiosis stage is extremely susceptible to low temperatures (under 4 °C) or high temperatures (over 25 °C). Farmers also benefit from knowing when the flag leaf (last leaf) appears, as this leaf accounts for about 75% of photosynthesis during the grain-filling period and as such should be protected from disease or insect attacks to ensure a good yield. Several systems exist to identify crop stages, with the Feekes and Zadoks scales being the most widely used. Each scale is a standard system which describes successive stages reached by the crop during the agricultural season. Estimates of the amount of wheat production lost owing to plant diseases vary between 10-25% in Missouri. A wide range of organisms infect wheat, of which the most important are viruses and fungi. Wheat is used as a food plant by the larvae of some Lepidoptera species, including The Flame, Rustic Shoulder-knot, Setaceous Hebrew Character and Turnip Moth. Hard wheats are harder to process and red wheats may need bleaching. Therefore, soft and white wheats usually command higher prices than hard and red wheats on the commodities market. Raw wheat berries can be powdered into flour; germinated and dried, creating malt; crushed and de-branned into cracked wheat; parboiled (or steamed), dried, crushed and de-branned into bulgur; or processed into semolina, pasta, or roux. They are a major ingredient in such foods as bread, breakfast cereals (e.g. Wheatena, Cream of Wheat), porridge, crackers, biscuits, pancakes, cakes, and gravy. 100 grams of hard red winter wheat contains about 12.6 grams of protein, 1.5 grams of total fat, 71 grams of carbohydrate (by difference), 12.2 grams of dietary fiber, and 3.2 mg of iron, or 17% of the amount required daily. 100 grams of hard red spring wheat contains about 15.4 grams of protein, 1.9 grams of total fat, 68 grams of carbohydrate (by difference), 12.2 grams of dietary fiber, and 3.6 mg of iron, or 20% of the amount required daily. The gluten protein found in wheat (and other Triticeae) is hard to digest and cannot be tolerated by people with celiac disease (an autoimmune disorder that affects roughly 1% of Indo-European populations).
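For quick comparison, the per-100 g figures quoted above for the two wheat classes can be laid out side by side; the sketch below simply restates the values from the text and computes the protein difference between the two classes.

```python
# Per-100 g nutrient values for the two wheat classes quoted above.
NUTRIENTS_PER_100G = {
    "hard red winter": {"protein_g": 12.6, "fat_g": 1.5, "carb_g": 71, "fiber_g": 12.2, "iron_mg": 3.2},
    "hard red spring": {"protein_g": 15.4, "fat_g": 1.9, "carb_g": 68, "fiber_g": 12.2, "iron_mg": 3.6},
}

# Protein difference between spring and winter wheat per 100 g.
delta = NUTRIENTS_PER_100G["hard red spring"]["protein_g"] - NUTRIENTS_PER_100G["hard red winter"]["protein_g"]
print(f"Hard red spring wheat has {delta:.1f} g more protein per 100 g than hard red winter wheat")  # 2.8 g
```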
wikidoc
null
/index.php/Wheat_allergy
1,693
# Wheat allergy Wheat allergy is a food allergy, but it can also be a respiratory or contact allergy resulting from occupational exposure. Like all allergies, wheat allergy involves IgE and a mast cell response. Typically the allergy is limited to the seed storage proteins of wheat; some reactions are restricted to wheat proteins, while others extend across many varieties of seeds and other plant tissues. Wheat allergy may be a misnomer, since there are many allergenic components in wheat, for example serine proteinase inhibitors, glutelins and prolamins, and different responses are often attributed to different proteins. The most severe response is exercise/aspirin-induced anaphylaxis, attributed to one omega gliadin that is a relative of the protein that causes coeliac disease. Other, more common symptoms include nausea, urticaria and atopy. There are four major classes of seed storage proteins: albumins, globulins, prolamins and glutelins. Within wheat, prolamins are called gliadins and glutelins are called glutenins. These two protein groups form the classic glutens. While gluten is a causative agent of coeliac disease (CD), coeliac disease can be contrasted with gluten allergy by the involvement of different immune cells and antibody types (see Comparative pathophysiology of gluten sensitivities), and because the list of allergens extends beyond the classic gluten category of proteins. Of the prolamins and the closely related glutelins, a recent study in Japan found that glutenins are the more frequent allergen; however, gliadins are associated with the most severe disease. A proteomics-based study found a γ-gliadin isoform gene. Wheat-dependent exercise-induced anaphylaxis (WDEIA) is primarily mediated by ω-5 gliadin, which is encoded by the Gli-B1 gene derived from the Aegilops speltoides B genome within wheat. At present many of the allergens of wheat have not been characterized; however, early studies found many to be in the albumin class. A recent study in Europe confirmed the increased presence of allergies to amylase/trypsin inhibitors (serpins) and lipid transfer protein (LTP), but less reactivity to the globulin fraction. The allergies tend to differ between populations (Italian, Japanese, Danish or Swiss), indicating a potential genetic component to these reactivities. Respiratory allergy is an occupational disease that develops in food service workers. Previous studies detected 40 allergens from wheat; some cross-reacted with rye proteins and a few cross-reacted with grass pollens. A later study showed that baker's allergy extends over a broad range of cereal grasses (wheat, durum wheat, triticale, cereal rye, barley, rye grass, oats, canary grass, rice, maize, sorghum and Johnson grass), though the greatest similarities were seen between wheat and rye, and that these allergies show cross-reactivity between seed proteins and pollen proteins, including a prominent cross-reactivity between common environmental rye pollen and wheat gluten. Proteins are chains of amino acid residues joined with the loss of water. When enzymes cut proteins into pieces, they add water back at the cut site; this enzymatic hydrolysis of proteins is called proteolysis. The initial products of this hydrolysis are polypeptides, and smaller products are simply called peptides; these products are called wheat protein hydrolysates. Hydrolysis can create allergens from wheat proteins that previously did not act as allergens, by exposing buried antigenic sites in the proteins. 
When proteins are cut into polypeptides, buried regions are exposed to the surface, and these buried regions may be antigenic. Such hydrolyzed wheat protein is used as an additive in foods and cosmetics. The peptides are often about 1 kDa in size (9 amino acid residues in length) and may increase the allergic response. These wheat polypeptides can cause immediate contact urticaria in susceptible people. Wheat allergies are not altogether different from other food allergies or respiratory allergies. However, two conditions, exercise/aspirin-induced anaphylaxis and urticaria, occur more frequently with wheat allergies. Common symptoms of a wheat allergy include eczema (atopic dermatitis), hives (urticaria), asthma, "hay fever" (allergic rhinitis), angioedema (tissue swelling due to fluid leakage from blood vessels), abdominal cramps, nausea, and vomiting. Rarer symptoms include anaphylactic shock, arthritis, bloated stomach, chest pains, depression or mood swings, diarrhea, dizziness, headache, joint and muscle aches and pains (which may be associated with progressive arthritis), palpitations, psoriasis, irritable bowel syndrome (IBS), swollen throat or tongue, tiredness and lethargy, and unexplained cough. Reactions may become more severe with repeated exposure. Wheat gliadins and potentially oat avenins are associated with another condition, known as wheat-dependent exercise-induced anaphylaxis (WDEIA), which is similar to baker's allergy in that both are mediated by IgE responses. In WDEIA, however, the ω-gliadins or a high-molecular-weight glutenin subunit, and similar proteins in other Triticeae genera, enter the blood stream during exercise, where they cause acute asthmatic or allergic reactions. One recent study of ω-gliadins demonstrated that these gliadins are more similar to the bulk of oat avenins than to α/β or γ gliadins, but so far oat avenins have not been linked to WDEIA. Wheat may specifically induce WDEIA and certain chronic urticaria because the anti-gliadin IgE detects ω5-gliadins expressed by most of the Gli-B1 alleles but shows almost no response to prolamins extracted from rye or from wheat/rye translocation lines. The Gli-B1 gene in wheat, Triticum aestivum, comes from one of its three progenitor species, Aegilops speltoides, indicating that the relevant mutations arose on the B genome of wheat or came from a small number of cultivated Triticeae species. Recent study of WDEIA shows that both aspirin and exercise increase the presence of gliadin in the blood stream, and the induction of chronic reactions may extend to NSAIDs, MSG, benzoate and other synthetic chemical food additives. Baker's allergy has an ω-gliadin component and a thioredoxin hB component. In addition, a gluten-extrinsic allergen has been identified as Aspergillus amylase, added to flour to improve its baking properties. Contact sensitivity, atopic dermatitis, eczema, and urticaria appear to be related phenomena; the cause is generally believed to be the hydrophobic prolamin components of certain Triticeae and Aveneae cultivars. In wheat, one of these proteins is ω-gliadin (the Gli-B1 gene product). A study of mothers and infants on an allergen-free diet demonstrated that these conditions can be avoided if the wheat-sensitive cohort in the population avoids wheat in the first year of life. As with exercise-induced anaphylaxis, aspirin (and also tartrazine, sodium benzoate, monosodium glutamate (MSG), sodium metabisulfite and tyramine) may be a sensitizing factor for reactivity. 
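The "~1 kDa, about 9 residues" size quoted above for hydrolysate peptides is consistent with the average mass of an amino acid residue in a peptide chain, roughly 110 Da. A minimal sketch of that arithmetic follows, with the 110 Da average taken as an assumption rather than a value from the article.

```python
# Rough peptide-size arithmetic (illustrative only; the 110 Da average
# residue mass is an assumption, not a value from the article).

AVG_RESIDUE_MASS_DA = 110.0  # approximate mean mass of one amino acid residue in a chain


def residues_for_mass(mass_da: float) -> float:
    """Approximate number of residues in a peptide of the given mass."""
    return mass_da / AVG_RESIDUE_MASS_DA


print(round(residues_for_mass(1000.0)))  # ~9 residues for a ~1 kDa peptide
```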
Studies of wheat-dependent exercise-induced anaphylaxis demonstrate that atopy and EIA can be triggered by the ingestion of wheat when aspirin, and probably other NSAIDs, allow the entry of wheat proteins into the blood, where IgE reacts with the allergens in the dermal tissues. Some individuals may be so sensitive that low-dose aspirin therapy can increase the risk for both atopy and WDEIA. Wheat allergies were also common in association with contact dermatitis. A primary cause was the donning agent used for latex gloves prior to the 1990s; however, most gloves now use a protein-free starch as the donning agent. There appears to be an association of autoimmune rheumatoid arthritis (ARA) both with GSE and with gluten allergies. ARA in GSE/CD may be secondary to tTG autoimmunity. In a recent study in Turkey, 8 of 20 ARA patients had wheat reactivities on RAST tests. When wheat and all other patient-specific RAST-positive foods were removed, half of the patients showed improved ARA by serological markers. In patients with wheat allergies, rye was effectively substituted. This may indicate that some proportion of RA in GSE/CD is due to the downstream effects of allergic responses. In addition, cross-reactive anti-beef-collagen antibodies (IgG) may explain some rheumatoid arthritis (RA) incidences. Migraines: in the late 1970s it was reported that people with migraines had reactions to food allergens; as with ARA, the most common reaction was to wheat (78%), followed by orange, eggs, tea, coffee, chocolate, milk, beef, corn, cane sugar, and yeast. When the 10 foods causing the most reactions were removed, migraines fell precipitously and hypertension declined. Some specific instances are attributed to wheat. Autism: parents of children with autism often ascribe the children's gastrointestinal symptoms to allergies to wheat and other foods. The published data on this approach are sparse, with the only double-blind study reporting negative results. Diagnosis of wheat allergy may deserve special consideration. Omega-5 gliadin, the most potent wheat allergen, cannot be detected in whole-wheat preparations; it must be extracted and partially digested (similar to how it degrades in the intestine) to reach full activity. Other studies show that digestion of wheat proteins to fragments of about 10 amino acids can increase the allergic response 10-fold. Certain allergy tests may not be suitable for detecting all wheat allergies, resulting in cryptic allergies. See Gluten-free diet. A diet for wheat allergy differs from gluten exclusion in that some types of allergens do not create species-cross-reactive responses; an individual may be able to consume barley and rye safely, although they will more than likely be allergic to other wheats such as spelt and Kamut. Wheat is often a cryptic contaminant of many foods; more obvious items are bread crumbs, maltodextrin, bran, cereal extract, couscous, cracker meal, enriched flour, gluten, high-gluten flour, high-protein flour, seitan, semolina wheat, vital gluten, wheat bran, wheat germ, wheat gluten, wheat malt, wheat starch and whole-wheat flour. Less obvious sources of wheat include gelatinized starch, hydrolyzed vegetable protein, modified food starch, modified starch, natural flavoring, soy sauce, soy bean paste, hoisin sauce, starch, vegetable gum (specifically beta-glucan) and vegetable starch. Oats free of Triticeae gluten (i.e., uncontaminated by wheat, rye or barley) may be a useful source of cereal fiber. Some wheat allergies allow the use of rye bread as a substitute. 
Wheat-free millet flour, buckwheat, flax seed meal, corn meal, quinoa flour, and chia seed flour can also be used as substitutes. Spelt and Kamut are grains closely related to common wheat and are not usually suitable substitutes for people with wheat allergy or coeliac disease. Rice flour is a commonly used alternative for those allergic to wheat. Many people with wheat allergies are also allergic to soy, milk and other food ingredients, and many alternative cereals and flours substitute soy and/or dairy products. Those with wheat/gluten sensitivity should therefore read labels carefully.
wikidoc
null
/index.php/Wheatgrass
1,132
# Wheatgrass Wheatgrass is a young plant of the genus Agropyron (especially Agropyron cristatum, a relative of wheat, although some wheatgrass products are made from Triticum aestivum, common wheat). Fresh leaf buds of this plant can be pressed into juice or dried to a powder, both providing chlorophyll, amino acids, minerals, vitamins, and enzymes. The unprocessed plant contains fiber, which promotes colon health. The consumption of wheatgrass in the Occident began in the 1930s with the attempts of Charles F. Schnabel to popularize the plant. Ann Wigmore continued to contribute to the popularization of wheatgrass in the 1940s. Believing that it contributed to the remission of her cancer, Wigmore wrote several books on the subject. The average dosage taken by consumers of wheatgrass is 3.5 grams (powder or tablets). Some also take a fresh-squeezed 30 ml shot once daily or, for more therapeutic benefit, a higher dose of up to 2–4 oz taken 1–3 times per day on an empty stomach before meals. For detoxification, some users may increase their intake to 3–4 times per day. Consumers with a poor diet may experience nausea at high dosages of wheatgrass. Wheatgrass grown indoors does not have as many nutrients as wheatgrass grown outdoors under natural conditions. Fresh-squeezed wheatgrass juice is especially nutrient-deficient because it is 95% water and only 5% dry matter, unlike the dehydrated forms.[citation needed] Outdoor wheatgrass is only available for a few days each year from plants grown in regions renowned for winter wheat, the "bread basket" regions of the US and Canada. Winter wheat requires more than 200 days of slow growth in cold temperatures to reach peak nutritional content. Even after that long a time, the plant is only 7 to 10 inches high, because it grew during the cold winter months in climates like the midwestern United States, which are natural to the plant. Compared to wheatgrass grown outdoors in the proper climate, the leaves of tray-grown wheatgrass are very thin and pale and contain much lower nutritional content.[citation needed] Much higher nutritional benefit comes from wheatgrass grown under natural conditions and harvested at the one time of year when the nutritional value reaches its peak.[citation needed] Most people who seek such high-nutrient wheatgrass use dehydrated powders and tablets from reputable companies that grow the wheatgrass organically under natural conditions in an ideal climate such as the midwest of the United States and Canada.[citation needed] It is also considered important that the wheatgrass be harvested before the "jointing stage", which usually occurs over only a few days each year for winter wheat grown in breadbasket areas of the United States and Canada. Proponents of wheatgrass use claim that regular ingestion of the plant can give more energy, alkalize the body, improve the digestive system, prevent cancer, diabetes and heart disease, cure constipation, detoxify heavy metals from the bloodstream, cleanse the liver, prevent hair loss, help make menopause more manageable, and aid general well-being. "The claims [of wheatgrass proponents] include prevention of cancer, prevention of heart disease, prevention of diabetes, chelation or detoxification of heavy metals, cleansing, liver cleansing and prevention of hair loss and none of these claims have actually been substantiated in the scientific literature." 
~ Dr Samir Samman. One of the most popular claims about wheatgrass, and one that is frequently made by both supporters and retailers, is that one serving of wheatgrass is as nutritionally valuable as a kilogram of green vegetables. This claim most likely originates from a statement commonly attributed to the "father of wheatgrass", Charles F. Schnabel, who is alleged to have said that "Fifteen pounds of wheatgrass is equivalent to 350 pounds of the choicest vegetables". Although the statement seems exaggerated, it was most probably coined because Schnabel was trying to express the plant's then-unknown and seemingly miraculous health attributes. Schnabel's research was with wheatgrass grown outdoors in Kansas. Schnabel's wheatgrass grew slowly through the cold of winter and was harvested at a very specific time in the early spring, which farmers refer to as the "jointing stage". It was then dehydrated and made into powders and tablets for human consumption. Schnabel's wheatgrass required 200 days of slow growth through the winter and early spring in Kansas to build those high nutritional levels. When wheatgrass is allowed to develop normally in its natural climate, a dense root structure combines with more than 200 days of sunlight to produce a plant with extremely high nutritional values. To use Schnabel's research to promote wheatgrass grown for ten days in a hothouse is an obviously invalid comparison. Wheatgrass grown quickly and unnaturally in trays for ten days under artificial conditions contains considerably less nutritional content than wheatgrass grown outdoors in a climate like the midwestern United States and Canada and harvested at the once-per-year jointing stage. The nutritionally dense wheatgrass of the kind grown by Schnabel is still available in tablet and powder form through natural food stores and online in the United States and most other countries. Seven tablets (3.5 grams) or a teaspoon of wheatgrass powder grown organically through the winter and harvested before the jointing stage is equal in nutrition to a USDA serving of spinach or other dark green vegetables. Not all dehydrated wheatgrass is grown in accordance with Schnabel's research. The chlorophyll molecule is structurally similar to hemoglobin, leading some to believe that wheatgrass helps blood flow, digestion and general detoxification of the body. Although no research exists that directly connects chlorophyll with blood building, many nutrients associated with dark green leafy vegetables have been shown to be important for healthy blood.[citation needed] It has been shown by comparative analysis that dehydrated wheatgrass powder, if grown under natural conditions, has a much higher nutritional value than so-called "fresh juice" grown under unnatural hothouse conditions. Ann Wigmore encouraged her students to dehydrate raw foods at low temperatures to preserve their nutrients. She inappropriately used the scientific findings of Schnabel on dehydrated wheatgrass to support growing wheatgrass rapidly under artificial conditions. In The Simpsons episode "When You Dish upon a Star", Homer invents a cocktail made of wheatgrass and vodka called a "lawnmower". Wheatgrass also appears in the episode "Make Room for Lisa", where Lisa is given a shot of wheatgrass juice by the owner of the New Age store, who interprets Lisa's disgust at the taste as a sign of working taste buds. Wheatgrass is referenced in Sex and the City when a character that Samantha is dating has 'funky tasting spunk.' 
Wheatgrass was referenced as a good way to change this.[citation needed]
wikidoc
null
/index.php/Wheaton_Franciscan_Healthcare
72
# Wheaton Franciscan Healthcare Wheaton Franciscan Services, Inc., known as Wheaton Franciscan Healthcare, is a not-for-profit health care system and housing organization operated by the Wheaton Franciscan Sisters of Wheaton, Illinois. It operates more than 100 health and shelter service organizations in Colorado, Illinois, Iowa, and Wisconsin. The system has 15 hospitals, four long-term care facilities, and 70 clinics. Wheaton employs 22,904 people including 3,530 physicians.
wikidoc
null
/index.php/Wheelbench
294
# Wheelbench A wheelbench is a wheeled mobility device in which the user lies down. The device is propelled manually: the user pushes the wheels with their hands in the same manner as propelling a wheelchair, or the wheelbench can be moved by a second person pulling or pushing it by the handles. A wheelbench is constructed in a similar way to a wheelchair, except that it has a stretcher on top instead of a seat. A wheelbench is collapsible, just like a wheelchair. Wheelbenches are used by people for whom both sitting and walking are difficult or impossible. The term sitting disability describes a condition in which sitting is difficult, painful and perhaps medically injurious, and which may be due to illness, injury, or other disability. A notable symptom of sitting disability is severe back pain. While mobility impairment is widely recognised, sitting disability is rarely mentioned in research or legal documents; hence, wheelbenches are not as well known to society as wheelchairs. A wheelbench has some resemblance to a hospital gurney or wheelchair. The difference is that the gurney is primarily made to move patients around in a hospital and is less comfortable for long distances or outdoor use; a wheelbench has bigger wheels, just like a wheelchair. In recent decades it has been a political objective of the Western world to ensure "full equality and active participation" for persons with disabilities. Volunteer organizations that represent people with back pain have worked hard to gain equal access for people with sitting disability by integrating Universal design into society. Public buildings are asked to be made accessible, with room for wheelbenches in elevators, doorways and hallways. Large chrome wheels and flashy paint jobs set some wheelbenches apart.
wikidoc
null
/index.php/Wheeze_differential_diagnosis
36
# Wheeze differential diagnosis For the differential diagnosis of wheeze and cough, click here. For the differential diagnosis of wheeze and fever, click here. For the differential diagnosis of wheeze and slurred speech, click here.
wikidoc
null