Cortical blindness
Cortical blindness is the total or partial loss of vision in a normal-appearing eye caused by damage to the brain's occipital cortex. Cortical blindness can be acquired or congenital, and may also be transient in certain instances. Acquired cortical blindness is most often caused by loss of blood flow to the occipital cortex from either unilateral or bilateral posterior cerebral artery blockage (ischemic stroke) and by cardiac surgery. In most cases, the complete loss of vision is not permanent and the patient may recover some of their vision (cortical visual impairment). Congenital cortical blindness is most often caused by perinatal ischemic stroke, encephalitis, and meningitis. Rarely, a patient with acquired cortical blindness may have little or no insight that they have lost vision, a phenomenon known as Anton–Babinski syndrome. Cortical blindness and cortical visual impairment (CVI), which refers to the partial loss of vision caused by cortical damage, are both classified as subsets of neurological visual impairment (NVI). NVI and its three subtypes—cortical blindness, cortical visual impairment, and delayed visual maturation—must be distinguished from ocular visual impairment in terms of their different causes and structural foci, the brain and the eye respectively. One diagnostic marker of this distinction is that the pupils of individuals with cortical blindness will respond to light whereas those of individuals with ocular visual impairment will not. Symptoms The most common symptoms of acquired and transient cortical blindness include: A complete loss of visual sensation and of vision Preservation/sparing of the abilities to perceive light and/or moving, but not static objects (Riddoch syndrome) A lack of visual fixation and tracking Denial of visual loss (Anton–Babinski syndrome) Visual hallucinations Macular sparing, in which vision in the fovea is spared from the blindness. Causes The most common cause of cortical blindness is ischemia (oxygen deprivation) to the occipital lobes caused by blockage to one or both of the posterior cerebral arteries. However, other conditions have also been known to cause acquired and transient cortical blindness, including: Congenital abnormalities of the occipital lobe Head trauma to the occipital lobe of the brain Bilateral lesions of the primary visual cortex Infection Creutzfeldt–Jakob disease (CJD), in association with a rapid onset of dementia Rarely, dissociative identity disorder (DID) Side effect of some anti-epilepsy drugs (AEDs) Hyperammonemia Eclampsia and, rarely, pre-eclampsia. The most common causes of congenital cortical blindness are: Traumatic brain injury (TBI) to the occipital lobe of the brain Congenital abnormalities of the occipital lobe Perinatal ischemia Encephalitis Meningitis Diagnosis A patient with cortical blindness has no vision but the response of his/her pupil to light is intact (as the reflex does not involve the cortex). Therefore, one diagnostic test for cortical blindness is to first objectively verify that the optic nerves and the non-cortical functions of the eyes are functioning normally. This involves confirming that the patient can distinguish light/dark, and that his/her pupils dilate and contract with light exposure. Then, the patient is asked to describe something he/she would be able to recognize with normal vision. For example, the patient would be asked the following: "How many fingers am I holding up?" "What does that sign (on a custodian's closet, a restroom door, an exit sign) say?" 
"What kind of vending machine (with a vivid picture of a well-known brand name on it) is that?"Patients with cortical blindness will not be able to identify the item being questioned about at all or will not be able to provide any details other than color or perhaps general shape. This indicates that the lack of vision is neurological rather than ocular. It specifically indicates that the occipital cortex is unable to correctly process and interpret the intact input coming from the retinas. Fundoscopy should be normal in cases of cortical blindness. Cortical blindness can be associated with visual hallucinations, denial of visual loss (Anton–Babinski syndrome), and the ability to perceive moving but not static objects. (Riddoch syndrome). Outcome The prognosis of a patient with acquired cortical blindness depends largely on the original cause of the blindness. For instance, patients with bilateral occipital lesions have a much lower chance of recovering vision than patients who suffered a transient ischemic attack or women who experienced complications associated with eclampsia. In patients with acquired cortical blindness, a permanent complete loss of vision is rare. The development of cortical blindness into the milder cortical visual impairment is a more likely outcome. Furthermore, some patients regain vision completely, as is the case with transient cortical blindness associated with eclampsia and the side effects of certain anti-epilepsy drugs. Recent research by Krystel R. Huxlin and others on the relearning of complex visual motion following V1 damage has offered potentially promising treatments for individuals with acquired cortical blindness. These treatments focus on retraining and retuning certain intact pathways of the visual cortex which are more or less preserved in individuals who sustained damage to V1. Huxlin and others found that specific training focused on utilizing the "blind field" of individuals who had sustained V1 damage improved the patients ability to perceive simple and complex visual motion. This sort of relearning therapy may provide a good workaround for patients with acquired cortical blindness in order to better make sense of the visual environment. See also Blindsight References Further reading Books Vighetto, A., & Krolak-Salmon, P. (2007). Cortical blindness. New York, NY: Cambridge University Press. Papers Balliet, R., Blood, K. M., & Bach-y-Rita, P. (1985). Visual field rehabilitation in the cortically blind? : Journal of Neurology, Neurosurgery & Psychiatry Vol 48(11) Nov 1985, 1113-1124. Trevethan, C. T., & Sahraie, A. (2003). Spatial and temporal processing in a subject with cortical blindness following occipital surgery: Neuropsychologia Vol 41(10) 2003, 1296-1306. == External links ==
Primary hyperparathyroidism
Primary hyperparathyroidism (or PHPT) is a medical condition where the parathyroid gland (or a benign tumor within it) produces excess amounts of parathyroid hormone (PTH). The symptoms of the condition relate to the resulting elevated serum calcium (hypercalcemia), which can cause digestive symptoms, kidney stones, psychiatric abnormalities, and bone disease. The diagnosis is initially made on blood tests; an elevated level of calcium together with a raised (or inappropriately high) level of parathyroid hormone is typically found. To identify the source of the excessive hormone secretion, medical imaging may be performed. Parathyroidectomy, the surgical removal of one or more parathyroid glands, may be required to control symptoms. Signs and symptoms The signs and symptoms of primary hyperparathyroidism are those of hypercalcemia. They are classically summarized by "stones, bones, abdominal groans, thrones and psychiatric overtones". "Stones" refers to kidney stones, nephrocalcinosis, and diabetes insipidus (polyuria and polydipsia). These can ultimately lead to kidney failure. "Bones" refers to bone-related complications. The classic bone disease in hyperparathyroidism is osteitis fibrosa cystica, which results in pain and sometimes pathological fractures. Other bone diseases associated with hyperparathyroidism are osteoporosis, osteomalacia, and arthritis. "Abdominal groans" refers to gastrointestinal symptoms of constipation, indigestion, nausea and vomiting. Hypercalcemia can lead to peptic ulcers and acute pancreatitis. The peptic ulcers can be an effect of increased gastric acid secretion caused by hypercalcemia. "Thrones" refers to polyuria and constipation. "Psychiatric overtones" refers to effects on the central nervous system. Symptoms include lethargy, fatigue, depression, memory loss, psychosis, ataxia, delirium, and coma. Left ventricular hypertrophy may also be seen. Other signs include proximal muscle weakness, itching, and band keratopathy of the eyes. When subjected to formal research, symptoms of depression, pain, and gastric dysfunction seem to correlate with mild cases of hypercalcemia. Causes The most common cause of primary hyperparathyroidism is a sporadic, single parathyroid adenoma resulting from a clonal mutation (~97%). Less common are parathyroid hyperplasia (~2.5%), parathyroid carcinoma (malignant tumor), and adenomas in more than one gland (together ~0.5%). Primary hyperparathyroidism is also a feature of several familial endocrine disorders: Multiple endocrine neoplasia type 1 and type 2A (MEN type 1 and MEN type 2A), and familial hyperparathyroidism. In all cases, the disease is idiopathic, but is thought to involve inactivation of tumor suppressor genes (the Menin gene in MEN1) or gain-of-function mutations (the RET proto-oncogene in MEN 2A). Recently, it was demonstrated that liquidators of the Chernobyl power plant face a substantial risk of primary hyperparathyroidism, possibly caused by radioactive strontium isotopes. Diagnosis The diagnosis of primary hyperparathyroidism is made by blood tests. Serum calcium levels are usually elevated, and the parathyroid hormone level is abnormally high compared with an expected low level in response to the high calcium. 
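The logic of this comparison, an elevated calcium with a parathyroid hormone level that is not suppressed, can be expressed as a simple rule. The following Python fragment is a minimal sketch for illustration only: the reference ranges and cutoffs in it are assumed placeholder values, not clinical thresholds, and it is not a diagnostic tool.

# Illustrative sketch only: the cutoffs below are hypothetical placeholders,
# not clinical reference ranges; real diagnosis requires repeated measurements,
# albumin correction, and specialist interpretation.
def classify_pth_calcium(calcium_mg_dl, pth_pg_ml,
                         calcium_upper=10.5, pth_suppressed=20.0):
    """Return the biochemical pattern suggested by a calcium/PTH pair."""
    high_calcium = calcium_mg_dl > calcium_upper
    suppressed_pth = pth_pg_ml < pth_suppressed
    if high_calcium and not suppressed_pth:
        # Calcium is high but PTH is not suppressed: the hormone level is
        # "inappropriately" high, the pattern described above for primary
        # hyperparathyroidism.
        return "consistent with primary hyperparathyroidism"
    if high_calcium and suppressed_pth:
        # High calcium with an appropriately suppressed PTH points away from
        # the parathyroid glands (e.g. malignancy-related hypercalcemia).
        return "PTH-independent hypercalcemia"
    if not high_calcium and pth_pg_ml > 65.0:  # hypothetical PTH upper limit
        return "possible secondary or normocalcemic hyperparathyroidism"
    return "no hyperparathyroid pattern"

print(classify_pth_calcium(11.2, 95.0))  # high calcium, non-suppressed PTH
print(classify_pth_calcium(11.2, 8.0))   # high calcium, suppressed PTH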
A relatively elevated parathyroid hormone has been estimated to have a sensitivity of 60–80% and a specificity of approximately 90% for primary hyperparathyroidism. A more powerful variant of comparing the balance between calcium and parathyroid hormone is to perform a 3-hour calcium infusion. After infusion, a parathyroid hormone level above a cutoff of 14 ng/L has a sensitivity of 100% and a specificity of 93% in detecting primary hyperparathyroidism, with a confidence interval of 80% to 100%. Urinary cAMP is occasionally measured; it is generally elevated due to activation of Gs proteins when PTH binds to its receptor. Biochemical confirmation of primary hyperparathyroidism is followed by investigations to localize the culprit lesion. Primary hyperparathyroidism is most commonly due to a solitary parathyroid adenoma. Less commonly it may be due to double parathyroid adenomas or parathyroid hyperplasia. A Tc-99m sestamibi scan of the head, neck and upper thorax is the most commonly used test for localizing parathyroid adenomas, with a sensitivity and specificity of 70–80%. Sensitivity falls to 30% in cases of double or multiple parathyroid adenomas or parathyroid hyperplasia. Ultrasonography is also a useful test in localizing suspicious parathyroid lesions. Normocalcemic Primary Hyperparathyroidism Normocalcemic PHPT was first recognized in 2009 by an international panel of experts. By definition these patients have normal serum calcium (though usually in the upper range) and are typically found to have elevated PTH during workup for osteoporosis. In order to diagnose normocalcemic PHPT, ionized calcium levels should be normal, and all causes of secondary hyperparathyroidism (such as vitamin D deficiency and chronic kidney disease) should be ruled out. Treatment Treatment is usually surgical removal of the gland(s) containing adenomas, but medication may also be required. Surgery The surgical removal of one or more of the parathyroid glands is known as a parathyroidectomy; this operation was first performed in 1925. The symptoms of the disease, listed above, are indications for surgery. Surgery reduces all-cause mortality as well as resolving symptoms. However, cardiovascular mortality is not significantly reduced. The 2002 NIH Workshop on Asymptomatic Primary Hyperparathyroidism developed criteria for surgical intervention. The criteria were revised at the Third International Workshop on the Management of Asymptomatic Primary Hyperparathyroidism. These criteria were chosen on the basis of clinical experience and observational and clinical trial data as to which patients are more likely to have end-organ effects of primary hyperparathyroidism (nephrolithiasis, skeletal involvement), disease progression if surgery is deferred, and the most benefit from surgery. The panel emphasized the need for parathyroidectomy to be performed by surgeons who are highly experienced and skilled in the operation. 
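The workshop criteria described in the next paragraph amount to an "any one of the following" rule. The sketch below restates that rule in code purely as an illustration; the function name, parameter names, and structure are assumptions made here, not a validated clinical decision tool.

# Sketch of the any-one-criterion rule quoted in the following paragraph.
# Illustrative only; not a clinical tool.
def surgery_indicated(calcium_above_uln_mg_dl, creatinine_clearance_ml_min,
                      lowest_t_score, fragility_fracture, age_years):
    """Return True if any of the workshop criteria for surgery is met."""
    return (calcium_above_uln_mg_dl >= 1.0          # >= 1.0 mg/dL above normal
            or creatinine_clearance_ml_min < 60     # reduced clearance
            or lowest_t_score < -2.5                # hip, spine, or distal radius
            or fragility_fracture                   # previous fragility fracture
            or age_years < 50)                      # younger patients

# Example: a 45-year-old with otherwise unremarkable values meets the rule on age alone.
print(surgery_indicated(0.4, 75, -1.8, False, 45))  # True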
The Third International Workshop guidelines concluded that surgery is indicated in asymptomatic patients who meet any one of the following conditions: Serum calcium concentration of 1.0 mg/dL (0.25 mmol/L) or more above the upper limit of normal Creatinine clearance that is reduced to <60 mL/min Bone density at the hip, lumbar spine, or distal radius that is more than 2.5 standard deviations below peak bone mass (T score <-2.5) and/or previous fragility fracture Age less than 50 years. Operative intervention can be delayed in patients over 50 years of age who are asymptomatic or minimally symptomatic and who have serum calcium concentrations <1.0 mg/dL (0.25 mmol/L) above the upper limit of normal, and in patients who are medically unfit for surgery. More recently, three randomized controlled trials have studied the role of surgery in patients with asymptomatic hyperparathyroidism. The largest study reported that surgery resulted in an increase in bone mass, but no improvement in quality of life after one to two years among patients in the following groups: Untreated, asymptomatic primary hyperparathyroidism Serum calcium between 2.60 and 2.85 mmol/liter (10.4–11.4 mg/dL) Age between 50 and 80 yr No medications interfering with Ca metabolism No hyperparathyroid bone disease No previous operation in the neck Creatinine level < 130 μmol/liter (<1.47 mg/dL). Two other trials reported improvements in bone density and some improvement in quality of life with surgery. Medications Medications are used when surgery is not indicated or for poor surgical candidates. Calcimimetics are used to reduce the amount of parathyroid hormone released by the parathyroid glands and the subsequent hypercalcemia. Other medications used for PHPT include treatments for osteoporosis, such as estrogen replacement therapy, bisphosphonates or denosumab, and treatments for hypercalciuria to reduce the risk of kidney stones. Epidemiology Primary hyperparathyroidism affects approximately 1 per 1,000 people (0.1%), while there are 25–30 new cases per 100,000 people per year in the United States. The prevalence of primary hyperparathyroidism has been estimated to be 3 in 1000 in the general population and as high as 21 in 1000 in postmenopausal women. It is almost exactly three times as common in women as men. Primary hyperparathyroidism is associated with increased all-cause mortality. Children In contrast with primary hyperparathyroidism in adults, primary hyperparathyroidism in children is considered a rare endocrinopathy. Pediatric primary hyperparathyroidism can be distinguished by its more severe manifestations, in contrast to the less intense manifestations in adult primary hyperparathyroidism. Multiple endocrine neoplasia is more likely to be associated with childhood and adolescent primary hyperparathyroidism. The fundamental skeletal radiologic manifestations include diffuse osteopenia, pathologic fractures and the coexistence of resorption and sclerosis at numerous sites. Skeletal lesions can be bilateral, symmetric and multifocal, exhibiting different types of bone resorption. Pathologic fractures of the femoral neck and spine can potentially initiate serious complications. Because pediatric primary hyperparathyroidism is frequently associated with pathologic fractures, it can be misdiagnosed as osteogenesis imperfecta. Pediatric patients with primary hyperparathyroidism are best treated by parathyroidectomy. 
Early diagnosis of pediatric primary hyperparathyroidism is essential to minimize disease complications and to begin timely and appropriate treatment. Research directions Future developments such as calcimimetic agents (e.g., cinacalcet), which activate the parathyroid calcium-sensing receptor, may offer a good alternative to surgery. See also Secondary hyperparathyroidism Tertiary hyperparathyroidism References == External links ==
Meningoencephalitis
Meningoencephalitis (from Ancient Greek: μῆνιγξ, romanized: meninx, lit. membrane; Ancient Greek: ἐγκέφαλος, romanized: enképhalos, lit. brain; and the medical suffix -itis, "inflammation"), also known as herpes meningoencephalitis, is a medical condition that simultaneously resembles both meningitis, which is an infection or inflammation of the meninges, and encephalitis, which is an infection or inflammation of the brain. Signs and symptoms Signs of meningoencephalitis include unusual behavior, personality changes, and thinking problems. Symptoms may include headache, fever, pain on neck movement, light sensitivity, and seizures. Causes Causative organisms include protozoan, viral and bacterial pathogens. Specific types include: Bacterial Veterinarians have observed meningoencephalitis in animals infected with listeriosis, caused by the pathogenic bacterium L. monocytogenes. Meningitis and encephalitis already present in the brain or spinal cord of an animal may develop simultaneously into meningoencephalitis. The bacterium commonly targets the sensitive structures of the brain stem. L. monocytogenes meningoencephalitis has been documented to significantly increase the levels of cytokines, such as IL-1β, IL-12 and IL-15, leading to toxic effects on the brain. Meningoencephalitis may be one of the severe complications of diseases originating from several Rickettsia species, such as Rickettsia rickettsii (agent of Rocky Mountain spotted fever (RMSF)), Rickettsia conorii, Rickettsia prowazekii (agent of epidemic louse-borne typhus), and Rickettsia africae. It can cause impairments to the cranial nerves, paralysis of the eyes, and sudden hearing loss. Meningoencephalitis is a rare, late-stage manifestation of tick-borne rickettsial diseases, such as RMSF and human monocytotropic ehrlichiosis (HME), caused by Ehrlichia chaffeensis (a species of rickettsial bacteria). Other bacteria that can cause it are Mycoplasma pneumoniae, Mycobacterium tuberculosis, Borrelia (Lyme disease) and Leptospira (leptospirosis). Viral Tick-borne encephalitis West Nile virus Measles Epstein–Barr virus Varicella-zoster virus Enterovirus Herpes simplex virus type 1 Herpes simplex virus type 2 Rabies virus Adenovirus, although meningoencephalitis is almost solely seen in heavily immunocompromised patients. Mumps, a relatively common cause of meningoencephalitis. However, most cases are mild, and mumps meningoencephalitis generally does not result in death or neurologic sequelae. HIV, a very small number of individuals exhibit meningoencephalitis at the primary stage of infection. Autoimmune Antibodies targeting amyloid beta peptide proteins, which have been used during research on Alzheimer's disease. Anti-N-methyl-D-aspartate (anti-NMDA) receptor antibodies, which are also associated with seizures and a movement disorder, and related to anti-NMDA receptor encephalitis. Nonvasculitic autoimmune inflammatory meningoencephalitis (NAIM), which can be divided into GFAP- and GFAP+ cases; the latter is related to autoimmune GFAP astrocytopathy. Protozoal Naegleria fowleri (percolozoa) Trypanosoma brucei (euglenozoa) Toxoplasma gondii (apicomplexa) Animal Halicephalobus gingivalis. This nematode is an exceptionally rare cause of meningoencephalitis. Other/multiple Other causes include granulomatous meningoencephalitis and vasculitis. 
The fungus Cryptococcus neoformans can manifest within the CNS as meningoencephalitis, with hydrocephalus being a very characteristic finding due to the organism's unique thick polysaccharide capsule. Diagnosis Clinical diagnosis includes evaluation for the presence of recurrent or recent herpes infection, fever, headache, altered mental status, convulsions, disturbance of consciousness, and focal signs. Testing of cerebrospinal fluid is usually performed. Treatment Antiviral drugs such as acyclovir and ganciclovir work best when given as early as possible. Interferon may also be used as an immune therapy. Symptomatic therapy can be applied as needed. High fever can be treated by physical regulation of body temperature. Seizures can be treated with antiepileptic drugs. High intracranial pressure can be treated with drugs such as mannitol. If caused by a bacterial infection, it can be treated with antibiotics. See also Meningitis Meningism Primary amoebic meningoencephalitis Encephalitis Naegleria fowleri References == External links ==
Short stature
Short stature refers to a human height that is below typical. Whether a person is considered short depends on the context. Because the term lacks precision, there is often disagreement about the degree of shortness that should be called short. Dwarfism is the condition of being very short, often caused by a medical condition. In a medical context, short stature is typically defined as an adult height that is more than two standard deviations below a population's mean for age and gender, which corresponds to the shortest 2.3% of individuals in that population. The median or typical adult height in developed countries is about 178 centimetres (5 ft 10 in) for men and 165 centimetres (5 ft 5 in) for women. Causes Shortness in children and young adults nearly always results from below-average growth in childhood, while shortness in older adults usually results from loss of height due to kyphosis of the spine or collapsed vertebrae from osteoporosis. The most common causes of short stature in childhood are constitutional growth delay or familial short stature. From a medical perspective, severe shortness can be a variation of normal, resulting from the interplay of multiple familial genes. It can also be due to one or more of many abnormal conditions, such as chronic (prolonged) growth hormone or thyroid hormone deficiency, malnutrition, disease of a major organ system, mistreatment, treatment with certain drugs, or chromosomal deletions. Human growth hormone (HGH) deficiency may occur at any time during infancy or childhood, with the most obvious sign being a noticeable slowing of growth. The deficiency may be genetic. Among children without growth hormone deficiency, short stature may be caused by Turner syndrome or Noonan syndrome, chronic kidney disease, being small for gestational age at birth, Prader–Willi syndrome, Wiedemann-Steiner syndrome, or other conditions. Genetic skeletal dysplasias, also known as osteochondrodysplasias, usually manifest as short-limbed, disproportionate short stature. When the cause is unknown, it is called idiopathic short stature. Short stature can also be caused by the bone plates fusing at an earlier age than normal, therefore stunting growth. Normally, the bone age is the same as the biological age, but for some people it is older. Many people with an advanced bone age hit a growth spurt early on which propels them to average height, but they stop growing at an earlier age. However, in some cases, people who are naturally shorter and who have an advanced bone age end up even shorter than the height they would otherwise have reached because of their stunted growth. Some of the reasons growth development may slow include: Genetics. When a child's parents and grandparents are short, the child may also be short; this is known as familial short stature. Also, the target height is merely an estimate and some children simply don't grow as tall as expected. Genetic conditions. Several genetic syndromes can lead to short stature, including Prader-Willi syndrome, Turner syndrome and Noonan syndrome. Chronic diseases. Growth hormone is produced by the pituitary gland, located in the middle of the brain. Therefore, chronic medical problems that affect the pituitary gland may also affect growth. For example, radiation to the brain can affect pituitary function, so pediatric cancer or its treatment can lead to short stature. Gastrointestinal diseases that impair nutrition, such as inflammatory bowel disease and celiac disease, can also be a cause. 
Many other conditions can also delay the growth rate, including hypothyroidism, heart disease, kidney disease, immunological disease and several other endocrine disorders. Growth hormone deficiency. Some children simply don't produce enough growth hormone, including those born with a poorly developed pituitary gland. Malnutrition. Whether caused by an inadequate food supply, an eating disorder, or an underlying condition or treatment that affects appetite, lack of nourishment is a common cause of growth delay. Psychosocial stress. Children exposed to violence from war or famine, or living in a home environment that is not nurturing, can experience psychosocial stress that keeps them from growing properly; this can often be reversed by removing them from the stressful environment. Classification Chronic illnesses, malnutrition, endocrine or metabolic disorders, and chromosomal anomalies are characterized by proportionate short stature. On the other hand, most genetic skeletal dysplasias are known for short stature that may be proportionate or disproportionate. Disproportionate short stature can be further subdivided as specified by the body segments affected by shortening, namely limbs versus trunk: Short-limb short stature, in which there is limb shortening, as in achondroplasia, hypochondroplasia, pseudoachondroplasia and multiple epiphyseal dysplasia. Short-trunk short stature, in which there is trunk shortening, as in spondyloepiphyseal dysplasia and mucopolysaccharidosis. Short-limb short stature can be further subcategorised according to the limb segment affected by shortening. These subcategories of limb shortening include rhizomelic (humerus and femur), mesomelic (radius, ulna, tibia and fibula) and acromelic (hands and feet). Anthropometric measurements are very useful tools in the diagnostic process for genetic skeletal dysplasias. The anthropometric measurements include height, sitting height, arm span, upper/lower-body segment ratio, sitting height/height ratio, and arm span/height ratio for age. They also aid in the differential diagnosis of skeletal dysplasia subtypes. Treatment The decision to treat is based on a belief that the child will be disabled by being extremely short as an adult, so that the risks of treatment (including sudden death) are outweighed by the risks of not treating the symptom of short stature. Although short children commonly report being teased about their height, most adults who are very short are not physically or psychologically disabled by their height. However, there is some evidence to suggest that there is an inverse linear relationship between height and risk of suicide. Treatment is expensive and requires many years of injections with human growth hormone. The result depends on the cause, but is typically an increase in final height of about 5 to 10 centimetres (2.0 to 3.9 in) over what was predicted. Thus, treatment takes a child who is expected to be much shorter than a typical adult and produces an adult who is still obviously shorter than average. For example, several years of successful treatment in a girl who is predicted to be 146 centimetres (4 ft 9 in) as an adult may result in her being 151 centimetres (4 ft 11 in) instead. Increasing final height in children with short stature may be beneficial and could enhance health-related quality of life outcomes, barring troublesome side effects and excessive cost of treatments. 
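Returning to the statistical definition given at the start of this article, the link between "more than two standard deviations below the mean" and "the shortest 2.3%" follows from modelling height as approximately normally distributed. The short Python sketch below illustrates the arithmetic; the mean and standard deviation used are assumed round numbers chosen for illustration, not values from any real growth reference.

from statistics import NormalDist

# Assumed, illustrative values for an adult male population (not from a real growth chart).
mean_cm = 178.0
sd_cm = 7.0
heights = NormalDist(mu=mean_cm, sigma=sd_cm)

# Fraction of the population more than two standard deviations below the mean.
fraction_below = heights.cdf(mean_cm - 2 * sd_cm)
print(f"Below -2 SD: {fraction_below:.1%}")     # about 2.3%

# The corresponding height cutoff under this definition of short stature.
print(f"Cutoff: {mean_cm - 2 * sd_cm:.0f} cm")  # 164 cm with these assumed numbers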
Cost The cost of treatment depends on the amount of growth hormone given, which in turn depends on the child's weight and age. One year's worth of drugs normally costs about US$20,000 for a small child and over $50,000 for a teenager. These drugs are normally taken for five or more years. Cultural issues From a social perspective, shortness can be a problem independently of the cause. In many societies there are advantages associated with taller stature and disadvantages associated with shorter stature, and vice versa. Pharmaceutical companies Genentech and Eli Lilly, makers of human growth hormone, have worked to medicalize short stature by convincing the public that short stature is a disease rather than a natural variation in human height. Limiting sales of the hormone to children diagnosed with growth hormone deficiency, rather than to any child who was short, limited their sales market. Expanding it to all children whose height was below the third percentile would create 90,000 new customers and US$10 billion in revenue. In the early 1990s, they paid two US charities, the Human Growth Foundation and the MAGIC Foundation, to measure the height of thousands of American children in schools and public places, and to send letters urging medical consultations for children whose height was deemed low. Parents and schools were not told that the charities were being paid by the drug companies to do this. Paired with a campaign to advertise the hormone to physicians, the campaign was successful, and tens of thousands of children began receiving HGH. About half of them did not have growth hormone deficiency, and consequently benefited very little, if at all, from the hormone injections. Criticism of the universal screening program eventually resulted in its end. Advantage Short stature decreases the risk of venous insufficiency. History During World War I in Britain, the minimum height for soldiers was 5 feet 3 inches (160 cm). Thus thousands of men under this height were denied the opportunity to fight in the war. As a result of pressure to allow them entry, special "Bantam Battalions" were created, composed of men who were 4 feet 10 inches (147 cm) to 5 feet 3 inches (160 cm). By the end of the war there were 29 Bantam Battalions of about 1,000 men each. Officers were of normal size. See also Dwarfism List of shortest people National Organization of Short Statured Adults Primordial dwarfism Psychosocial short stature—growth inhibition caused by extreme stress Pygmy References == External links ==
Foot
The foot (PL: feet) is an anatomical structure found in many vertebrates. It is the terminal portion of a limb which bears weight and allows locomotion. In many animals with feet, the foot is a separate organ at the terminal part of the leg made up of one or more segments or bones, generally including claws or nails. Etymology The word "foot", in the sense of meaning the "terminal part of the leg of a vertebrate animal" comes from "Old English fot "foot," from Proto-Germanic *fot (source also of Old Frisian fot, Old Saxon fot, Old Norse fotr, Danish fod, Swedish fot, Dutch voet, Old High German fuoz, German Fuß, Gothic fotus "foot"), from PIE root *ped- "foot". The "plural form feet is an instance of i-mutation." Structure The human foot is a strong and complex mechanical structure containing 26 bones, 33 joints (20 of which are actively articulated), and more than a hundred muscles, tendons, and ligaments. The joints of the foot are the ankle and subtalar joint and the interphalangeal joints of the foot. An anthropometric study of 1197 North American adult Caucasian males (mean age 35.5 years) found that a mans foot length was 26.3 cm with a standard deviation of 1.2 cm.The foot can be subdivided into the hindfoot, the midfoot, and the forefoot: The hindfoot is composed of the talus (or ankle bone) and the calcaneus (or heel bone). The two long bones of the lower leg, the tibia and fibula, are connected to the top of the talus to form the ankle. Connected to the talus at the subtalar joint, the calcaneus, the largest bone of the foot, is cushioned underneath by a layer of fat.The five irregular bones of the midfoot, the cuboid, navicular, and three cuneiform bones, form the arches of the foot which serves as a shock absorber. The midfoot is connected to the hind- and fore-foot by muscles and the plantar fascia.The forefoot is composed of five toes and the corresponding five proximal long bones forming the metatarsus. Similar to the fingers of the hand, the bones of the toes are called phalanges and the big toe has two phalanges while the other four toes have three phalanges each. The joints between the phalanges are called interphalangeal and those between the metatarsus and phalanges are called metatarsophalangeal (MTP). Both the midfoot and forefoot constitute the dorsum (the area facing upward while standing) and the planum (the area facing downward while standing). The instep is the arched part of the top of the foot between the toes and the ankle. Bones tibia, fibula tarsus (7): talus, calcaneus, cuneiformes (3), cuboid, and navicular metatarsus (5): first, second, third, fourth, and fifth metatarsal bone phalanges (14)There can be many sesamoid bones near the metatarsophalangeal joints, although they are only regularly present in the distal portion of the first metatarsal bone. Arches The human foot has two longitudinal arches and a transverse arch maintained by the interlocking shapes of the foot bones, strong ligaments, and pulling muscles during activity. The slight mobility of these arches when weight is applied to and removed from the foot makes walking and running more economical in terms of energy. As can be examined in a footprint, the medial longitudinal arch curves above the ground. This arch stretches from the heel bone over the "keystone" ankle bone to the three medial metatarsals. In contrast, the lateral longitudinal arch is very low. 
With the cuboid serving as its keystone, it redistributes part of the weight to the calcaneus and the distal end of the fifth metatarsal. The two longitudinal arches serve as pillars for the transverse arch which run obliquely across the tarsometatarsal joints. Excessive strain on the tendons and ligaments of the feet can result in fallen arches or flat feet. Muscles The muscles acting on the foot can be classified into extrinsic muscles, those originating on the anterior or posterior aspect of the lower leg, and intrinsic muscles, originating on the dorsal (top) or plantar (base) aspects of the foot. Extrinsic All muscles originating on the lower leg except the popliteus muscle are attached to the bones of the foot. The tibia and fibula and the interosseous membrane separate these muscles into anterior and posterior groups, in their turn subdivided into subgroups and layers. Anterior group Extensor group: the tibialis anterior originates on the proximal half of the tibia and the interosseous membrane and is inserted near the tarsometatarsal joint of the first digit. In the non-weight-bearing leg, the tibialis anterior dorsiflexes the foot and lift its medial edge (supination). In the weight-bearing leg, it brings the leg toward the back of the foot, like in rapid walking. The extensor digitorum longus arises on the lateral tibial condyle and along the fibula, and is inserted on the second to fifth digits and proximally on the fifth metatarsal. The extensor digitorum longus acts similar to the tibialis anterior except that it also dorsiflexes the digits. The extensor hallucis longus originates medially on the fibula and is inserted on the first digit. It dorsiflexes the big toe and also acts on the ankle in the unstressed leg. In the weight-bearing leg, it acts similarly to the tibialis anterior.Peroneal group: the peroneus longus arises on the proximal aspect of the fibula and peroneus brevis below it. Together, their tendons pass behind the lateral malleolus. Distally, the peroneus longus crosses the plantar side of the foot to reach its insertion on the first tarsometatarsal joint, while the peroneus brevis reaches the proximal part of the fifth metatarsal. These two muscles are the strongest pronators and aid in plantar flexion. The peroneus longus also acts like a bowstring that braces the transverse arch of the foot. Posterior group The superficial layer of posterior leg muscles is formed by the triceps surae and the plantaris. The triceps surae consists of the soleus and the two heads of the gastrocnemius. The heads of gastrocnemius arise on the femur, proximal to the condyles, and the soleus arises on the proximal dorsal parts of the tibia and fibula. The tendons of these muscles merge to be inserted onto the calcaneus as the Achilles tendon. The plantaris originates on the femur proximal to the lateral head of the gastrocnemius and its long tendon is embedded medially into the Achilles tendon. The triceps surae is the primary plantar flexor. Its strength becomes most obvious during ballet dancing. It is fully activated only with the knee extended, because the gastrocnemius is shortened during flexion of the knee. During walking it not only lifts the heel, but also flexes the knee, assisted by the plantaris.In the deep layer of posterior muscles, the tibialis posterior arises proximally on the back of the interosseous membrane and adjoining bones, and divides into two parts in the sole of the foot to attach to the tarsus. 
In the non-weight-bearing leg, it produces plantar flexion and supination, and, in the weight-bearing leg, it proximates the heel to the calf. The flexor hallucis longus arises on the back of the fibula on the lateral side, and its relatively thick muscle belly extends distally down to the flexor retinaculum where it passes over to the medial side to stretch across the sole to the distal phalanx of the first digit. The popliteus is also part of this group, but, with its oblique course across the back of the knee, does not act on the foot. Intrinsic On the top of the foot, the tendons of extensor digitorum brevis and extensor hallucis brevis lie deep in the system of long extrinsic extensor tendons. They both arise on the calcaneus and extend into the dorsal aponeurosis of digits one to four, just beyond the penultimate joints. They act to dorsiflex the digits. Similar to the intrinsic muscles of the hand, there are three groups of muscles in the sole of foot, those of the first and last digits, and a central group: Muscles of the big toe: the abductor hallucis stretches medially along the border of the sole, from the calcaneus to the first digit. Below its tendon, the tendons of the long flexors pass through the tarsal canal. The abductor hallucis is an abductor and a weak flexor, and also helps maintain the arch of the foot. The flexor hallucis brevis arises on the medial cuneiform bone and related ligaments and tendons. An important plantar flexor, it is crucial to ballet dancing. Both these muscles are inserted with two heads proximally and distally to the first metatarsophalangeal joint. The adductor hallucis is part of this group, though it originally formed a separate system (see Contrahens). It has two heads, the oblique head originating obliquely across the central part of the midfoot, and the transverse head originating near the metatarsophalangeal joints of digits five to three. Both heads are inserted into the lateral sesamoid bone of the first digit. The adductor hallucis acts as a tensor of the plantar arches and also adducts the big toe and might plantar flex the proximal phalanx.Muscles of the little toe: Stretching laterally from the calcaneus to the proximal phalanx of the fifth digit, the abductor digiti minimi form the lateral margin of the foot and are the largest of the muscles of the fifth digit. Arising from the base of the fifth metatarsal, the flexor digiti minimi is inserted together with abductor on the first phalanx. Often absent, the opponens digiti minimi originates near the cuboid bone and is inserted on the fifth metatarsal bone. These three muscles act to support the arch of the foot and to plantar flex the fifth digit. Central muscle group: The four lumbricals arise on the medial side of the tendons of flexor digitorum longus and are inserted on the medial margins of the proximal phalanges. The quadratus plantae originates with two slips from the lateral and medial margins of the calcaneus and inserts into the lateral margin of the flexor digitorum tendon. It is also known as the flexor accessorius. The flexor digitorum brevis arises inferiorly on the calcaneus and its three tendons are inserted into the middle phalanges of digits two to four (sometimes also the fifth digit). These tendons divide before their insertions and the tendons of flexor digitorum longus pass through these divisions. Flexor digitorum brevis flexes the middle phalanges. It is occasionally absent. 
Between the toes, the dorsal and plantar interossei stretch from the metatarsals to the proximal phalanges of digits two to five. The plantar interossei adduct and the dorsal interossei abduct these digits, and are also plantar flexors at the metatarsophalangeal joints. Clinical significance Due to their position and function, feet are exposed to a variety of potential infections and injuries, including athletes foot, bunions, ingrown toenails, Mortons neuroma, plantar fasciitis, plantar warts, and stress fractures. In addition, there are several genetic disorders that can affect the shape and function of the feet, including clubfoot or flat feet. This leaves humans more vulnerable to medical problems that are caused by poor leg and foot alignments. Also, the wearing of shoes, sneakers and boots can impede proper alignment and movement within the ankle and foot. For example, high-heeled shoes are known to throw off the natural weight balance (this can also affect the lower back). For the sake of posture, flat soles with no heels are advised. A doctor who specializes in the treatment of the feet practices podiatry and is called a podiatrist. A pedorthist specializes in the use and modification of footwear to treat problems related to the lower limbs. Fractures of the foot include: Lisfranc fracture – in which one or all of the metatarsals are displaced from the tarsus Jones fracture – a fracture of the fifth metatarsal March fracture – a fracture of the distal third of one of the metatarsals occurring because of recurrent stress Calcaneal fracture Broken toe – a fracture of a phalanx Cuneiform fracture – Due to the ligamentous support of the midfoot, isolated cuneiform fractures are rare. Pronation In anatomy, pronation is a rotational movement of the forearm (at the radioulnar joint) or foot (at the subtalar and talocalcaneonavicular joints). Pronation of the foot refers to how the body distributes weight as it cycles through the gait. During the gait cycle the foot can pronate in many different ways based on rearfoot and forefoot function. Types of pronation include neutral pronation, underpronation (supination), and overpronation. Neutral pronationAn individual who neutrally pronates initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, the foot will roll in a medial direction, such that the weight is distributed evenly across the metatarsus. In this stage of the gait, the knee will generally, but not always, track directly over the hallux. This rolling inward motion as the foot progresses from heel to toe is the way that the body naturally absorbs shock. Neutral pronation is the most ideal, efficient type of gait when using a heel strike gait; in a forefoot strike, the body absorbs shock instead via flexion of the foot. OverpronationAs with a neutral pronator, an individual who overpronates initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, however, the foot will roll too far in a medial direction, such that the weight is distributed unevenly across the metatarsus, with excessive weight borne on the hallux. In this stage of the gait, the knee will generally, but not always, track inward. An overpronator does not absorb shock efficiently. Imagine someone jumping onto a diving board, but the board is so flimsy that when it is struck, it bends and allows the person to plunge straight down into the water instead of back into the air. 
Similarly, an overpronators arches will collapse, or the ankles will roll inward (or a combination of the two) as they cycle through the gait. An individual whose bone structure involves external rotation at the hip, knee, or ankle will be more likely to overpronate than one whose bone structure has internal rotation or central alignment. An individual who overpronates tends to wear down their running shoes on the medial (inside) side of the shoe toward the toe area.When choosing a running or walking shoe, a person with overpronation can choose shoes that have good inside support—usually by strong material at the inside sole and arch of the shoe. It is usually visible. The inside support area is marked by strong greyish material to support the weight when a person lands on the outside foot and then roll onto the inside foot. Underpronation (supination) An individual who underpronates also initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, the foot will not roll far enough in a medial direction. The weight is distributed unevenly across the metatarsus, with excessive weight borne on the fifth metatarsal, toward the lateral side of the foot. In this stage of the gait, the knee will generally, but not always, track laterally of the hallux. Like an overpronator, an underpronator does not absorb shock efficiently – but for the opposite reason. The underpronated foot is like a diving board that, instead of failing to spring someone in the air because it is too flimsy, fails to do so because it is too rigid. There is virtually no give. An underpronators arches or ankles dont experience much motion as they cycle through the gait. An individual whose bone structure involves internal rotation at the hip, knee, or ankle will be more likely to underpronate than one whose bone structure has external rotation or central alignment. Usually – but not always – those who are bow-legged tend to underpronate. An individual who underpronates tends to wear down their running shoes on the lateral (outside) side of the shoe toward the rear of the shoe in the heel area. Society and culture Humans usually wear shoes or similar footwear for protection from hazards when walking outside. There are a number of contexts where it is considered inappropriate to wear shoes. Some people consider it rude to wear shoes into a house and a Māori Marae should only be entered with bare feet. Foot fetishism is the most common sexual fetish. Other animals A paw is the soft foot of a mammal, generally a quadruped, that has claws or nails (e.g., a cat or dogs paw). A hard foot is called a hoof. Depending on style of locomotion, animals can be classified as plantigrade (sole walking), digitigrade (toe walking), or unguligrade (nail walking). The metatarsals are the bones that make up the main part of the foot in humans, and part of the leg in large animals or paw in smaller animals. The number of metatarsals are directly related to the mode of locomotion with many larger animals having their digits reduced to two (elk, cow, sheep) or one (horse). The metatarsal bones of feet and paws are tightly grouped compared to, most notably, the human hand where the thumb metacarpal diverges from the rest of the metacarpus. Metaphorical and cultural usage The word "foot" is used to refer to a "...linear measure was in Old English (the exact length has varied over time), this being considered the length of a mans foot; a unit of measure used widely and anciently. 
In this sense the plural is often foot. The current inch and foot are implied from measurements in 12c." The word "foot" also has a musical meaning; a "...metrical foot (late Old English, translating Latin pes, Greek pous in the same sense) is commonly taken to represent one rise and one fall of a foot: keeping time according to some, dancing according to others."The word "foot" was used in Middle English to mean "a person" (c. 1200). The expression "...to put ones best foot foremost first recorded 1849 (Shakespeare has the better foot before, 1596)". The expression to "...put ones foot in (ones) mouth "say something stupid" was first used in 1942. The expression "put (ones) foot in something" meaning to "make a mess of it" was used in 1823.The word "footloose" was first used in the 1690s, meaning "free to move the feet, unshackled"; the more "figurative sense of "free to act as one pleases" was first used in 1873. Like "footloose", "flat-footed" at first had its obvious literal meaning (in 1600, it meant "with flat feet") but by 1912 it meant "unprepared" (U.S. baseball slang). See also References Bibliography France, Diane L. (2008). Human and Nonhuman Bone Identification: A Color Atlas. CRC Press. ISBN 978-1-4200-6286-1. Marieb, Elaine Nicpon; Hoehn, Katja (2007). Human anatomy & physiology. Pearson Education. ISBN 978-0-321-37294-9. Platzer, Werner (2004). Color Atlas of Human Anatomy, Vol. 1: Locomotor System (5th ed.). Thieme. ISBN 3-13-533305-1. "Anatomy of the foot and ankle". Podiatry Channel. Archived from the original on 31 August 2009. Retrieved 21 August 2009. External links Foot at Curlie
Stridor
Stridor (Latin for "creaking or grating noise") is a high-pitched extra-thoracic breath sound resulting from turbulent air flow in the larynx or lower in the bronchial tree. It is different from a stertor which is a noise originating in the pharynx. Stridor is a physical sign which is caused by a narrowed or obstructed airway. It can be inspiratory, expiratory or biphasic, although it is usually heard during inspiration. Inspiratory stridor often occurs in children with croup. It may be indicative of serious airway obstruction from severe conditions such as epiglottitis, a foreign body lodged in the airway, or a laryngeal tumor. Stridor should always command attention to establish its cause. Visualization of the airway by medical experts equipped to control the airway may be needed. Causes Stridor may occur as a result of: foreign bodies (e.g., aspirated foreign body, aspirated food bolus); infections (e.g., epiglottitis, retropharyngeal abscess, croup); subglottic stenosis (e.g., following prolonged intubation or congenital); airway edema (e.g., following instrumentation of the airway, tracheal intubation, drug side effect, allergic reaction); laryngospasm (from aspiration, GERD, or complication of anesthesia) subglottic hemangioma (rare); vascular rings compressing the trachea; thyroiditis such as Riedels thyroiditis; vocal cord palsy; tracheomalacia or tracheobronchomalacia (e.g., collapsed trachea). congenital anomalies of the airway are present in 87% of all cases of stridor in infants and children. vasculitis. infectious mononucleosis peritonsillar abscess Laryngeal edema is a common cause of stridor post extubation (occurring from pressure of the endotracheal tube on the mucosa as a result of endotracheal tube that is too large (e.g. pediatrics), cuff over inflation, and prolonged intubation times.); tumor (e.g., laryngeal papillomatosis, squamous cell carcinoma of larynx, trachea or esophagus); ALL (T-cell ALL can present with mediastinal mass that compresses the trachea and causes inspiratory stridor) Diagnosis Stridor is mainly diagnosed on the basis of history and physical examination, with a view to revealing the underlying problem or condition. Chest and neck x-rays, bronchoscopy, CT-scans, and/or MRIs may reveal structural pathology. Flexible fiberoptic bronchoscopy can also be very helpful, especially in assessing vocal cord function or in looking for signs of compression or infection. Treatments The first issue of clinical concern in the setting of stridor is whether or not tracheal intubation or tracheostomy is immediately necessary. A reduction in oxygen saturation is considered a late sign of airway obstruction, particularly in a child with healthy lungs and normal gas exchange. Some patients will need immediate tracheal intubation. If intubation can be delayed for a period, a number of other potential options can be considered, depending on the severity of the situation and other clinical details. These include: Expectant management with full monitoring, oxygen by face mask, and positioning the head on the bed for optimum conditions (e.g., 45 - 90 degrees). Use of nebulized racemic adrenaline epinephrine (0.5 to 0.75 ml of 2.25% racemic epinephrine added to 2.5 to 3 ml of normal saline) in cases where airway edema may be the cause of the stridor. (Nebulized Codeine in a dose not exceeding 3 mg/kg may also be used, but not together with racemic adrenaline [because of the risk of ventricular arrhythmias].) 
Use of dexamethasone (Decadron) 4–8 mg IV q 8–12 h in cases where airway edema may be the cause of the stridor; note that some time (in the range of hours) may be needed for dexamethasone to work fully. Use of inhaled Heliox (70% helium, 30% oxygen); the effect is almost instantaneous. Helium, being a less dense gas than nitrogen, reduces turbulent flow through the airways. Always ensure an open airway. In obese patients, elevation of the panniculus has been shown to relieve symptoms by 80%. References External links Audio Breath Sounds Archived 2020-12-15 at the Wayback Machine—Multiple case studies with audio files of lung sounds. Stridor at eMedicine Congenital stridor at eMedicine MedlinePlus Encyclopedia: Breathing sounds—abnormal (stridor) Diseases Database (DDB): 27190 Stridor sounds at R.A.L.E. Lung Sounds
Perforation
A perforation is a small hole in a thin material or web. There is usually more than one hole, arranged in an organized fashion, and all of the holes collectively are also called a perforation. The process of creating perforations is called perforating, which involves puncturing the workpiece with a tool. Perforations are usually used to allow easy separation of two sections of the material, such as allowing paper to be torn easily along the line. Packaging with perforations in paperboard or plastic film is easy for consumers to open. Other purposes include filtering fluids, sound deadening, allowing light or fluids to pass through, and creating an aesthetic design. Various applications include plastic films to allow the packages to breathe, medical films, micro perforated plate and sound and vapor barriers. Processes Pins and needles Rotary pinned perforation rollers are precision tools that can be used to perforate a wide variety of materials. The pins or needles can be used cold or heated. Cold perforation tools include needle punches. There are a handful of manufacturers that specialize in hot and cold needle perforation tooling and equipment. In materials that have elasticity, this can result in a "volcano" hole that is preferred in many applications. Pinned rollers can be made from a variety of materials, including plastic, steel, and aluminum. In more brittle films, cold perforation can cause slitting rather than creating a round hole, which can jeopardize the material's integrity under pressure. The solution to this is often heating the pin; i.e. hot pin perforation. Hot perforation melts a hole in the material, causing a reinforced ring around the hole. Hot needle perforation also assists when high density pin patterns are utilized, as the heat aids the perforation of the material. Die and punch Die and punch sets can be used for thicker materials or materials that require large holes; this process is the most common for metalworking. The workpiece is sheared by pressing (either by machine or hand tool) the punch through the workpiece and into the die. The middle section of the workpiece is scrap, commonly known as the chad in paper and similar materials. The punch and die are shaped to produce the desired shaped hole. The clearance (the distance between the outside circumference of the punch and the inner circumference of the die) must be properly maintained to ensure a clean cut. Burrs are produced on the side of the workpiece that is against the die. Common applications are fruit and vegetable bags, hole punching and ticket punching. Laser perforation Laser cutting can place many precise holes in a web. Laser perforations look similar in many respects to hot needle perforations. However, laser systems are expensive. The big advantage of laser perforation is the consistency of the hole size, compared to mechanical perforation. This is very important in modified atmosphere packaging for fresh produce. The laser perforation is often carried out on roll slitting machines (slitter rewinder) as the printed material is slit down to the finished roll size. Applications Perforation frequently refers to the practice of creating a long series of holes or slits so that paper or plastics can be torn more easily along a given line: this is used in easy-open packaging. Since the creation of perforation devices in the 1840s and 1850s, it has seen use in several areas. Postage stamps are one common application of this, where small round holes are cut in lines to create individual pieces. 
Perforations on stamps are rather large, on the order of a millimeter; in comparison, other perforated materials often have smaller holes. It is common for cheque-books, notebooks and legal pads to have perforations, making it easier to tear out individual pages or leaves. Perforation is also used to separate loose-leaf pages (or even a form of graph paper) from a ringed binder. A fine perforation next to the rings allows the page to be separated from the book with no confetti. Screwcaps on glass or plastic bottles are sealed with a ring at the bottom of the cap attached by perforation. Twisting the cap ruptures the material between the perforations and indicates that the original seal has been broken. The edges of film stock are perforated to allow it to be moved precise distances at a time continuously. Similarly, punched cards for use in looms and later in computer input and output devices were in some cases perforated to ensure correct positioning of the card in the device, and to encode information. Perforation of steel strips is used in the manufacture of some zesters and rasps. Historically, perforation patterns other than linear were used to mark stamps. A series of patents had been issued in the late 19th century for perforation machines to be used on rail lines for ticketing. Libraries and private collections used similar perforating stamps to mark ownership of books. End sheets, title pages, and image plates were punched with the namesake of the collection. Today, similarly elaborate perforation patterns continue to be used in orienteering. Bags for some breads have micro-perforations in the plastic, which are supposed to keep the bread fresh by releasing excess moisture. Similarly, bags of concrete use small perforations to allow air to escape while they are being filled. See also Film perforations Punched tape References
Eye
Eyes are organs of the visual system. They provide living organisms with vision, the ability to receive and process visual detail, as well as enabling several photo response functions that are independent of vision. Eyes detect light and convert it into electro-chemical impulses in neurons (neurones). In higher organisms, the eye is a complex optical system which collects light from the surrounding environment, regulates its intensity through a diaphragm, focuses it through an adjustable assembly of lenses to form an image, converts this image into a set of electrical signals, and transmits these signals to the brain through complex neural pathways that connect the eye via the optic nerve to the visual cortex and other areas of the brain. Eyes with resolving power have come in ten fundamentally different forms, and 96% of animal species possess a complex optical system. Image-resolving eyes are present in molluscs, chordates and arthropods. The simplest eyes, pit eyes, are eye-spots which may be set into a pit to reduce the angle of light that enters and affects the eye-spot, to allow the organism to deduce the angle of incoming light. From more complex eyes, retinal photosensitive ganglion cells send signals along the retinohypothalamic tract to the suprachiasmatic nuclei to effect circadian adjustment and to the pretectal area to control the pupillary light reflex. Overview Complex eyes distinguish shapes and colours. The visual fields of many organisms, especially predators, involve large areas of binocular vision for depth perception. In other organisms, particularly prey animals, eyes are located to maximise the field of view, such as in rabbits and horses, which have monocular vision. The first proto-eyes evolved among animals 600 million years ago, about the time of the Cambrian explosion. The last common ancestor of animals possessed the biochemical toolkit necessary for vision, and more advanced eyes have evolved in 96% of animal species in six of the ~35 main phyla. In most vertebrates and some molluscs, the eye allows light to enter and project onto a light-sensitive layer of cells known as the retina. The cone cells (for colour) and the rod cells (for low-light contrasts) in the retina detect and convert light into neural signals which are transmitted to the brain via the optic nerve to produce vision. Such eyes are typically spheroid, filled with the transparent gel-like vitreous humour, possess a focusing lens, and often an iris. Muscles around the iris change the size of the pupil, regulating the amount of light that enters the eye and reducing aberrations when there is enough light. The eyes of most cephalopods, fish, amphibians and snakes have fixed lens shapes, and focusing is achieved by telescoping the lens in a similar manner to that of a camera. The compound eyes of the arthropods are composed of many simple facets which, depending on anatomical detail, may give either a single pixelated image or multiple images per eye. Each sensor has its own lens and photosensitive cell(s). Some eyes have up to 28,000 such sensors arranged hexagonally, which can give a full 360° field of vision. Compound eyes are very sensitive to motion. Some arthropods, including many Strepsiptera, have compound eyes of only a few facets, each with a retina capable of creating an image. With each eye producing a different image, a fused, high-resolution image is produced in the brain.
Possessing detailed hyperspectral colour vision, the mantis shrimp has the world's most complex colour vision system. Trilobites, now extinct, had unique compound eyes. Clear calcite crystals formed the lenses of their eyes. They differ in this from most other arthropods, which have soft eyes. The number of lenses in such an eye varied widely; some trilobites had only one while others had thousands of lenses per eye. In contrast to compound eyes, simple eyes have a single lens. Jumping spiders have one pair of large simple eyes with a narrow field of view, augmented by an array of smaller eyes for peripheral vision. Some insect larvae, like caterpillars, have a type of simple eye (stemmata) which usually provides only a rough image, but (as in sawfly larvae) can possess resolving powers of 4 degrees of arc, be polarization-sensitive, and be capable of increasing its absolute sensitivity at night by a factor of 1,000 or more. Ocelli, some of the simplest eyes, are found in animals such as some of the snails. They have photosensitive cells but no lens or other means of projecting an image onto those cells. They can distinguish between light and dark but no more, enabling them to avoid direct sunlight. In organisms dwelling near deep-sea vents, compound eyes are adapted to see the infra-red light produced by the hot vents, enabling the creatures to avoid being boiled alive. Types There are ten different eye layouts—indeed every technological method of capturing an optical image commonly used by human beings, with the exceptions of zoom and Fresnel lenses, occurs in nature. Eye types can be categorised into "simple eyes", with one concave photoreceptive surface, and "compound eyes", which comprise a number of individual lenses laid out on a convex surface. Note that "simple" does not imply a reduced level of complexity or acuity. Indeed, any eye type can be adapted for almost any behaviour or environment. The only limitation specific to eye types is that of resolution—the physics of compound eyes prevents them from achieving a resolution better than 1°. Also, superposition eyes can achieve greater sensitivity than apposition eyes, so are better suited to dark-dwelling creatures. Eyes also fall into two groups on the basis of their photoreceptors' cellular construction, with the photoreceptor cells either being ciliated (as in the vertebrates) or rhabdomeric. These two groups are not monophyletic; the cnidaria also possess ciliated cells, and some gastropods, as well as some annelids, possess both. Some organisms have photosensitive cells that do nothing but detect whether the surroundings are light or dark, which is sufficient for the entrainment of circadian rhythms. These are not considered eyes because they lack enough structure to be considered an organ, and do not produce an image. Non-compound eyes Simple eyes are rather ubiquitous, and lens-bearing eyes have evolved at least seven times in vertebrates, cephalopods, annelids, crustaceans and cubozoa. Pit eyes Pit eyes, also known as stemma, are eye-spots which may be set into a pit to reduce the angles of light that enter and affect the eye-spot, to allow the organism to deduce the angle of incoming light. Found in about 85% of phyla, these basic forms were probably the precursors to more advanced types of "simple eyes". They are small, comprising up to about 100 cells covering about 100 µm.
The directionality can be improved by reducing the size of the aperture, by incorporating a reflective layer behind the receptor cells, or by filling the pit with a refractile material. Pit vipers have developed pits that function as eyes by sensing thermal infra-red radiation, in addition to their optical wavelength eyes like those of other vertebrates (see infrared sensing in snakes). However, pit organs are fitted with receptors rather different from photoreceptors, namely a specific transient receptor potential channel (TRP channel) called TRPV1. The main difference is that photoreceptors are G-protein coupled receptors but TRPs are ion channels. Spherical lens eye The resolution of pit eyes can be greatly improved by incorporating a material with a higher refractive index to form a lens, which may greatly reduce the blur radius encountered—hence increasing the resolution obtainable. The most basic form, seen in some gastropods and annelids, consists of a lens of one refractive index. A far sharper image can be obtained using materials with a high refractive index, decreasing to the edges; this decreases the focal length and thus allows a sharp image to form on the retina. This also allows a larger aperture for a given sharpness of image, allowing more light to enter the lens; and a flatter lens, reducing spherical aberration. Such a non-homogeneous lens is necessary for the focal length to drop from about 4 times the lens radius to 2.5 radii. Heterogeneous eyes have evolved at least nine times: four or more times in gastropods, once in the copepods, once in the annelids, once in the cephalopods, and once in the chitons, which have aragonite lenses. No extant aquatic organisms possess homogeneous lenses; presumably the evolutionary pressure for a heterogeneous lens is great enough for this stage to be quickly "outgrown". This eye creates an image that is sharp enough that motion of the eye can cause significant blurring. To minimise the effect of eye motion while the animal moves, most such eyes have stabilising eye muscles. The ocelli of insects bear a simple lens, but their focal point usually lies behind the retina; consequently, they cannot form a sharp image. Ocelli (pit-type eyes of arthropods) blur the image across the whole retina, and are consequently excellent at responding to rapid changes in light intensity across the whole visual field; this fast response is further accelerated by the large nerve bundles which rush the information to the brain. Focusing the image would also cause the sun's image to be focused on a few receptors, with the possibility of damage under the intense light; shielding the receptors would block out some light and thus reduce their sensitivity. This fast response has led to suggestions that the ocelli of insects are used mainly in flight, because they can be used to detect sudden changes in which way is up (because light, especially UV light which is absorbed by vegetation, usually comes from above). Multiple lenses Some marine organisms bear more than one lens; for instance the copepod Pontella has three. The outer has a parabolic surface, countering the effects of spherical aberration while allowing a sharp image to be formed. Another copepod, Copilia, has two lenses in each eye, arranged like those in a telescope. Such arrangements are rare and poorly understood, but represent an alternative construction.
Multiple lenses are seen in some hunters such as eagles and jumping spiders, which have a refractive cornea: these have a negative lens, enlarging the observed image by up to 50% over the receptor cells, thus increasing their optical resolution. Refractive cornea In the eyes of most mammals, birds, reptiles, and most other terrestrial vertebrates (along with spiders and some insect larvae), the vitreous fluid has a higher refractive index than the air. In general, the lens is not spherical. Spherical lenses produce spherical aberration. In refractive corneas, the lens tissue is corrected with inhomogeneous lens material (see Luneburg lens), or with an aspheric shape. Flattening the lens has a disadvantage; the quality of vision is diminished away from the main line of focus. Thus, animals that have evolved with a wide field-of-view often have eyes that make use of an inhomogeneous lens. As mentioned above, a refractive cornea is only useful out of water. In water, there is little difference in refractive index between the vitreous fluid and the surrounding water. Hence creatures that have returned to the water—penguins and seals, for example—lose their highly curved cornea and return to lens-based vision. An alternative solution, borne by some divers, is to have a very strongly focusing cornea. Reflector eyes An alternative to a lens is to line the inside of the eye with "mirrors", and reflect the image to focus at a central point. The nature of these eyes means that if one were to peer into the pupil of an eye, one would see the same image that the organism would see, reflected back out. Many small organisms such as rotifers, copepods and flatworms use such organs, but these are too small to produce usable images. Some larger organisms, such as scallops, also use reflector eyes. The scallop Pecten has up to 100 millimetre-scale reflector eyes fringing the edge of its shell. It detects moving objects as they pass successive lenses. There is at least one vertebrate, the spookfish, whose eyes include reflective optics for focusing of light. Each of the two eyes of a spookfish collects light from both above and below; the light coming from above is focused by a lens, while that coming from below, by a curved mirror composed of many layers of small reflective plates made of guanine crystals. Compound eyes A compound eye may consist of thousands of individual photoreceptor units or ommatidia (ommatidium, singular). The image perceived is a combination of inputs from the numerous ommatidia (individual "eye units"), which are located on a convex surface, thus pointing in slightly different directions. Compared with simple eyes, compound eyes possess a very large view angle, and can detect fast movement and, in some cases, the polarisation of light. Because the individual lenses are so small, the effects of diffraction impose a limit on the possible resolution that can be obtained (assuming that they do not function as phased arrays). This can only be countered by increasing lens size and number. To see with a resolution comparable to our simple eyes, humans would require very large compound eyes, around 11 metres (36 ft) in radius. Compound eyes fall into two groups: apposition eyes, which form multiple inverted images, and superposition eyes, which form a single erect image. Compound eyes are common in arthropods, annelids and some bivalved molluscs. Compound eyes in arthropods grow at their margins by the addition of new ommatidia.
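The diffraction limit mentioned above can be made concrete with a small calculation. A minimal sketch, assuming green light of roughly 500 nm and an ommatidial lens about 30 µm across (illustrative values, not taken from the article); the Rayleigh criterion theta = 1.22 * lambda / D gives the smallest resolvable angle for a circular aperture:

import math

def rayleigh_limit_deg(wavelength_m: float, aperture_m: float) -> float:
    """Smallest resolvable angle (degrees) for a circular aperture,
    using the Rayleigh criterion theta = 1.22 * lambda / D."""
    return math.degrees(1.22 * wavelength_m / aperture_m)

# Illustrative values (assumptions, not from the article):
wavelength = 500e-9   # green light, about 500 nm
ommatidium = 30e-6    # a single ommatidial lens, about 30 micrometres

print(f"Ommatidium limit: {rayleigh_limit_deg(wavelength, ommatidium):.1f} degrees")
# about 1.2 degrees, close to the roughly 1 degree ceiling quoted for compound eyes

# For comparison, a simple eye with a pupil a few millimetres wide:
print(f"3 mm pupil limit: {rayleigh_limit_deg(wavelength, 3e-3) * 3600:.0f} arcseconds")

This is only an order-of-magnitude illustration, but it shows why a lens a few tens of micrometres wide cannot do much better than about one degree at visible wavelengths, whatever the rest of the eye does.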
Apposition eyes Apposition eyes are the most common form of eyes and are presumably the ancestral form of compound eyes. They are found in all arthropod groups, although they may have evolved more than once within this phylum. Some annelids and bivalves also have apposition eyes. They are also possessed by Limulus, the horseshoe crab, and there are suggestions that other chelicerates developed their simple eyes by reduction from a compound starting point. (Some caterpillars appear to have evolved compound eyes from simple eyes in the opposite fashion.) Apposition eyes work by gathering a number of images, one from each ommatidium, and combining them in the brain, with each ommatidium typically contributing a single point of information. The typical apposition eye has a lens focusing light from one direction on the rhabdom, while light from other directions is absorbed by the dark wall of the ommatidium. Superposition eyes The second type is named the superposition eye. The superposition eye is divided into three types: refracting, reflecting and parabolic superposition. The refracting superposition eye has a gap between the lens and the rhabdom, and no side wall. Each lens takes light at an angle to its axis and refracts it to the same angle on the other side. The result is an image at half the radius of the eye, which is where the tips of the rhabdoms are. This type of compound eye, for which a minimal size exists below which effective superposition cannot occur, is normally found in nocturnal insects, because it can create images up to 1000 times brighter than equivalent apposition eyes, though at the cost of reduced resolution. In the parabolic superposition compound eye type, seen in arthropods such as mayflies, the parabolic surfaces of the inside of each facet focus light from a reflector to a sensor array. Long-bodied decapod crustaceans such as shrimp, prawns, crayfish and lobsters are alone in having reflecting superposition eyes, which also have a transparent gap but use corner mirrors instead of lenses. Parabolic superposition This eye type functions by refracting light, then using a parabolic mirror to focus the image; it combines features of superposition and apposition eyes. Other Another kind of compound eye, found in males of Order Strepsiptera, employs a series of simple eyes—eyes having one opening that provides light for an entire image-forming retina. Several of these eyelets together form the strepsipteran compound eye, which is similar to the schizochroal compound eyes of some trilobites. Because each eyelet is a simple eye, it produces an inverted image; those images are combined in the brain to form one unified image. Because the aperture of an eyelet is larger than the facets of a compound eye, this arrangement allows vision under low light levels. Good fliers such as flies or honey bees, or prey-catching insects such as praying mantis or dragonflies, have specialised zones of ommatidia organised into a fovea area which gives acute vision. In the acute zone, the eyes are flattened and the facets larger. The flattening allows more ommatidia to receive light from a spot and therefore higher resolution. The black spot that can be seen on the compound eyes of such insects, which always seems to look directly at the observer, is called a pseudopupil. This occurs because the ommatidia which one observes "head-on" (along their optical axes) absorb the incident light, while those to one side reflect it. There are some exceptions from the types mentioned above.
Some insects have a so-called single lens compound eye, a transitional type which is something between a superposition type of the multi-lens compound eye and the single lens eye found in animals with simple eyes. Then there is the mysid shrimp, Dioptromysis paucispinosa. The shrimp has an eye of the refracting superposition type; at the rear, behind this in each eye, there is a single large facet that is three times the diameter of the others in the eye, and behind this is an enlarged crystalline cone. This projects an upright image on a specialised retina. The resulting eye is effectively a simple eye within a compound eye. Another version is a compound eye often referred to as "pseudofaceted", as seen in Scutigera. This type of eye consists of a cluster of numerous ommatidia on each side of the head, organised in a way that resembles a true compound eye. The body of Ophiocoma wendtii, a type of brittle star, is covered with ommatidia, turning its whole skin into a compound eye. The same is true of many chitons. The tube feet of sea urchins contain photoreceptor proteins, which together act as a compound eye; they lack screening pigments, but can detect the directionality of light by the shadow cast by the animal's opaque body. Nutrients The ciliary body is triangular in horizontal section and is coated by a double layer, the ciliary epithelium. The inner layer is transparent and covers the vitreous body, and is continuous from the neural tissue of the retina. The outer layer is highly pigmented, continuous with the retinal pigment epithelium, and constitutes the cells of the dilator muscle. The vitreous is the transparent, colourless, gelatinous mass that fills the space between the lens of the eye and the retina lining the back of the eye. It is produced by certain retinal cells. It is of rather similar composition to the cornea, but contains very few cells (mostly phagocytes which remove unwanted cellular debris in the visual field, as well as the hyalocytes of Balazs of the surface of the vitreous, which reprocess the hyaluronic acid), no blood vessels, and 98–99% of its volume is water (as opposed to 75% in the cornea) with salts, sugars, vitrosin (a type of collagen), a network of collagen type II fibres with the mucopolysaccharide hyaluronic acid, and also a wide array of proteins in micro amounts. Amazingly, with so little solid matter, it tautly holds the eye. Evolution Photoreception is phylogenetically very old, with various theories of phylogenesis. The common origin (monophyly) of all animal eyes is now widely accepted as fact. This is based upon the shared genetic features of all eyes; that is, all modern eyes, varied as they are, have their origins in a proto-eye believed to have evolved some 650–600 million years ago, and the PAX6 gene is considered a key factor in this. The majority of the advancements in early eyes are believed to have taken only a few million years to develop, since the first predator to gain true imaging would have touched off an "arms race" among all species that did not flee the photopic environment. Prey animals and competing predators alike would be at a distinct disadvantage without such capabilities and would be less likely to survive and reproduce. Hence multiple eye types and subtypes developed in parallel (except those of groups, such as the vertebrates, that were only forced into the photopic environment at a late stage). Eyes in various animals show adaptation to their requirements.
For example, the eye of a bird of prey has much greater visual acuity than a human eye, and in some cases can detect ultraviolet radiation. The different forms of eye in, for example, vertebrates and molluscs are examples of parallel evolution, despite their distant common ancestry. Phenotypic convergence of the geometry of cephalopod and most vertebrate eyes creates the impression that the vertebrate eye evolved from an imaging cephalopod eye, but this is not the case, as the reversed roles of their respective ciliary and rhabdomeric opsin classes and different lens crystallins show. The very earliest "eyes", called eye-spots, were simple patches of photoreceptor protein in unicellular animals. In multicellular beings, multicellular eyespots evolved, physically similar to the receptor patches for taste and smell. These eyespots could only sense ambient brightness: they could distinguish light and dark, but not the direction of the light source. Through gradual change, the eye-spots of species living in well-lit environments depressed into a shallow "cup" shape. The ability to slightly discriminate directional brightness was achieved by using the angle at which the light hit certain cells to identify the source. The pit deepened over time, the opening diminished in size, and the number of photoreceptor cells increased, forming an effective pinhole camera that was capable of dimly distinguishing shapes. However, the ancestors of modern hagfish, thought to be the protovertebrate, were evidently pushed to very deep, dark waters, where they were less vulnerable to sighted predators, and where it is advantageous to have a convex eye-spot, which gathers more light than a flat or concave one. This would have led to a somewhat different evolutionary trajectory for the vertebrate eye than for other animal eyes. The thin overgrowth of transparent cells over the eye's aperture, originally formed to prevent damage to the eyespot, allowed the segregated contents of the eye chamber to specialise into a transparent humour that optimised colour filtering, blocked harmful radiation, improved the eye's refractive index, and allowed functionality outside of water. The transparent protective cells eventually split into two layers, with circulatory fluid in between that allowed wider viewing angles and greater imaging resolution, and the thickness of the transparent layer gradually increased, in most species with the transparent crystallin protein. The gap between tissue layers naturally formed a biconvex shape, an ideal structure for a normal refractive index. Independently, a transparent layer and a nontransparent layer split forward from the lens: the cornea and iris. Separation of the forward layer again formed a humour, the aqueous humour. This increased refractive power and again eased circulatory problems. Formation of a nontransparent ring allowed more blood vessels, more circulation, and larger eye sizes. Relationship to life requirements Eyes are generally adapted to the environment and life requirements of the organism which bears them. For instance, the distribution of photoreceptors tends to match the area in which the highest acuity is required, with horizon-scanning organisms, such as those that live on the African plains, having a horizontal line of high-density ganglia, while tree-dwelling creatures which require good all-round vision tend to have a symmetrical distribution of ganglia, with acuity decreasing outwards from the centre.
Of course, for most eye types, it is impossible to diverge from a spherical form, so only the density of optical receptors can be altered. In organisms with compound eyes, it is the number of ommatidia rather than ganglia that reflects the region of highest data acquisition. Optical superposition eyes are constrained to a spherical shape, but other forms of compound eyes may deform to a shape where more ommatidia are aligned to, say, the horizon, without altering the size or density of individual ommatidia. Eyes of horizon-scanning organisms have stalks so they can be easily aligned to the horizon when this is inclined, for example, if the animal is on a slope. An extension of this concept is that the eyes of predators typically have a zone of very acute vision at their centre, to assist in the identification of prey. In deep water organisms, it may not be the centre of the eye that is enlarged. The hyperiid amphipods are deep water animals that feed on organisms above them. Their eyes are almost divided into two, with the upper region thought to be involved in detecting the silhouettes of potential prey—or predators—against the faint light of the sky above. Accordingly, deeper water hyperiids, where the light against which the silhouettes must be compared is dimmer, have larger "upper-eyes", and may lose the lower portion of their eyes altogether. In the giant Antarctic isopod Glyptonotus, a small ventral compound eye is physically completely separated from the much larger dorsal compound eye. Depth perception can be enhanced by having eyes which are enlarged in one direction; distorting the eye slightly allows the distance to the object to be estimated with a high degree of accuracy. Acuity is higher among male organisms that mate in mid-air, as they need to be able to spot and assess potential mates against a very large backdrop. On the other hand, the eyes of organisms which operate in low light levels, such as around dawn and dusk or in deep water, tend to be larger to increase the amount of light that can be captured. It is not only the shape of the eye that may be affected by lifestyle. Eyes can be the most visible parts of organisms, and this can act as a pressure on organisms to have more transparent eyes at the cost of function. Eyes may be mounted on stalks to provide better all-round vision, by lifting them above an organism's carapace; this also allows them to track predators or prey without moving the head. Physiology Visual acuity Visual acuity, or resolving power, is "the ability to distinguish fine detail" and is the property of cone cells. It is often measured in cycles per degree (CPD), which measures an angular resolution, or how much an eye can differentiate one object from another in terms of visual angles. Resolution in CPD can be measured by bar charts of different numbers of white/black stripe cycles. For example, if each pattern is 1.75 cm wide and is placed at 1 m distance from the eye, it will subtend an angle of 1 degree, so the number of white/black bar pairs on the pattern will be a measure of the cycles per degree of that pattern. The highest such number that the eye can resolve as stripes, or distinguish from a grey block, is then the measurement of visual acuity of the eye. For a human eye with excellent acuity, the maximum theoretical resolution is 50 CPD (1.2 arcminute per line pair, or a 0.35 mm line pair, at 1 m). A rat can resolve only about 1 to 2 CPD.
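The cycles-per-degree figures above follow from simple trigonometry. A minimal sketch of the worked example in the text; the 1.75 cm pattern width and 1 m viewing distance are the values given there, while the stripe count is an assumed illustration:

import math

def cycles_per_degree(n_pairs: int, pattern_width_m: float, distance_m: float) -> float:
    """Black/white stripe pairs per degree of visual angle subtended by the pattern."""
    angle_deg = math.degrees(2 * math.atan(pattern_width_m / (2 * distance_m)))
    return n_pairs / angle_deg

# A 1.75 cm wide pattern at 1 m subtends about 1 degree, as stated in the text:
print(math.degrees(2 * math.atan(0.0175 / 2)))      # roughly 1.0 degree

# If that pattern carried 50 stripe pairs, it would test about 50 CPD,
# the theoretical maximum quoted for a human eye with excellent acuity:
print(cycles_per_degree(50, 0.0175, 1.0))            # roughly 50 cycles per degree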
A horse has higher acuity through most of the visual field of its eyes than a human has, but does not match the high acuity of the human eye's central fovea region. Spherical aberration limits the resolution of a 7 mm pupil to about 3 arcminutes per line pair. At a pupil diameter of 3 mm, the spherical aberration is greatly reduced, resulting in an improved resolution of approximately 1.7 arcminutes per line pair. A resolution of 2 arcminutes per line pair, equivalent to a 1 arcminute gap in an optotype, corresponds to 20/20 (normal vision) in humans. However, in the compound eye, the resolution is related to the size of individual ommatidia and the distance between neighbouring ommatidia. Physically these cannot be reduced in size to achieve the acuity seen with single-lensed eyes as in mammals.
Compound eyes have a much lower acuity than vertebrate eyes. Colour perception "Colour vision is the faculty of the organism to distinguish lights of different spectral qualities." All organisms are restricted to a small range of electromagnetic spectrum; this varies from creature to creature, but is mainly between wavelengths of 400 and 700 nm. This is a rather small section of the electromagnetic spectrum, probably reflecting the submarine evolution of the organ: water blocks out all but two small windows of the EM spectrum, and there has been no evolutionary pressure among land animals to broaden this range.The most sensitive pigment, rhodopsin, has a peak response at 500 nm. Small changes to the genes coding for this protein can tweak the peak response by a few nm; pigments in the lens can also filter incoming light, changing the peak response. Many organisms are unable to discriminate between colours, seeing instead in shades of grey; colour vision necessitates a range of pigment cells which are primarily sensitive to smaller ranges of the spectrum. In primates, geckos, and other organisms, these take the form of cone cells, from which the more sensitive rod cells evolved. Even if organisms are physically capable of discriminating different colours, this does not necessarily mean that they can perceive the different colours; only with behavioural tests can this be deduced.Most organisms with colour vision can detect ultraviolet light. This high energy light can be damaging to receptor cells. With a few exceptions (snakes, placental mammals), most organisms avoid these effects by having absorbent oil droplets around their cone cells. The alternative, developed by organisms that had lost these oil droplets in the course of evolution, is to make the lens impervious to UV light—this precludes the possibility of any UV light being detected, as it does not even reach the retina. Rods and cones The retina contains two major types of light-sensitive photoreceptor cells used for vision: the rods and the cones. Rods cannot distinguish colours, but are responsible for low-light (scotopic) monochrome (black-and-white) vision; they work well in dim light as they contain a pigment, rhodopsin (visual purple), which is sensitive at low light intensity, but saturates at higher (photopic) intensities. Rods are distributed throughout the retina but there are none at the fovea and none at the blind spot. Rod density is greater in the peripheral retina than in the central retina. Cones are responsible for colour vision. They require brighter light to function than rods require. In humans, there are three types of cones, maximally sensitive to long-wavelength, medium-wavelength, and short-wavelength light (often referred to as red, green, and blue, respectively, though the sensitivity peaks are not actually at these colours). The colour seen is the combined effect of stimuli to, and responses from, these three types of cone cells. Cones are mostly concentrated in and near the fovea. Only a few are present at the sides of the retina. Objects are seen most sharply in focus when their images fall on the fovea, as when one looks at an object directly. Cone cells and rods are connected through intermediate cells in the retina to nerve fibres of the optic nerve. When rods and cones are stimulated by light, they connect through adjoining cells within the retina to send an electrical signal to the optic nerve fibres. The optic nerves send off impulses through these fibres to the brain. 
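The statement that perceived colour is the combined effect of the three cone types can be illustrated with a toy calculation. A minimal sketch: the Gaussian sensitivity curves, peak wavelengths and width below are rough illustrative assumptions rather than data from the article, and a light's effect on each cone type is modelled simply as the overlap between its wavelength and that cone's sensitivity curve:

import math

# Rough illustrative peak sensitivities (nm) for long-, medium- and
# short-wavelength cones; real curves are broader and asymmetric.
CONE_PEAKS = {"L": 560.0, "M": 530.0, "S": 420.0}
WIDTH_NM = 40.0  # assumed Gaussian width, purely illustrative

def cone_response(peak_nm: float, light_nm: float) -> float:
    """Relative response of one cone type to monochromatic light."""
    return math.exp(-((light_nm - peak_nm) / WIDTH_NM) ** 2)

def cone_triplet(light_nm: float) -> dict:
    """Relative L/M/S responses; the brain compares these to infer colour."""
    return {name: round(cone_response(peak, light_nm), 3)
            for name, peak in CONE_PEAKS.items()}

print(cone_triplet(580))  # yellowish light: strong L, moderate M, negligible S
print(cone_triplet(450))  # bluish light: strong S, weak M, negligible L

Only the ratios between the three responses carry colour information, which is why a single receptor type on its own (as in rod vision) cannot separate a change in wavelength from a change in intensity.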
Pigmentation The pigment molecules used in the eye are various, but can be used to define the evolutionary distance between different groups, and can also be an aid in determining which are closely related—although problems of convergence do exist. Opsins are the pigments involved in photoreception. Other pigments, such as melanin, are used to shield the photoreceptor cells from light leaking in from the sides. The opsin protein group evolved long before the last common ancestor of animals, and has continued to diversify since. There are two types of opsin involved in vision: c-opsins, which are associated with ciliary-type photoreceptor cells, and r-opsins, associated with rhabdomeric photoreceptor cells. The eyes of vertebrates usually contain ciliary cells with c-opsins, and (bilaterian) invertebrates have rhabdomeric cells in the eye with r-opsins. However, some ganglion cells of vertebrates express r-opsins, suggesting that their ancestors used this pigment in vision, and that remnants survive in the eyes. Likewise, c-opsins have been found to be expressed in the brain of some invertebrates. They may have been expressed in ciliary cells of larval eyes, which were subsequently resorbed into the brain on metamorphosis to the adult form. C-opsins are also found in some derived bilaterian-invertebrate eyes, such as the pallial eyes of the bivalve molluscs; however, the lateral eyes (which were presumably the ancestral type for this group, if eyes evolved once there) always use r-opsins. Cnidaria, which are an outgroup to the taxa mentioned above, express c-opsins—but r-opsins are yet to be found in this group. Incidentally, the melanin produced in the cnidaria is produced in the same fashion as that in vertebrates, suggesting the common descent of this pigment. Additional images See also Adaptation (eye) (night vision) Emission theory (vision) Eye color Eye development Eye disease Eye injury Eye movement Eyelid Nictitating membrane Ophthalmology Orbit (anatomy) Simple eye in invertebrates Tapetum lucidum Tears Notes References Citations Bibliography Ali, Mohamed Ather; Klyne, M.A. (1985). Vision in Vertebrates. New York: Plenum Press. ISBN 978-0-306-42065-8. Further reading Yong, Ed (14 January 2016). "Inside the Eye: Nature's Most Exquisite Creation". National Geographic. External links Evolution of the eye Anatomy of the eye – flash animated interactive. (Adobe Flash) Webvision. The organisation of the retina and visual system. An in-depth treatment of retinal function, open to all but geared most towards graduate students. Eye strips images of all but bare essentials before sending visual information to the brain, UC Berkeley research shows
Hyperparathyroidism
Hyperparathyroidism is an increase in parathyroid hormone (PTH) levels in the blood. This occurs from a disorder either within the parathyroid glands (primary hyperparathyroidism) or as a response to external stimuli (secondary hyperparathyroidism). Symptoms of hyperparathyroidism are caused by inappropriately normal or elevated blood calcium leaving the bones and flowing into the blood stream in response to increased production of parathyroid hormone. In healthy people, when blood calcium levels are high, parathyroid hormone levels should be low. With long-standing hyperparathyroidism, the most common symptom is kidney stones. Other symptoms may include bone pain, weakness, depression, confusion, and increased urination. Both primary and secondary hyperparathyroidism may result in osteoporosis (weakening of the bones). In 80% of cases, primary hyperparathyroidism is due to a single benign tumor known as a parathyroid adenoma. Most of the remainder are due to several of these adenomas. Very rarely it may be due to parathyroid cancer. Secondary hyperparathyroidism typically occurs due to vitamin D deficiency, chronic kidney disease, or other causes of low blood calcium. The diagnosis of primary hyperparathyroidism is made by finding elevated calcium and PTH in the blood. Primary hyperparathyroidism may only be cured by removing the adenoma or overactive parathyroid glands. In those without symptoms who have mildly increased blood calcium levels, normal kidneys, and normal bone density, monitoring may be all that is required. The medication cinacalcet may also be used to decrease PTH levels in those unable to have surgery, although it is not a cure. In those with very high blood calcium levels, treatment may include large amounts of intravenous normal saline. Low vitamin D should be corrected in those with secondary hyperparathyroidism, but correcting low vitamin D pre-surgery is controversial for those with primary hyperparathyroidism. Low vitamin D levels should be corrected post-parathyroidectomy. Primary hyperparathyroidism is the most common type. In the developed world, between one and four per thousand people are affected. It occurs three times more often in women than men and is often diagnosed between the ages of 50 and 60 but is not uncommon before then. The disease was first described in the 1700s. In the late 1800s, it was determined to be related to the parathyroid. Surgery as a treatment was first carried out in 1925. Signs and symptoms In primary hyperparathyroidism, about 75% of people are "asymptomatic". While most primary patients are asymptomatic at the time of diagnosis, asymptomatic is poorly defined and represents only those without "obvious clinical sequelae" such as kidney stones, bone disease, or hypercalcemic crisis. These "asymptomatic" patients may have other symptoms such as depression, anxiety, gastrointestinal distress, and neuromuscular problems that are not counted as symptoms. The problem is often picked up incidentally during blood work for other reasons, and the test results show a higher amount of calcium in the blood than normal. Many people only have non-specific symptoms. Common manifestations of hypercalcemia include weakness and fatigue, depression, bone pain, muscle soreness (myalgias), decreased appetite, feelings of nausea and vomiting, constipation, pancreatitis, polyuria, polydipsia, cognitive impairment, kidney stones, vertigo and osteopenia or osteoporosis. A history of acquired racquet nails (brachyonychia) may be indicative of bone resorption.
Radiographically, hyperparathyroidism has a pathognomonic finding of rugger jersey spine. Parathyroid adenomas are very rarely detectable on clinical examination. Surgical removal of a parathyroid tumor eliminates the symptoms in most patients. In secondary hyperparathyroidism due to lack of vitamin D absorption, the parathyroid gland is behaving normally; clinical problems are due to bone resorption and manifest as bone syndromes such as rickets, osteomalacia, and renal osteodystrophy. Causes Radiation exposure increases the risk of primary hyperparathyroidism. A number of genetic conditions including multiple endocrine neoplasia syndromes also increase the risk. Parathyroid adenomas have been linked with DDT, although a causal link has not yet been established. Mechanism Normal parathyroid glands measure the ionized calcium (Ca2+) concentration in the blood and secrete parathyroid hormone accordingly; if the ionized calcium rises above normal, the secretion of PTH is decreased, whereas when the Ca2+ level falls, parathyroid hormone secretion is increased. Secondary hyperparathyroidism occurs if the calcium level is abnormally low. The normal glands respond by secreting parathyroid hormone at a persistently high rate. This typically occurs when the 1,25 dihydroxyvitamin D3 levels in the blood are low and hypocalcemia is present. A lack of 1,25 dihydroxyvitamin D3 can result from a deficient dietary intake of vitamin D, or from a lack of exposure of the skin to sunlight, so the body cannot make its own vitamin D from cholesterol. The resulting hypovitaminosis D is usually due to a partial combination of both factors. Vitamin D3 (or cholecalciferol) is converted to 25-hydroxyvitamin D (or calcidiol) by the liver, from where it is transported via the circulation to the kidneys, and it is converted into the active hormone, 1,25 dihydroxyvitamin D3. Thus, a third cause of secondary hyperparathyroidism is chronic kidney disease. Here the ability to manufacture 1,25 dihydroxyvitamin D3 is compromised, resulting in hypocalcemia. Diagnosis The gold standard of diagnosis is the PTH immunoassay. Once an elevated PTH has been confirmed, the goal of diagnosis is to determine whether the hyperparathyroidism is primary or secondary in origin by obtaining a serum calcium level: in primary hyperparathyroidism the PTH is high with a high serum calcium, whereas in secondary hyperparathyroidism the PTH is high with a low or normal serum calcium. Tertiary hyperparathyroidism has a high PTH and high serum calcium. It is differentiated from primary hyperparathyroidism by a history of chronic kidney failure and secondary hyperparathyroidism. Hyperparathyroidism can cause hyperchloremia and increase renal bicarbonate loss, which may result in a normal anion gap metabolic acidosis. Differential diagnosis Familial benign hypocalciuric hypercalcaemia can present with similar lab changes. In this condition, the calcium-to-creatinine clearance ratio, however, is typically under 0.01 (a worked example of this ratio follows the blood test discussion below). Blood tests Intact PTH In primary hyperparathyroidism, parathyroid hormone (PTH) levels are either elevated or "inappropriately normal" in the presence of elevated calcium. Typically, PTH levels vary greatly over time in the affected patient and (as with Ca and Ca++ levels) must be retested several times to see the pattern. The currently accepted test for PTH is intact PTH, which detects only relatively intact and biologically active PTH molecules. Older tests often detected other, inactive fragments. Even intact PTH may be inaccurate in patients with kidney dysfunction. Intact PTH blood tests may be falsely low if biotin has been ingested in the few days before the blood test.
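The calcium-to-creatinine clearance ratio mentioned under the differential diagnosis above can be computed from paired serum and 24-hour urine measurements. A minimal sketch; the formula is the standard clearance ratio, the numbers are invented for illustration (they are not from the source), and this is not clinical guidance:

def calcium_creatinine_clearance_ratio(urine_ca, serum_ca, urine_cr, serum_cr):
    """(Urine Ca x serum Cr) / (serum Ca x urine Cr).
    All four values must be in consistent units (e.g. all mg/dL or all mmol/L)."""
    return (urine_ca * serum_cr) / (serum_ca * urine_cr)

# Invented example values in mg/dL-equivalent units:
ratio = calcium_creatinine_clearance_ratio(
    urine_ca=5.0, serum_ca=11.0, urine_cr=90.0, serum_cr=1.0)
print(round(ratio, 3))   # about 0.005, i.e. below 0.01, pointing towards FHH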
Calcium levels In cases of primary hyperparathyroidism or tertiary hyperparathyroidism, heightened PTH leads to increased serum calcium (hypercalcemia) due to: increased bone resorption, allowing the flow of calcium from bone to blood; reduced kidney clearance of calcium; and increased intestinal calcium absorption. Serum phosphate In primary hyperparathyroidism, serum phosphate levels are abnormally low as a result of decreased reabsorption of phosphate in the kidney tubules. However, this is only present in about 50% of cases. This contrasts with secondary hyperparathyroidism, in which serum phosphate levels are generally elevated because of kidney disease. Alkaline phosphatase Alkaline phosphatase levels are usually elevated in hyperparathyroidism. In primary hyperparathyroidism, levels may remain within the normal range, but this is inappropriately normal given the increased levels of plasma calcium. Nuclear medicine A technetium sestamibi scan is a procedure in nuclear medicine that identifies hyperparathyroidism (or parathyroid adenoma). It is used by surgeons to locate ectopic parathyroid adenomas, most commonly found in the anterior mediastinum. Classification Primary Primary hyperparathyroidism results from a hyperfunction of the parathyroid glands themselves. The oversecretion of PTH is due to a parathyroid adenoma, parathyroid hyperplasia, or rarely, a parathyroid carcinoma. This disease is often characterized by the quartet "stones, bones, groans, and psychiatric overtones", referring to the presence of kidney stones, hypercalcemia, constipation, and peptic ulcers, as well as depression, respectively. In a minority of cases, this occurs as part of a multiple endocrine neoplasia (MEN) syndrome, either type 1 (caused by a mutation in the gene MEN1) or type 2a (caused by a mutation in the gene RET), which is also associated with the adrenal tumor pheochromocytoma. Other mutations that have been linked to parathyroid neoplasia include mutations in the genes HRPT2 and CASR. Patients with bipolar disorder who are receiving long-term lithium treatment are at increased risk for hyperparathyroidism. Elevated calcium levels are found in 15% to 20% of patients who have been taking lithium long-term. However, only a few of these patients have significantly elevated levels of parathyroid hormone and clinical symptoms of hyperparathyroidism. Lithium-associated hyperparathyroidism is usually caused by a single parathyroid adenoma. Secondary Secondary hyperparathyroidism is due to physiological (i.e. appropriate) secretion of parathyroid hormone (PTH) by the parathyroid glands in response to hypocalcemia (low blood calcium levels). The most common causes are vitamin D deficiency (caused by lack of sunlight, diet or malabsorption) and chronic kidney failure. Lack of vitamin D leads to reduced calcium absorption by the intestine, leading to hypocalcemia and increased parathyroid hormone secretion. This increases bone resorption. In chronic kidney failure, the problem is more specifically failure to convert vitamin D to its active form in the kidney. The bone disease in secondary hyperparathyroidism caused by kidney failure is termed renal osteodystrophy. Tertiary Tertiary hyperparathyroidism is seen in those with long-term secondary hyperparathyroidism, which eventually leads to hyperplasia of the parathyroid glands and a loss of response to serum calcium levels. This disorder is most often seen in patients with end-stage kidney disease and involves autonomous parathyroid activity.
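The classification just described (primary, secondary and tertiary disease all share a raised PTH but differ in serum calcium and in history) can be summarised in a short decision sketch. This is only an illustration of the logic in the text, with threshold handling left as booleans for simplicity; it is an assumption-laden teaching aid, not a diagnostic tool:

def classify_hyperparathyroidism(pth_elevated: bool,
                                 calcium_high: bool,
                                 long_standing_ckd: bool) -> str:
    """Rough sketch of the logic described in the text:
    high PTH with high calcium suggests primary (or tertiary if there is
    long-standing chronic kidney disease with prior secondary disease);
    high PTH with low or normal calcium suggests secondary disease."""
    if not pth_elevated:
        return "hyperparathyroidism unlikely; consider other causes of hypercalcemia"
    if calcium_high:
        return "tertiary (autonomous)" if long_standing_ckd else "primary"
    return "secondary (appropriate response to hypocalcemia, e.g. vitamin D deficiency or CKD)"

print(classify_hyperparathyroidism(True, True, False))   # primary
print(classify_hyperparathyroidism(True, False, False))  # secondary
print(classify_hyperparathyroidism(True, True, True))    # tertiary (autonomous)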
Treatment Treatment depends on the type of hyperparathyroidism encountered. Primary People with primary hyperparathyroidism who are symptomatic benefit from parathyroidectomy—surgery to remove the parathyroid tumor (parathyroid adenoma). Indications for surgery are: symptomatic hyperparathyroidism, or asymptomatic hyperparathyroidism with any of the following: 24-hour urinary calcium > 400 mg; serum calcium > 1 mg/dl above the upper limit of normal; creatinine clearance > 30% below normal for the patient's age; bone density > 2.5 standard deviations below peak (i.e., T-score of −2.5); or age < 50. A 2020 Cochrane systematic review compared the surgical procedures of minimally invasive parathyroidectomy and classically used bilateral neck exploration; however, it did not find one approach to be superior to the other in either benefits or risks. Surgery can rarely result in hypoparathyroidism. Secondary In people with secondary hyperparathyroidism, the high PTH levels are an appropriate response to low calcium, and treatment must be directed at the underlying cause of this (usually vitamin D deficiency or chronic kidney failure). If this is successful, PTH levels return to normal levels, unless PTH secretion has become autonomous (tertiary hyperparathyroidism). Calcimimetics A calcimimetic (such as cinacalcet) is a potential therapy for some people with severe hypercalcemia and primary hyperparathyroidism who are unable to undergo parathyroidectomy, and for secondary hyperparathyroidism in those on dialysis. Treatment of secondary hyperparathyroidism with a calcimimetic in those on dialysis for CKD does not alter the risk of early death; however, it does decrease the likelihood of needing a parathyroidectomy. Treatment carries the risk of low blood calcium levels and vomiting. History The oldest known case was found in a cadaver from an Early Neolithic cemetery in southwest Germany. Notes References External links Hyperparathyroidism at Curlie Overview at Endocrine and Metabolic Diseases Information Service Insogna KL (September 2018). "Primary Hyperparathyroidism". The New England Journal of Medicine (Review). 379 (11): 1050–1059. CiteSeerX 10.1.1.322.5883. doi:10.1056/NEJMcp1714213. PMID 30207907. S2CID 205069527.
Hallux varus
Hallux varus is a deformity of the great toe joint where the hallux (great toe) is deviated medially (towards the midline of the body) away from the first metatarsal bone. The hallux usually moves in the transverse plane. Unlike hallux valgus, also known as hallux abducto valgus or bunion, hallux varus is uncommon in the West but it is common in cultures where the population remains unshod. Photos References External links
Cavity
Cavity may refer to: Biology and healthcare Body cavity, a fluid-filled space in many animals where organs typically develop Gastrovascular cavity, the primary organ of digestion and circulation in cnidarians and flatworms Dental cavity or tooth decay, damage to the structure of a tooth Lung cavity, an air-filled space within the lung Radio frequency resonance Microwave cavity or RF cavity, a cavity resonator in the radio frequency range, for example used in particle accelerators Optical cavity, the cavity resonator of a laser Resonant cavity, a device designed to select for waves of particular wavelengths Other uses Cavity (band), a sludge metal band from Miami, Florida Cavity method, a mathematical method for solving some mean-field models Cavity wall, a wall consisting of two skins with a cavity See also Cavitation, the phenomenon of partial vacuums forming in a fluid, for example around propellers Cavitary pneumonia, a type of pneumonia in which a hole is formed in the lung Cavity Search (disambiguation) Hollow (disambiguation)
Klumpke paralysis
Klumpke's paralysis is a variety of partial palsy of the lower roots of the brachial plexus. The brachial plexus is a network of spinal nerves that originates in the back of the neck, extends through the axilla (armpit), and gives rise to nerves to the upper limb. The paralytic condition is named after Augusta Déjerine-Klumpke. Signs and symptoms Symptoms include intrinsic minus hand deformity, paralysis of intrinsic hand muscles, and C8/T1 dermatome distribution numbness. Involvement of T1 may result in Horner's syndrome, with ptosis and miosis. Weakness or lack of ability to use specific muscles of the shoulder or arm. It can be contrasted to Erb–Duchenne palsy, which affects C5 and C6. Cause Klumpke's paralysis is a form of paralysis involving the muscles of the forearm and hand, resulting from a brachial plexus injury in which the eighth cervical (C8) and first thoracic (T1) nerves are injured either before or after they have joined to form the lower trunk. The subsequent paralysis affects, principally, the intrinsic muscles of the hand (notably the interossei, thenar and hypothenar muscles) and the flexors of the wrist and fingers (notably flexor carpi ulnaris and ulnar half of the flexor digitorum profundus). The classic presentation of Klumpke's palsy is the “claw hand” where the forearm is supinated, the wrist extended and the fingers flexed. If Horner syndrome is present, there is miosis (constriction of the pupils) in the affected eye. The injury can result from difficulties in childbirth. The most common aetiological mechanism is caused by a traumatic vaginal delivery. The risk is greater when the mother is small or when the infant is of large weight. Risk of injury to the lower brachial plexus results from traction on an abducted arm, as with an infant being pulled from the birth canal by an extended arm above the head or with someone catching themselves by a branch as they fall from a tree. Lower brachial plexus injuries should be distinguished from upper brachial plexus injuries, which can also result from birth trauma but give a different syndrome of weakness known as Erb's palsy. Other trauma, such as motorcycle accidents, that have similar spinal cord injuries to C8 and T1, also show the same symptoms of Klumpke's paralysis. Diagnosis Electromyography and nerve conduction velocity testing can help to diagnose the location and severity of the lesion. Otherwise, the diagnosis is one made clinically after a thorough neurologic exam. Treatment Treatment effectiveness varies depending on the initial severity of the injury. Physiotherapy is used to increase strength of muscle and improve muscle functions. Electrical modalities such as electric nerve stimulation can also be used. Occupational therapy to provide exercises and coping mechanisms to improve the patient's ability to perform activities of daily living. Goals of therapy are to improve tactile sensation, proprioception, and range of motion. Acute treatment of a severe injury will involve repositioning and splinting or casting of the extremity. Epidemiology Klumpke palsy is listed as a rare disease by the Office of Rare Diseases (ORD) of the National Institutes of Health (NIH). This means that Klumpke palsy, or a subtype of Klumpke palsy, affects fewer than 200,000 people in the US population. See also Dystocia Erb's palsy References External links
Franceschetti–Klein syndrome
Franceschetti–Klein syndrome (also known as "mandibulofacial dysostosis") is a syndrome that includes palpebral antimongoloid fissures, hypoplasia of the facial bones, macrostomia, vaulted palate, malformations of both the external and internal ear, buccal-auricular fistula, abnormal development of the neck with stretching of the cheeks, accessory facial fissures, and skeletal deformities. It is sometimes equated with Treacher Collins syndrome. See also Dysostosis References External links
Aspergillosis
Aspergillosis is a fungal infection, usually of the lungs, caused by the genus Aspergillus, a common mould that is frequently breathed in from the surrounding air but does not usually affect most people. It generally occurs in people with lung diseases such as asthma, cystic fibrosis or tuberculosis, or those who have had a stem cell or organ transplant, and those who cannot fight infection because of medications they take such as steroids and some cancer treatments. Rarely, it can affect the skin. Aspergillosis occurs in humans, birds and other animals. Aspergillosis occurs in chronic or acute forms which are clinically very distinct. Most cases of acute aspergillosis occur in people with severely compromised immune systems, e.g. those undergoing bone marrow transplantation. Chronic colonization or infection can cause complications in people with underlying respiratory illnesses, such as asthma, cystic fibrosis, sarcoidosis, tuberculosis, or chronic obstructive pulmonary disease. Most commonly, aspergillosis occurs in the form of chronic pulmonary aspergillosis (CPA), aspergilloma, or allergic bronchopulmonary aspergillosis (ABPA). Some forms are intertwined; for example, ABPA and simple aspergilloma can progress to CPA. Other, noninvasive manifestations include fungal sinusitis (both allergic in nature and with established fungal balls), otomycosis (ear infection), keratitis (eye infection), and onychomycosis (nail infection). In most instances, these are less severe and curable with effective antifungal treatment. The most frequently identified pathogens are Aspergillus fumigatus and Aspergillus flavus, ubiquitous organisms capable of living under extensive environmental stress. Most people are thought to inhale thousands of Aspergillus spores daily but without effect due to an efficient immune response. Taken together, the major chronic, invasive, and allergic forms of aspergillosis account for around 600,000 deaths annually worldwide. Signs and symptoms A fungus ball in the lungs may cause no symptoms and may be discovered only with a chest X-ray, or it may cause repeated coughing up of blood, chest pain, and occasionally severe, even fatal, bleeding. A rapidly invasive Aspergillus infection in the lungs often causes cough, fever, chest pain, and difficulty breathing. Poorly controlled aspergillosis can disseminate through the blood to cause widespread organ damage. Symptoms include fever, chills, shock, delirium, seizures, and blood clots. The person may develop kidney failure, liver failure (causing jaundice), and breathing difficulties. Death can occur quickly. Aspergillosis of the ear canal causes itching and occasionally pain. Fluid draining overnight from the ear may leave a stain on the pillow. Aspergillosis of the sinuses causes a feeling of congestion and sometimes pain or discharge. It can extend beyond the sinuses. In addition to the symptoms, an X-ray or computerised tomography (CT) scan of the infected area provides clues for making the diagnosis. Whenever possible, a doctor sends a sample of infected material to a laboratory to confirm identification of the fungus. Cause Aspergillosis is caused by Aspergillus, a common mold, which tends to affect people who already have a lung disease such as cystic fibrosis or asthma, or who cannot fight infection themselves. The most common causative species is Aspergillus fumigatus.
Risk factors People who are immunocompromised — such as patients undergoing hematopoietic stem cell transplantation or chemotherapy for leukaemia, or those with AIDS — are at an increased risk for invasive aspergillosis infections. These people may have neutropenia or corticoid-induced immunosuppression as a result of medical treatments. Neutropenia is often caused by extremely cytotoxic medications such as cyclophosphamide. Cyclophosphamide interferes with cellular replication, including that of white blood cells such as neutrophils. A decreased neutrophil count inhibits the ability of the body to mount immune responses against pathogens. Although tumor necrosis factor alpha (TNF-α) — a signaling molecule related to acute inflammation responses — is produced, the abnormally low number of neutrophils present in neutropenic patients leads to a depressed inflammatory response. If the underlying neutropenia is not corrected, rapid and uncontrolled hyphal growth of the invasive fungi will occur and result in negative health outcomes. In addition to decreased neutrophil degranulation, the antiviral response against influenza and SARS-CoV-2 viruses, mediated by type I and type II interferon, is diminished jointly with the local antifungal immune response measured in the lungs of patients with IAPA (influenza-associated pulmonary aspergillosis) and CAPA (COVID-19-associated pulmonary aspergillosis). Diagnosis On chest X-ray and CT, pulmonary aspergillosis classically manifests as a halo sign and, later, an air crescent sign. In hematologic patients with invasive aspergillosis, the galactomannan test can make the diagnosis in a noninvasive way. False-positive Aspergillus galactomannan tests have been found in patients on intravenous treatment with some antibiotics or with fluids containing gluconate or citric acid, such as some transfused platelets, parenteral nutrition, or PlasmaLyte. On microscopy, Aspergillus species are reliably demonstrated by silver stains, e.g., the Gridley stain or Gomori methenamine-silver. These give the fungal walls a gray-black colour. The hyphae of Aspergillus species range in diameter from 2.5 to 4.5 μm. They have septate hyphae, but these are not always apparent, and in such cases they may be mistaken for Zygomycota. Aspergillus hyphae tend to have dichotomous branching that is progressive and primarily at acute angles of around 45°. Prevention Prevention of aspergillosis involves a reduction of mold exposure via environmental infection-control measures. Antifungal prophylaxis can be given to high-risk patients. Posaconazole is often given as prophylaxis in severely immunocompromised patients. Screening A systematic review has evaluated the diagnostic accuracy of polymerase chain reaction (PCR) tests in people with defective immune systems resulting from medical treatment such as chemotherapy. Evidence suggests PCR tests have moderate diagnostic accuracy when used for screening for invasive aspergillosis in high-risk groups. CT and MRI are vital to diagnosis; however, a biopsy of the affected area is highly recommended to confirm the diagnosis. Treatment The current medical treatments for aggressive invasive aspergillosis include voriconazole and liposomal amphotericin B in combination with surgical debridement. For the less aggressive allergic bronchopulmonary aspergillosis, findings suggest the use of oral steroids for a prolonged period of time, preferably for 6–9 months, in allergic aspergillosis of the lungs.
Itraconazole is given with the steroids, as it is considered to have a "steroid-sparing" effect, making the steroids more effective and allowing a lower dose. Other drugs, such as amphotericin B, caspofungin (in combination therapy only), flucytosine (in combination therapy only), or itraconazole, are also used to treat this fungal infection. However, a growing proportion of infections are resistant to the triazoles. A. fumigatus, the most commonly infecting species, is intrinsically resistant to fluconazole. Epidemiology Aspergillosis is thought to affect more than 14 million people worldwide, with allergic bronchopulmonary aspergillosis (ABPA, >4 million), severe asthma with fungal sensitization (>6.5 million), and chronic pulmonary aspergillosis (CPA, ~3 million) being considerably more prevalent than invasive aspergillosis (IA, >300,000). Other common conditions include Aspergillus bronchitis, Aspergillus rhinosinusitis (many millions), otitis externa, and Aspergillus onychomycosis (10 million). Alterations in the composition and function of the lung microbiome and mycobiome have been associated with an increasing number of chronic pulmonary diseases such as COPD, cystic fibrosis, chronic rhinosinusitis and asthma. Society and culture During the COVID-19 pandemic in 2020–21, COVID-19-associated pulmonary aspergillosis was reported in some people who had been admitted to hospital and received long-term steroid treatment. Animals While relatively rare in humans, aspergillosis is a common and dangerous infection in birds, particularly in pet parrots. Mallards and other ducks are particularly susceptible, as they often resort to poor food sources during bad weather. Captive raptors, such as falcons and hawks, are susceptible to this disease if they are kept in poor conditions and especially if they are fed pigeons, which are often carriers of "asper". It can be acute in chicks, but chronic in mature birds. In the United States, aspergillosis has been the culprit in several rapid die-offs among waterfowl. From 8 December until 14 December 2006, over 2,000 mallards died near Burley, Idaho, an agricultural community about 150 miles southeast of Boise. Mouldy waste grain from the farmland and feedlots in the area is the suspected source. A similar aspergillosis outbreak caused by mouldy grain killed 500 mallards in Iowa in 2005. While no connection has been found between aspergillosis and the H5N1 strain of avian influenza (commonly called "bird flu"), rapid die-offs caused by aspergillosis can spark fears of bird flu outbreaks. Laboratory analysis is the only way to distinguish bird flu from aspergillosis. In dogs, aspergillosis is an uncommon disease typically affecting only the nasal passages (nasal aspergillosis). This is much more common in dolichocephalic breeds. It can also spread to the rest of the body; this is termed disseminated aspergillosis and is rare, usually affecting individuals with underlying immune disorders. In 2019, an outbreak of aspergillosis struck the rare kakapo, a large flightless parrot endemic to New Zealand. By June the disease had killed seven of the birds, whose total population at the time was only 142 adults and 72 chicks. One-fifth of the population was infected with the disease and the entire species was considered at risk of extinction.
See also Other ways in which Aspergillus can cause disease in mammals: Primary cutaneous aspergillosis Aflatoxin References External links Aspergillosis, MedlinePlus, US National Library of Medicine Aspergillus & Aspergillosis Website National Aspergillosis Centre, Manchester, UK
Metabolic disorder
A metabolic disorder is a disorder that negatively alters the body's processing and distribution of macronutrients such as proteins, fats, and carbohydrates. Metabolic disorders can happen when abnormal chemical reactions in the body alter the normal metabolic process. They can also be defined as inherited single-gene anomalies, most of which are autosomal recessive. Signs and symptoms Some of the symptoms that can occur with metabolic disorders are lethargy, weight loss, jaundice and seizures. The symptoms expressed vary with the type of metabolic disorder. There are four categories of symptoms: acute symptoms, late-onset acute symptoms, progressive general symptoms and permanent symptoms. Causes Inherited metabolic disorders are one cause of metabolic disorders, and occur when a defective gene causes an enzyme deficiency. These diseases, of which there are many subtypes, are known as inborn errors of metabolism. Metabolic diseases can also occur when the liver or pancreas does not function properly. Types The principal classes of metabolic disorders are: Diagnosis Metabolic disorders can be present at birth, and many can be identified by routine screening. If a metabolic disorder is not identified early, then it may be diagnosed later in life, when symptoms appear. Specific blood and DNA tests can be done to diagnose genetic metabolic disorders. The gut microbiota, the population of microbes that live in the human digestive system, also has an important part in metabolism and generally has a positive function for its host. In terms of pathophysiological/mechanism interactions, an abnormal gut microbiota can play a role in metabolic disorder–related obesity. Screening Metabolic disorder screening can be done in newborns via blood, skin, or hearing tests. Management Metabolic disorders can be treatable by nutrition management, especially if detected early. It is important for dieticians to have knowledge of the genotype to create a treatment that will be more effective for the individual. See also Metabolic syndrome Metabolic myopathies Lysosomal storage disease Deficiency disease Hypermetabolism References Further reading Hoffmann, Georg F.; Zschocke, Johannes; Nyhan, William L. (21 November 2009). Inherited Metabolic Diseases: A Clinical Approach. Springer. ISBN 9783540747239. Gonzalez-Campoy JM, St Jeor ST, Castorino K, Ebrahim A, Hurley D, Jovanovic L, Mechanick JI, Petak SM, Yu YH, Harris KA, Kris-Etherton P, Kushner R, Molini-Blandford M, Nguyen QT, Plodkowski R, Sarwer DB, Thomas KT, American Association of Clinical Endocrinologists, American College of Endocrinology and the Obesity Society (September–October 2013). "Clinical practice guidelines for healthy eating for the prevention and treatment of metabolic and endocrine diseases in adults: cosponsored by the American Association of Clinical Endocrinologists/the American College of Endocrinology and the Obesity Society". Endocr Pract. 19 (Suppl 3): 1–82. doi:10.4158/EP13155.GL. PMID 24129260. Archived from the original on 4 March 2016. Retrieved 27 July 2015. External links "Metabolic disorders". KidsHealth.org. Retrieved 27 July 2015.
Weakness
Weakness is a symptom of a number of different conditions. The causes are many and can be divided into conditions that have true or perceived muscle weakness. True muscle weakness is a primary symptom of a variety of skeletal muscle diseases, including muscular dystrophy and inflammatory myopathy. It occurs in neuromuscular junction disorders, such as myasthenia gravis. Pathophysiology Muscle cells work by detecting a flow of electrical impulses from the brain, which signals them to contract through the release of calcium by the sarcoplasmic reticulum. Fatigue (reduced ability to generate force) may occur due to the nerve, or within the muscle cells themselves. New research from scientists at Columbia University suggests that muscle fatigue is caused by calcium leaking out of the muscle cell, making less calcium available for the muscle cell. In addition, the Columbia researchers propose that an enzyme activated by this released calcium eats away at muscle fibers. Substrates within the muscle generally serve to power muscular contractions. They include molecules such as adenosine triphosphate (ATP), glycogen and creatine phosphate. ATP binds to the myosin head and causes the ratcheting that results in contraction according to the sliding filament model. Creatine phosphate stores energy so ATP can be rapidly regenerated within the muscle cells from adenosine diphosphate (ADP) and inorganic phosphate ions, allowing for sustained powerful contractions that last between 5 and 7 seconds. Glycogen is the intramuscular storage form of glucose, used to generate energy quickly once intramuscular creatine stores are exhausted, producing lactic acid as a metabolic byproduct. Contrary to common belief, lactic acid accumulation does not actually cause the burning sensation felt when people exhaust their oxygen and oxidative metabolism; rather, in the presence of oxygen, lactic acid is recycled to produce pyruvate in the liver, a process known as the Cori cycle. Substrates produce metabolic fatigue by being depleted during exercise, resulting in a lack of intracellular energy sources to fuel contractions. In essence, the muscle stops contracting because it lacks the energy to do so. Differential diagnosis True vs. perceived weakness True weakness (or neuromuscular weakness) describes a condition where the force exerted by the muscles is less than would be expected, for example in muscular dystrophy. Perceived weakness (or non-neuromuscular weakness) describes a condition where a person feels more effort than normal is required to exert a given amount of force but actual muscle strength is normal, for example in chronic fatigue syndrome. In some conditions, such as myasthenia gravis, muscle strength is normal when resting, but true weakness occurs after the muscle has been subjected to exercise. This is also true for some cases of chronic fatigue syndrome, where objective post-exertion muscle weakness with delayed recovery time has been measured and is a feature of some of the published definitions. Asthenia vs. myasthenia Asthenia (Greek: ἀσθένεια, lit. 'lack of strength', but also 'disease') is a medical term referring to a condition in which the body lacks or has lost strength, either as a whole or in any of its parts. It denotes symptoms of physical weakness and loss of strength. General asthenia occurs in many chronic wasting diseases (such as tuberculosis and cancer), sleep disorders or chronic disorders of the heart, lungs or kidneys, and is probably most marked in diseases of the adrenal gland.
Asthenia may be limited to certain organs or systems of organs, as in asthenopia, characterized by ready fatiguability. Asthenia is also a side effect of some medications and treatments, such as ritonavir (a protease inhibitor used in HIV treatment). Differentiating psychogenic (perceived) asthenia and true asthenia from myasthenia is often difficult, and in time apparent psychogenic asthenia accompanying many chronic disorders is seen to progress into a primary weakness. Myasthenia (my- from Greek μυο meaning "muscle" + -asthenia ἀσθένεια meaning "weakness"), or simply muscle weakness, is a lack of muscle strength. The causes are many and can be divided into conditions that have either true or perceived muscle weakness. True muscle weakness is a primary symptom of a variety of skeletal muscle diseases, including muscular dystrophy and inflammatory myopathy. It occurs in neuromuscular diseases, such as myasthenia gravis. Types Muscle fatigue can be central, neuromuscular, or peripheral muscular. Central muscle fatigue manifests as an overall sense of energy deprivation, and peripheral muscle weakness manifests as a local, muscle-specific inability to do work. Neuromuscular fatigue can be either central or peripheral. Central fatigue Central fatigue is generally described in terms of a reduction in the neural drive or nerve-based motor command to working muscles that results in a decline in force output. It has been suggested that the reduced neural drive during exercise may be a protective mechanism to prevent organ failure if the work were continued at the same intensity. The exact mechanisms of central fatigue are unknown, though there has been considerable interest in the role of serotonergic pathways. Neuromuscular fatigue Nerves control the contraction of muscles by determining the number, sequence, and force of muscular contractions. When a nerve experiences synaptic fatigue it becomes unable to stimulate the muscle that it innervates. Most movements require a force far below what a muscle could potentially generate, and barring pathology, neuromuscular fatigue is seldom an issue. For extremely powerful contractions that are close to the upper limit of a muscle's ability to generate force, neuromuscular fatigue can become a limiting factor in untrained individuals. In novice strength trainers, the muscle's ability to generate force is most strongly limited by the nerves' ability to sustain a high-frequency signal. After an extended period of maximum contraction, the nerve's signal reduces in frequency and the force generated by the contraction diminishes. There is no sensation of pain or discomfort; the muscle appears to simply stop listening and gradually ceases to move, often lengthening. As there is insufficient stress on the muscles and tendons, there will often be no delayed onset muscle soreness following the workout. Part of the process of strength training is increasing the nerves' ability to generate sustained, high-frequency signals which allow a muscle to contract with its greatest force. It is this "neural training" that causes several weeks' worth of rapid gains in strength, which level off once the nerve is generating maximum contractions and the muscle reaches its physiological limit. Past this point, training effects increase muscular strength through myofibrillar or sarcoplasmic hypertrophy, and metabolic fatigue becomes the factor limiting contractile force.
Peripheral muscle fatigue Peripheral muscle fatigue during physical work is considered an inability of the body to supply sufficient energy or other metabolites to the contracting muscles to meet the increased energy demand. This is the most common case of physical fatigue, affecting a national average of 72% of adults in the work force in 2002. It causes contractile dysfunction that manifests in the eventual reduction or lack of ability of a single muscle or local group of muscles to do work. The insufficiency of energy, i.e. sub-optimal aerobic metabolism, generally results in the accumulation of lactic acid and other acidic anaerobic metabolic by-products in the muscle, causing the stereotypical burning sensation of local muscle fatigue, though recent studies have indicated otherwise, actually finding that lactic acid is a source of energy. The fundamental difference between the peripheral and central theories of muscle fatigue is that the peripheral model of muscle fatigue assumes failure at one or more sites in the chain that initiates muscle contraction. Peripheral regulation therefore depends on the localized metabolic chemical conditions of the local muscle affected, whereas the central model of muscle fatigue is an integrated mechanism that works to preserve the integrity of the system by initiating muscle fatigue through muscle derecruitment, based on collective feedback from the periphery, before cellular or organ failure occurs. Therefore, the feedback that is read by this central regulator could include chemical and mechanical as well as cognitive cues. The significance of each of these factors will depend on the nature of the fatigue-inducing work that is being performed. Though not universally used, "metabolic fatigue" is a common alternative term for peripheral muscle weakness, because of the reduction in contractile force due to the direct or indirect effects of the reduction of substrates or accumulation of metabolites within the myocytes. This can occur through a simple lack of energy to fuel contraction, or through interference with the ability of Ca2+ to stimulate actin and myosin to contract. Management References External links McArdle's disease
Lobulation
A lobulation is an appearance resembling lobules. For instance, the thyroid gland may become large and lobulated in Hashimoto's thyroiditis. Fetal lobulation, also known as fetal lobation, of the kidney is evident on scanning. Fetal lobation is a normal stage in the development of the kidney. In the adult, a normal anatomic variant is persistent fetal lobulation of the kidney, which may be mistaken for a tumour. See also Lobation == References ==
Allergic contact dermatitis
Allergic contact dermatitis (ACD) is a form of contact dermatitis that is the manifestation of an allergic response caused by contact with a substance; the other type is irritant contact dermatitis (ICD). Although less common than ICD, ACD is accepted to be the most prevalent form of immunotoxicity found in humans. By its allergic nature, this form of contact dermatitis is a hypersensitive reaction that is atypical within the population. The mechanisms by which these reactions occur are complex, with many levels of fine control. Their immunology centres on the interaction of immunoregulatory cytokines and discrete subpopulations of T lymphocytes. Signs and symptoms The symptoms of allergic contact dermatitis are very similar to those caused by irritant contact dermatitis, which makes the former even harder to diagnose. The first sign of allergic contact dermatitis is the presence of the rash or skin lesion at the site of exposure. Depending on the type of allergen causing it, the rash can ooze, drain or crust, and it can become raw, scaled or thickened. It is also possible that the skin lesion does not take the form of a rash but instead includes papules, blisters, vesicles or even a simple red area. The main difference between the rash caused by allergic contact dermatitis and the one caused by irritant contact dermatitis is that the latter tends to be confined to the area where the trigger touched the skin, whereas in allergic contact dermatitis the rash is more likely to be widespread on the skin. Another characteristic of the allergic contact dermatitis rash is that it usually appears a day or two after exposure to the allergen, unlike irritant contact dermatitis, which appears immediately after contact with the trigger. Other symptoms may include itching, skin redness or inflammation, and localized swelling, and the area may become more tender or warmer. If left untreated, the skin may darken and become leathery and cracked. Pain can also be present. Dermatitis can occur anywhere on the skin, but is most common on the hands (22% of people), scattered across the body (18%), or on the face (17%). The rash and other symptoms typically occur 24 to 48 hours after the exposure; in some cases, the rash may persist for weeks. Once an individual has developed a skin reaction to a certain substance, it is most likely that they will have it for the rest of their life, and the symptoms will reappear when in contact with the allergen. Cause Common allergens implicated include the following: Bacitracin – topical antibiotic found by itself, or as Polysporin or Triple Antibiotic Balsam of Peru (Myroxylon pereirae) – used in food and drink for flavoring, in perfumes and toiletries for fragrance, and in medicine and pharmaceutical items for healing properties; derived from tree resin. It may also be a component of artificial vanilla and/or cinnamon flavorings. Chromium – used in the tanning of leather. Also a component of uncured cement/mortar, facial cosmetics and some bar soaps. Cobalt chloride – metal found in medical products; hair dye; antiperspirant; metal-plated objects such as snaps, buttons or tools; and in cobalt blue pigment Colophony (Rosin) – rosin, sap or sawdust typically from spruce or fir trees Formaldehyde – preservative with multiple uses, e.g., in paper products, paints, medications, household cleaners, cosmetic products, and fabric finishes.
Often released into products by the use of formaldehyde releasers such as imidazolidinyl urea, diazolidinyl urea, Quaternium-15, DMDM hydantoin, and 2-bromo-2-nitropropane-1,3-diol. Fragrance mix – group of the eight most common fragrance allergens found in foods, cosmetic products, insecticides, antiseptics, soaps, perfumes, and dental products Gold (gold sodium thiosulfate) – precious metal and compound often found in jewelry and dental materials Isothiazolinones – preservatives used in many personal care, household, and commercial products. Mercaptobenzothiazole – in rubber products, notably shoes, gloves, and car tires. Neomycin – topical antibiotic common in first aid creams and ointments, cosmetics, deodorant, soap, and pet food. Found by itself, or in Neosporin or Triple Antibiotic Nickel (nickel sulfate hexahydrate) – has been recognized as a significant cause of allergy. This metal is frequently encountered in stainless steel cookware, jewelry and clasps or buttons on clothing. Current estimates are that roughly 2.5 million US adults and 250,000 children suffer from nickel allergy, which costs an estimated $5.7 billion per year for treatment of symptoms. A significant portion of nickel allergy is preventable. p-Phenylenediamine (PPD) – although it is mainly used as a component of engineering polymers and composites like Kevlar, it is also an ingredient in hair dyes; once sensitization occurs, it is lifelong. One can develop active sensitization to products including, but not limited to, black clothing, various inks, hair dye, dyed fur, dyed leather, and certain photographic products. Photographic developers, especially those containing metol Quaternium-15 – preservative in cosmetic products (self-tanners, shampoo, nail polish, sunscreen) and in industrial products (polishes, paints and waxes). Sap from certain species of mangrove and agave Soluble salts of platinum – see platinosis Thiomersal – mercury compound used in local antiseptics and in vaccines Topical anesthetics – such as pramoxine or diphenhydramine, after prolonged use Topical steroid – see steroid allergy Urushiol – oily coating from plants of the Toxicodendron genus – poison ivy, poison oak, and poison sumac. Also found in mango plants, mango skin, cashews, and smoke from burning urushiol-containing plants, which can cause skin as well as severe lung irritation. Mechanism ACD arises as a result of two essential stages: an induction phase, which primes and sensitizes the immune system for an allergic response, and an elicitation phase, in which this response is triggered. As it involves a cell-mediated allergic response, ACD is termed a Type IV delayed hypersensitivity reaction, making it an exception in the usage of the designation "allergic," which otherwise usually refers to type I hypersensitivity reactions. Contact allergens are essentially soluble haptens (low in molecular weight) and, as such, have the physico-chemical properties that allow them to cross the stratum corneum of the skin. They can only cause their response as part of a complete antigen, involving their association with epidermal proteins forming hapten-protein conjugates. This, in turn, requires them to be protein-reactive. The conjugate formed is then recognized as a foreign body by the Langerhans cells (LCs) (and in some cases other dendritic cells (DCs)), which then internalize the protein; transport it via the lymphatic system to the regional lymph nodes; and present the antigen to T-lymphocytes.
This process is controlled by cytokines and chemokines – with tumor necrosis factor alpha (TNF-α) and certain members of the interleukin family (1, 13 and 18) – and their action serves either to promote or to inhibit the mobilization and migration of these LCs. As the LCs are transported to the lymph nodes, they become differentiated and transform into DCs, which are immunostimulatory in nature. Once within the lymph glands, the differentiated DCs present the allergenic epitope associated with the allergen to T lymphocytes. These T cells then divide and differentiate, clonally multiplying so that if the allergen is experienced again by the individual, these T cells will respond more quickly and more aggressively. White et al. (1986) have suggested that there appears to be a threshold to the mechanisms of allergic sensitisation by ACD-associated allergens. This is thought to be linked to the level at which the toxin induces the up-regulation of the required mandatory cytokines and chemokines. It has also been proposed that the vehicle in which the allergen reaches the skin could take some responsibility in the sensitisation of the epidermis by both assisting the percutaneous penetration and causing some form of trauma and mobilization of cytokines itself. Memory response Once an individual is sensitized to an allergen, future contacts with the allergen can trigger a reaction, commonly known as a memory response, in the original site of sensitization. So, for example, if a person has allergic contact dermatitis on the eyelids, say from use of makeup, touching the contact allergen with the fingers can trigger an allergic reaction on the eyelids. This is due to local skin memory T-cells, which remain in the original sensitization site. In a similar fashion, cytotoxic T lymphocytes patrol an area of skin and play an important role in controlling both the reactivation of viruses (such as the "cold sore" virus) and in limiting their replication when reactivated. A memory response, or "retest reactivity", usually takes 2 to 3 days after coming in contact with the allergen, and can persist for 2 to 4 weeks. Diagnosis Diagnosing allergic contact dermatitis is primarily based on physical exam and medical history. In some cases doctors can establish an accurate diagnosis based on the symptoms that the patient experiences and on the rash's appearance. In the case of a single episode of allergic contact dermatitis, this is all that is necessary. Chronic and/or intermittent rashes which are not readily explained by history and physical exam often will benefit from further testing. A patch test (contact delayed hypersensitivity allergy test) is a commonly used examination to determine the exact cause of an allergic contact dermatitis. According to the American Academy of Allergy, Asthma, and Immunology, "patch testing is the gold standard for contact allergen identification". The patch test consists of applying small quantities of potential allergens to small patches, which are then placed on the skin. After two days, they are removed, and if a skin reaction occurred to one of the substances applied, a raised bump will be noticeable underneath the patch. The tests are read again at 72 or 96 hours after application. Patch testing is used for patients who have chronic, recurring contact dermatitis. Other tests that may be used to diagnose contact dermatitis and rule out other potential causes of the symptoms include a skin biopsy and culture of the skin lesion.
Treatment The clinical expression of the dermatitis can be mitigated by avoidance of the allergen. Through compliance with avoidance measures, the immune system can become less stimulated. The key to avoidance is proper evaluation and detection of the inciting allergen. However, once the immune system registers the allergen, the recognition is permanent. The first step in treating the condition is appropriate recognition of the clinical problem, followed by identification of the culprit chemical and the source of that chemical. Corticosteroid creams should be used carefully and according to the prescribed directions, because when overused over longer periods of time they can cause thinning of the skin. Also, in some instances, such as poison ivy dermatitis, calamine lotion and cool oatmeal baths may relieve itching. Usually, severe cases are treated with systemic corticosteroids, which may be tapered gradually, with various dosing schedules ranging from a total of 12–20 days to prevent the recurrence of the rash (while the chemical allergen is still in the skin, up to 3 weeks), as well as a topical corticosteroid. Tacrolimus ointment or pimecrolimus cream can also be used in addition to the corticosteroid creams or instead of them. Oral antihistamines such as diphenhydramine or hydroxyzine may also be used in more severe cases to relieve the intense itching via sedation. Topical antihistamines are not advised, as there might be a second skin reaction (treatment-associated contact dermatitis) from the lotion itself. The other symptoms caused by allergic contact dermatitis may be eased with cool compresses to stop the itching. It is vital for treatment success that the trigger be identified and avoided. The discomfort caused by the symptoms may be relieved by wearing smooth-textured cotton clothing to avoid frictional skin irritation or by avoiding soaps with perfumes and dyes. Commonly, the symptoms may resolve without treatment in 2 to 4 weeks, but specific medication may hasten the healing as long as the trigger is avoided. Also, the condition might become chronic if the allergen is not detected and avoided. Identification of the allergen can be aided by the site of the dermatitis. Allergic dermatitis of the hands is often due to contact with preservatives, fragrances, metals, rubber, or topical antibiotics. Dermatitis at the front of the face is often due to gold (from jewelry and foundation), make-up, moisturizers, wrinkle creams, and topical medication. Along the eyelids as well as the sides of the head and neck, dermatitis is often caused by shampoo and conditioner dripping down from the hair. Inflammation on one side of the face often suggests transfer of an allergen from the hands or from the face of a partner. Epidemiology Allergic contact dermatitis is common, affecting up to 20% of all people. People sensitive to one allergen are at an increased risk of being sensitive to others. Family members of those with allergic contact dermatitis are at higher risk of developing it themselves. Women are at higher risk of developing allergic contact dermatitis than men. References == External links ==
Enterocolitis
Enterocolitis is an inflammation of the digestive tract, involving enteritis of the small intestine and colitis of the colon. It may be caused by various infections – with bacteria, viruses, fungi, or parasites – or by other causes. Common clinical manifestations of enterocolitis are frequent diarrheal defecations, with or without nausea, vomiting, abdominal pain, fever, chills, and alteration of the general condition. General manifestations are caused by the dissemination of the infectious agent or its toxins throughout the body, or – most frequently – by significant losses of water and minerals, the consequence of diarrhea and vomiting. Cause Among the causal agents of acute enterocolitis are: bacteria: Salmonella, Shigella, Escherichia coli (E. coli), Campylobacter, etc. viruses: enteroviruses, rotaviruses, Norovirus, adenoviruses fungi: candidiasis, especially in immunosuppressed patients or those who have previously received prolonged antibiotic treatment parasites: Giardia lamblia (with high frequency of infestation in the population, but not always with clinical manifestations), Balantidium coli, Blastocystis hominis, Cryptosporidium (diarrhea in people with immunosuppression), Entamoeba histolytica (produces amoebic dysentery, common in tropical areas). Diagnosis Types Specific types of enterocolitis include: necrotizing enterocolitis (most common in premature infants) pseudomembranous enterocolitis (also called "pseudomembranous colitis") Treatment Treatment depends on aetiology, e.g. antibiotics such as metronidazole for bacterial infection, antiviral drug therapy for viral infection, and anthelmintics for parasitic infections See also Gastroenteritis References == External links ==
3
3 (three) is a number, numeral and digit. It is the natural number following 2 and preceding 4, and is the smallest odd prime number and the only prime preceding a square number. It has religious or cultural significance in many societies. Evolution of the Arabic digit The use of three lines to denote the number 3 occurred in many writing systems, including some (like Roman and Chinese numerals) that are still in use. That was also the original representation of 3 in the Brahmic (Indian) numerical notation, its earliest forms aligned vertically. However, during the Gupta Empire the sign was modified by the addition of a curve on each line. The Nāgarī script rotated the lines clockwise, so they appeared horizontally, and ended each line with a short downward stroke on the right. In cursive script, the three strokes were eventually connected to form a glyph resembling a ⟨3⟩ with an additional stroke at the bottom: ३. The Indian digits spread to the Caliphate in the 9th century. The bottom stroke was dropped around the 10th century in the western parts of the Caliphate, such as the Maghreb and Al-Andalus, when a distinct variant ("Western Arabic") of the digit symbols developed, including the modern Western 3. In contrast, the Eastern Arabs retained and enlarged that stroke, rotating the digit once more to yield the modern ("Eastern") Arabic digit "٣". In most modern Western typefaces, the digit 3, like the other decimal digits, has the height of a capital letter and sits on the baseline. In typefaces with text figures, on the other hand, the glyph usually has the height of a lowercase letter "x" and a descender. In some French text-figure typefaces, though, it has an ascender instead of a descender. A common graphic variant of the digit three has a flat top, similar to the letter Ʒ (ezh). This form is sometimes used to prevent falsifying a 3 as an 8. It is found on UPC-A barcodes and standard 52-card decks. Mathematics 3 is the second smallest prime number and the first odd prime number. It is the first unique prime: the decimal expansion of its reciprocal, 0.333..., has a period length of 1, which no other prime shares. 3 is a twin prime with 5 and a cousin prime with 7, the only known number n such that n! − 1 and n! + 1 are both prime, and the only prime number p such that p − 1 yields another prime number, 2. A triangle is made of three sides. It is the smallest non-self-intersecting polygon and the only polygon not to have proper diagonals. When doing quick estimates, 3 is a rough approximation of π, 3.1415..., and a very rough approximation of e, 2.71828... 3 is the first Mersenne prime, as well as the second Mersenne prime exponent and the second double Mersenne prime exponent, for 7 and 127, respectively. 3 is also the first of five known Fermat primes, which include 5, 17, 257, and 65537. It is the second Fibonacci prime (and the second Lucas prime), the second Sophie Germain prime, the third Harshad number in base 10, and the second factorial prime, as it is equal to 2! + 1. 3 is the second triangular number and the only prime triangular number, and Gauss proved that every integer is the sum of at most 3 triangular numbers. 3 is the number of non-collinear points needed to determine a plane and a circle. Three is the only prime which is one less than a perfect square.
Any other number which is n² − 1 for some integer n is not prime, since it is (n − 1)(n + 1). This is true for 3 as well (with n = 2), but in this case the smaller factor is 1. If n is greater than 2, both n − 1 and n + 1 are greater than 1, so their product is not prime. A natural number is divisible by three if the sum of its digits in base 10 is divisible by 3. For example, the number 21 is divisible by three (3 times 7) and the sum of its digits is 2 + 1 = 3. Because of this, the reverse of any number that is divisible by three (or indeed, any permutation of its digits) is also divisible by three. For instance, 1368 and its reverse 8631 are both divisible by three (and so are 1386, 3168, 3186, 3618, etc.). See also Divisibility rule. This works in base 10 and in any positional numeral system whose base divided by three leaves a remainder of one (bases 4, 7, 10, etc.); a short code sketch of this rule appears below. Three of the five Platonic solids have triangular faces – the tetrahedron, the octahedron, and the icosahedron. Also, three of the five Platonic solids have vertices where three faces meet – the tetrahedron, the hexahedron (cube), and the dodecahedron. Furthermore, only three different types of polygons comprise the faces of the five Platonic solids – the triangle, the square, and the pentagon. There are only three distinct 4×4 panmagic squares. According to Pythagoras and the Pythagorean school, the number 3, which they called triad, is the noblest of all digits, as it is the only number to equal the sum of all the terms below it (1 + 2 = 3) and the only number whose sum with those below equals the product of them and itself (1 + 2 + 3 = 1 × 2 × 3 = 6). There are three finite convex uniform polytope groups in three dimensions, aside from the infinite families of prisms and antiprisms: the tetrahedral group, the octahedral group, and the icosahedral group. In dimensions n ⩾ 5, there are only three regular polytopes: the n-simplexes, n-cubes, and n-orthoplexes. In dimensions n ⩾ 9, the only three uniform polytope families, aside from the numerous infinite proprismatic families, are the A_n simplex, B_n cubic, and D_n demihypercubic families. For paracompact hyperbolic honeycombs, there are three groups in dimensions 6 and 9, or equivalently of ranks 7 and 10, with no other forms in higher dimensions. Of the final three groups, the largest and most important is T̄_9, which is associated with an important Kac–Moody Lie algebra, E_10. The trisection of the angle was one of the three famous problems of antiquity. Numeral systems There is some evidence to suggest that early man may have used counting systems which consisted of "One, Two, Three" and thereafter "Many" to describe counting limits. Early peoples had a word to describe the quantities of one, two, and three, but any quantity beyond was simply denoted as "Many". This is most likely based on the prevalence of this phenomenon among people in such disparate regions as the deep Amazon and Borneo jungles, where Western explorers have historical records of their first encounters with these indigenous peoples.
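As an illustration of the digit-sum divisibility rule described above, here is a minimal sketch in Python. The code and its helper names (digit_sum, divisible_by_three) are illustrative additions rather than anything from the article; it simply repeats the digit-sum reduction until a single digit remains and checks that digit against 3.

```python
def digit_sum(n, base=10):
    """Sum the digits of a non-negative integer n written in the given base."""
    total = 0
    while n > 0:
        total += n % base   # take the lowest digit
        n //= base          # drop it
    return total

def divisible_by_three(n, base=10):
    """Digit-sum test for divisibility by 3.

    Valid in any base that leaves remainder 1 when divided by 3
    (bases 4, 7, 10, ...), as noted in the text above.
    """
    if base % 3 != 1:
        raise ValueError("the digit-sum rule needs base % 3 == 1")
    while n >= base:        # keep reducing until one digit is left
        n = digit_sum(n, base)
    return n % 3 == 0

# 1368 and its reversal 8631 give the same digit sum, so both pass the test.
print(divisible_by_three(1368), divisible_by_three(8631))  # -> True True
```

The same check with base=7 illustrates why the rule also works in base 7: since 7 leaves remainder 1 when divided by 3, every power of the base is congruent to 1 modulo 3, so a number and its digit sum leave the same remainder.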
List of basic calculations Science The Roman numeral III stands for giant star in the Yerkes spectral classification scheme. Three is the atomic number of lithium. Three is the ASCII code of "End of Text". Three is the number of dimensions that humans can perceive. Humans perceive the universe to have three spatial dimensions, but some theories, such as string theory, suggest there are more. Three is the number of elementary fermion generations according to the Standard Model of particle physics. The triangle, a polygon with three edges and three vertices, is the most stable physical shape. For this reason it is widely utilized in construction, engineering and design. The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans being trichromatic, the retina contains three types of color receptor cells, or cones. There are three primary colors in the additive and subtractive models. Protoscience In European alchemy, the three primes (Latin: tria prima) were salt, sulfur and mercury. The three doshas (weaknesses) and their antidotes are the basis of Ayurvedic medicine in India. Pseudoscience Three is the symbolic representation for Mu, Augustus Le Plongeon's and James Churchward's lost continent. In Pythagorean numerology the number 3 is the digit that represents communication. It encourages the expansion of creativity, sociability between people and movement. For Pythagoras, the number 3 was a perfect number, representing harmony, perfection, and divine proportion. Philosophy Philosophers such as Aquinas, Kant, Hegel, C. S. Peirce, and Karl Popper have made threefold divisions, or trichotomies, which have been important in their work. Hegel's dialectic of Thesis + Antithesis = Synthesis creates three-ness from two-ness. Religion Many world religions contain triple deities or concepts of trinity, including: The Hindu Trimurti The Hindu Tridevi The Three Jewels of Buddhism The Three Pure Ones of Taoism The Christian Holy Trinity The Triple Goddess of Wicca Christianity The threefold office of Christ is a Christian doctrine which states that Christ performs the functions of prophet, priest, and king. The ministry of Jesus lasted approximately three years. During the Agony in the Garden, Christ asked three times for the cup to be taken from him. Jesus rose from the dead on the third day after his death. The devil tempted Jesus three times. Saint Peter thrice denied Jesus and thrice affirmed his faith in Jesus. The Magi – wise men who were astronomers/astrologers from Persia – gave Jesus three gifts. There are three Synoptic Gospels and three epistles of John. Paul the Apostle went blind for three days after his conversion to Christianity. Judaism Noah had three sons: Ham, Shem and Japheth The Three Patriarchs: Abraham, Isaac and Jacob The prophet Balaam beat his donkey three times.
The prophet Jonah spent three days and nights in the belly of a large fish Three divisions of the Written Torah: Torah (Five Books of Moses), Nevi'im (Prophets), Ketuvim (Writings) Three divisions of the Jewish people: Kohen, Levite, Yisrael Three daily prayers: Shacharit, Mincha, Maariv Three Shabbat meals Shabbat ends when three stars are visible in the night sky Three Pilgrimage Festivals: Passover, Shavuot, Sukkot Three matzos on the Passover Seder table The Three Weeks, a period of mourning bridging the fast days of the Seventeenth of Tammuz and Tisha B'Av Three cardinal sins for which a Jew must die rather than transgress: idolatry, murder, sexual immorality Upsherin, a Jewish boy's first haircut at age 3 A Beth din is composed of three members Potential converts are traditionally turned away three times to test their sincerity In the Jewish mystical tradition of the Kabbalah, it is believed that the soul consists of three parts, with the highest being neshamah ("breath"), the middle being ruach ("wind" or "spirit") and the lowest being nefesh ("repose"). Sometimes the two elements of Chayah ("life" or "animal") and Yechidah ("unit") are additionally mentioned. In the Kabbalah, the Tree of Life (Hebrew: Etz ha-Chayim, עץ החיים) refers to a latter 3-pillar diagrammatic representation of its central mystical symbol, known as the 10 Sephirot. Islam The three core principles in Shia tradition: Tawhid (Oneness of God), Nabuwwa (Concept of Prophethood), Imama (Concept of Imam) Buddhism The Triple Bodhi (ways to understand the end of birth) are Budhu, Pasebudhu, and Mahaarahath. The Three Jewels, the three things that Buddhists take refuge in. Shinto The Imperial Regalia of Japan of the sword, mirror, and jewel. Daoism The Three Treasures (Chinese: 三寶; pinyin: sānbǎo; Wade–Giles: san-pao), the basic virtues in Taoism. The Three Dantians Three Lines of a Trigram Three Sovereigns: Fu Xi, Shen Nong, and Nüwa. Hinduism The Trimurti: Brahma the Creator, Vishnu the Preserver, and Shiva the Destroyer. The three Gunas found in the Samkhya school of Hindu philosophy. The three paths to salvation in the Bhagavad Gita, named Karma Yoga, Bhakti Yoga and Jnana Yoga. Zoroastrianism The three virtues of Humata, Hukhta and Huvarshta (Good Thoughts, Good Words and Good Deeds) are a basic tenet of Zoroastrianism. Norse mythology Three is a very significant number in Norse mythology, along with its powers 9 and 27. Prior to Ragnarök, there will be three hard winters without an intervening summer, the Fimbulwinter. Odin endured three hardships upon the World Tree in his quest for the runes: he hanged himself, wounded himself with a spear, and suffered from hunger and thirst. Bor had three sons, Odin, Vili, and Vé. Other religions The Wiccan Rule of Three. The Triple Goddess: Maiden, Mother, Crone; the three fates. The sons of Cronus: Zeus, Poseidon, and Hades. The Slavic god Triglav has three heads. Esoteric tradition The Theosophical Society has three conditions of membership. Gurdjieff's Three Centers and the Law of Three. Liber AL vel Legis, the central scripture of the religion of Thelema, consists of three chapters, corresponding to three divine narrators respectively: Nuit, Hadit and Ra-Hoor-Khuit. The Triple Greatness of Hermes Trismegistus is an important theme in Hermeticism.
As a lucky or unlucky number Three (三, formal writing: 叁, pinyin sān, Cantonese: saam1) is considered a good number in Chinese culture because it sounds like the word "alive" (生 pinyin shēng, Cantonese: saang1), compared to four (四, pinyin: sì, Cantonese: sei1), which sounds like the word "death" (死 pinyin sǐ, Cantonese: sei2). Counting to three is common in situations where a group of people wish to perform an action in synchrony: Now, on the count of three, everybody pull! Assuming the counter is proceeding at a uniform rate, the first two counts are necessary to establish the rate, and the count of "three" is predicted based on the timing of the "one" and "two" before it. Three is likely used instead of some other number because it is the minimal number of counts needed to establish a rate. There is another superstition that it is unlucky to take a third light, that is, to be the third person to light a cigarette from the same match or lighter. This superstition is sometimes asserted to have originated among soldiers in the trenches of the First World War, when a sniper might see the first light, take aim on the second and fire on the third. The phrase "Third time's the charm" refers to the superstition that after two failures in any endeavor, a third attempt is more likely to succeed. This is also sometimes seen in reverse, as in "third man [to do something, presumably forbidden] gets caught". Luck, especially bad luck, is often said to "come in threes". Sports In American and Canadian football, a field goal is worth three points. In association football: For purposes of league standings, since the mid-1990s almost all leagues have awarded three points for a win. A team that wins three trophies in a season is said to have won a treble. A player who scores three goals in a match is said to have scored a hat-trick. In baseball: A batter strikes out upon the third strike in any single batting appearance. Each team's half of an inning ends once the defense has recorded three outs (unless the home team has a walk-off hit in the ninth inning or any extra inning). In scorekeeping, "3" denotes the first baseman. In basketball: Three points are awarded for a basket made from behind a designated arc on the floor. The "3 position" is the small forward. In bowling, three strikes bowled consecutively is known as a "turkey". In cricket, a bowler who is credited with dismissals of batsmen on three consecutive deliveries has achieved a "hat-trick". In Gaelic games (Gaelic football for men and women, hurling, and camogie), three points are awarded for a goal, scored when the ball passes underneath the crossbar and between the goal posts. In ice hockey: Scoring three goals is called a "hat trick" (usually not hyphenated in North America). A team will typically have three forwards on the ice at any given time. In professional wrestling, a pin is when one holds the opponent's shoulders against the mat for a count of three. In rugby union: A successful penalty kick for goal or drop goal is worth three points. In the French variation of the bonus points system, a team receives a bonus point in the league standings if it wins a match while scoring at least three more tries than its opponent. The starting tighthead prop wears the jersey number 3. In rugby league: One of the two starting centres wears the jersey number 3. (An exception to this rule is the Super League, which uses static squad numbering.) A "threepeat" is a term for winning three consecutive championships.
A triathlon consists of three events: swimming, bicycling, and running. In many sports a competitor or team is said to win a Triple Crown if they win three particularly prestigious competitions. In volleyball, once the ball is served, teams are allowed to touch the ball three times before being required to return the ball to the other side of the court, with the definition of "touch" being slightly different between indoor and beach volleyball. Film A number of film versions of the novel The Three Musketeers by Alexandre Dumas: (1921, 1933, 1948, 1973, 1992, 1993 and 2011). 3 Days of the Condor (1975), starring Robert Redford, Faye Dunaway, Cliff Robertson, and Max von Sydow. Three Amigos (1986), comedy film starring Steve Martin, Chevy Chase, and Martin Short. Three Kings (1999), starring George Clooney, Mark Wahlberg, Ice Cube, and Spike Jonze. 3 Days to Kill (2014), starring Kevin Costner. Three Billboards Outside Ebbing, Missouri (2017), starring Frances McDormand, Woody Harrelson, Sam Rockwell. See also Cube (algebra) – (3 superscript) Third Triad Rule of three List of highways numbered 3 References Wells, D. The Penguin Dictionary of Curious and Interesting Numbers London: Penguin Group. (1987): 46–48 External links Tricyclopedic Book of Threes by Michael Eck Threes in Human Anatomy by Dr. John A. McNulty Grime, James. "3 is everywhere". Numberphile. Brady Haran. Archived from the original on 2013-05-14. Retrieved 2013-04-13. The Number 3 The Positive Integer 3 Prime curiosities: 3
Myoglobinuria
Myoglobinuria is the presence of myoglobin in the urine, which usually results from rhabdomyolysis or muscle injury. Myoglobin is present in muscle cells as a reserve of oxygen. Signs and symptoms Signs and symptoms of myoglobinuria are usually nonspecific and require some clinical prudence. Possible signs and symptoms to look for include: Swollen and painful muscles Fever, nausea Delirium (elderly individuals) Myalgia Dark urine Calcium ion loss Causes Trauma, vascular problems, malignant hyperthermia, certain drugs and other situations can destroy or damage the muscle, releasing myoglobin into the circulation and thus to the kidneys. Under ideal circumstances myoglobin will be filtered and excreted with the urine, but if too much myoglobin is released into the circulation or in case of kidney problems, it can occlude the kidneys' filtration system, leading to acute tubular necrosis and acute kidney injury. Other causes of myoglobinuria include: McArdle's disease Phosphofructokinase deficiency Carnitine palmitoyltransferase II deficiency Malignant hyperthermia Polymyositis Lactate dehydrogenase deficiency Adenosine monophosphate deaminase deficiency type 1 Thermal or electrical burn Pathophysiology The pathophysiology of myoglobinuria consists of a series of metabolic actions in which damage to muscle cells affects calcium mechanisms, thereby increasing free ionized calcium in the cytoplasm of the myocytes (concurrently decreasing free ionized calcium in the bloodstream). This, in turn, affects several intracellular enzymes that are calcium-dependent, thereby compromising the cell membrane, which in turn causes the release of myoglobin. Diagnosis After centrifugation, the urine in myoglobinuria remains red, whereas the urine in hemoglobinuria is pink to clear after centrifugation. Treatment Hospitalization and IV hydration should be the first step in any patient suspected of having myoglobinuria or rhabdomyolysis. The goal is to induce a brisk diuresis to prevent myoglobin precipitation and deposition, which can cause acute kidney injury. Mannitol can be added to assist with diuresis. Adding sodium bicarbonate to the IV fluids will cause alkalinization of the urine, believed to reduce the breakdown of myoglobin into its nephrotoxic metabolites, thus preventing renal damage. Often, IV normal saline is all that is needed to induce diuresis and alkalinize the urine. Epidemiology See also Pigmenturia References Further reading Rowland, Lewis P.; Pedley, Timothy A., eds. (2010). Merritt's Neurology (12th ed.). Philadelphia, PA: Lippincott Williams & Wilkins. p. 885. ISBN 978-0781791861. Hoffmann, Georg F.; Zschocke, Johannes; Nyhan, William L., eds. (2010). Inherited Metabolic Diseases: A Clinical Approach. Heidelberg: Springer. p. 165. ISBN 978-3-540-74722-2. External links Overview on the Neuromuscular disease center website.
Shock
Shock may refer to: Common uses Collective noun Shock, a historic commercial term for a group of 60, see English numerals#Special names Stook, or shock of grain, stacked sheaves Healthcare Shock (circulatory), circulatory medical emergency Cardiogenic shock, resulting from dysfunction of the heart Distributive shock, resulting from an abnormal distribution of blood flow Septic shock, a result of severe infection Toxic shock syndrome, a specific type of severe infection Anaphylactic shock Hemorrhagic shock, from a large volume of blood loss Neurogenic shock, due to a high spinal cord injury disrupting the sympathetic nervous system Cold shock response of organisms to sudden cold, especially cold water Electric shock Defibrillation, electric shock to restore heart rhythm Electroconvulsive therapy or shock treatment, psychiatric treatment Hydrostatic shock, from ballistic impact Insulin shock or diabetic hypoglycemia, from too much insulin Insulin shock therapy, purposely induced insulin shock, obsolete therapy Osmotic shock, caused by solute concentration around a cell Psychological shock or acute stress reaction, to terrifying events Shell shock, soldiers' reaction to battle trauma Physical sciences Shock (mechanics), a sudden acceleration or deceleration Shock absorber Shock mount Shock wave Oblique shock Shock (fluid dynamics), an abrupt discontinuity in the flow field Bow shock, in planetary science and astronomy Electric shock Shock chlorination of water to reduce bacteria and algae Shocks and discontinuities (magnetohydrodynamics) Thermal shock Social sciences Shock (economics), an unpredicted event that affects an economy Demand shock Supply shock Culture shock, in social psychology Shock value, in popular psychology Places Shock, West Virginia, an unincorporated US community People People with the given name or nickname Shock or Harry Del Rios (born 1973), American professional wrestler Shock G or Gregory E.
Jacobs (1963–2021), American musician and rapper People with the surname Maurice Shock (born 1936), British educator Ron Shock (1942–2012), American comedian and storyteller Stefie Shock (born 1969), Canadian musician Susy Shock (born 1968), Argentine actress, writer, and singer Arts, entertainment, and media Categories and genres Shock art Shock jock, deliberately offensive broadcaster Shock rock, a genera of rock music Shock site, deliberately offensive website Films The Shock (film), a 1923 silent film Shock (1934 film), starring Ralph Forbes Shock (1946 film), starring Vincent Price Shock (1977 film), an Italian film Shock (2004 film), a Tamil film starring Prashanth Thyagarajan Shock (2006 film), a Telugu film Music Groups and labels Shock (troupe), an English music/mime/dance group Shock Records, an Australian record label Albums Shock (The Motels album), 1985 Shock (Tesla album), 2019 Songs "Shock" (Beast song), a 2010 song by South Korean boy band Beast "Shock" (Fear Factory song), a 1998 song by Fear Factory "Shock" (The Motels song), 1985 "Shock", a 1987 song by The Psychedelic Furs from Midnight to Midnight "Shock!", a 2010 song by the Japanese band Cute "Shock (Unmei)", a 2009 song by Meisa Kuroki Other arts, entertainment, and media Shock: Social Science Fiction, role-playing game Shock Theater, a 1950s and 1960s American television film series Shock (novel), a 2001 novel by Robin Cook Shock (comics), a Marvel Comics supervillain Shock (journal), a medical journal Shock (musical), a Japanese stage musical series The Shock (TV program), an Arabic-language hidden-camera show Shock Gibson, a fictional comic book superhero Military Shock and awe, display of force to destroy an opponents will to fight Shock tactics, a close quarter battle tactic Shock troops, who apply shock tactics Sports and teams Spokane Shock, an arena football team based in Spokane, Washington, US Tulsa Shock, a WNBA professional womens basketball team Detroit Shock, previous name of Tulsa Shock San Francisco Shock, a professional Overwatch League team See also Schock (disambiguation) Shock therapy (disambiguation) Shocked (disambiguation) Shocker (disambiguation) Shocking (disambiguation) All pages with titles beginning with Shock All pages with titles containing Shock
Neck pain
Neck pain, also known as cervicalgia, is a common problem, with two-thirds of the population having neck pain at some point in their lives. Neck pain, although felt in the neck, can be caused by numerous other spinal problems. Neck pain may arise due to muscular tightness in both the neck and upper back, or pinching of the nerves emanating from the cervical vertebrae. Joint disruption in the neck creates pain, as does joint disruption in the upper back. The head is supported by the lower neck and upper back, and it is these areas that commonly cause neck pain. The top three joints in the neck allow for most movement of the neck and head. The lower joints in the neck and those of the upper back create a supportive structure for the head to sit on. If this support system is affected adversely, then the muscles in the area will tighten, leading to neck pain. Neck pain affects about 5% of the global population as of 2010. Differential diagnosis Neck pain may come from any of the structures in the neck including: vascular, nerve, airway, digestive, and musculature/skeletal, or be referred from other areas of the body. Major and severe causes of neck pain (roughly in order of severity) include: Carotid artery dissection Referred pain from acute coronary syndrome Head and neck cancer Infections, including: Meningitis of several types, including sudden onset of severe neck or back pain, particularly in teens and young adults, which may be fatal if not treated quickly Retropharyngeal abscess Epiglottitis Spinal disc herniation – protruding or bulging discs, or, if severe, prolapse. Spondylosis – degenerative arthritis and osteophytes Spinal stenosis – a narrowing of the spinal canal More common and lesser neck pain causes include: Stress – physical and emotional stresses Prolonged postures – many people fall asleep on sofas and chairs and wake up with sore necks. Minor injuries and falls – car accidents, sporting events, and day-to-day injuries that are minor. Referred pain – mostly from upper back problems Over-use – muscular strain is one of the most common causes Whiplash Pinched nerve Although the causes are numerous, most are easily rectified by either professional help or using self-help advice and techniques. More causes can include: poor sleeping posture, torticollis, head injury, rheumatoid arthritis, carotidynia, congenital cervical rib, mononucleosis, rubella, certain cancers, ankylosing spondylitis, cervical spine fracture, esophageal trauma, subarachnoid hemorrhage, lymphadenitis, thyroid trauma, and tracheal trauma. Treatment Treatment of neck pain depends on the cause. For the vast majority of people, neck pain can be treated conservatively. Recommended measures to help alleviate symptoms include applying heat or cold. Other common treatments could include medication, body mechanics training, ergonomic reform, and physical therapy. Treatments may also include patient education, but existing evidence shows a lack of effectiveness. Medication Analgesics such as acetaminophen or NSAIDs are generally recommended for pain. A 2017 review, however, found that paracetamol was not efficacious and that NSAIDs are minimally effective. Muscle relaxants may also be recommended. However, one study showed that one muscle relaxant called cyclobenzaprine was not effective for treatment of acute cervical strain (as opposed to neck pain from other etiologies or chronic neck pain). Surgery Surgery is usually not indicated for mechanical causes of neck pain.
If neck pain is the result of instability, cancer, or another disease process, surgery may be necessary. Surgery is usually not indicated for "pinched nerves" or herniated discs unless there is spinal cord compression or pain and disability have been protracted for many months and refractory to conservative treatment such as physical therapy. Alternative medicine Exercise plus joint manipulation has been found to be beneficial in both acute and chronic mechanical neck disorders. In particular, specific strengthening exercise may improve function and pain. Motor control using cranio-cervical flexion exercises has been shown to be effective for non-specific chronic neck pain. Both cervical manipulation and cervical mobilization produce similar immediate- and short-term changes. Multiple cervical manipulation sessions may provide better pain relief and functional improvement than certain medications at immediate to long-term follow-up. Thoracic manipulation may also improve pain and function. Low-level laser therapy has been shown to reduce pain immediately after treatment in acute neck pain and up to 22 weeks after completion of treatment in patients who experience chronic neck pain. Low-quality evidence suggests that cognitive-behavioural therapy may be effective at reducing pain in the short term. Massaging the area may provide immediate and short-lived benefits, but long-term effects are unknown. There is a lack of high-quality evidence to support the use of mechanical traction, and side effects include headaches, nausea, and injury to tissue. Radiofrequency denervation may provide temporary relief for specific affected areas in the neck. Transcutaneous electrical nerve stimulation (TENS), the noninvasive use of electrical stimulation on the skin, is of unclear benefit in chronic neck pain. Epidemiology Neck pain affects about 330 million people globally as of 2010 (4.9% of the population). It is more common in women (5.7%) than men (3.9%). It is less common than low back pain. Prognosis About one-half of episodes resolve within one year and around 10% become chronic. Prevention The prevalence of neck pain in the population suggests it is a common condition. For cervicalgia associated with poor posture, treatment is usually corrective in nature (i.e., ensuring the shoulders are aligned above the hips) and relies on interventions that provide ergonomic improvement. There is also growing research into how neck pain caused by mobile devices (see iHunch) can be prevented using embedded warning systems. References External links 6 Ways to Ease Neck Pain - Harvard Medical School Neck pain - Symptoms and causes - Mayo Clinic
Suicide attempt
A suicide attempt is an attempt to die by suicide that results in survival. It may be referred to as a "failed" or "unsuccessful" suicide attempt, though these terms are discouraged by mental health professionals for implying that a suicide resulting in death is a successful and positive outcome. Epidemiology In the United States, the National Institute of Mental Health reports there are 11 nonfatal suicide attempts for every suicide death. The American Association of Suicidology reports higher numbers, stating that there are 25 suicide attempts for every suicide completion. The ratio of suicide attempts to suicide death is about 25:1 in youths, compared to about 4:1 in the elderly. A 2008 review found that nonfatal self-injury is more common in women, and a separate study from 2008/2009 found suicidal thoughts higher among females, as well as significant differences between genders for suicide planning and suicide attempts. Suicide attempts are more common among adolescents in developing countries than developed ones. A 12-month prevalence of suicide attempt in developing countries between 2003 and 2015 was reported as 17%. Parasuicide and self-injury Without commonly agreed-upon operational definitions, some suicidology researchers regard many suicide attempts as parasuicide (para = near) or self-harm behavior, rather than "true" suicide attempts, as lacking suicidal intent. Methods Some suicide methods have higher rates of lethality than others. The use of firearms results in death 90% of the time. Wrist-slashing has a much lower lethality rate, comparatively. 75% of all suicide attempts are by drug overdose, a method that is often thwarted because the drug is nonlethal, or is used at a nonlethal dosage. These people survive 97% of the time. Repetition A nonfatal suicide attempt is the strongest known clinical predictor of eventual suicide. Suicide risk among self-harm patients is hundreds of times higher than in the general population. It is often estimated that about 10–15% of people who attempt suicide eventually die by suicide. The mortality risk is highest during the first months and years after the attempt: almost 1% of individuals who attempt suicide will die by suicide if the attempt is repeated within one year. Recent meta-analytic evidence suggests that the association between suicide attempt and suicidal death may not be as strong as was previously thought. Outcomes Suicide attempts can result in serious and permanent injuries and/or disabilities. 700,000 (or more) Americans survive a suicide attempt each year. People who attempt either hanging or carbon monoxide poisoning and survive can face permanent brain damage due to cerebral anoxia. People who take a drug overdose and survive can face severe organ damage (e.g., liver failure). Individuals who jump from a height and survive may face irreversible damage to multiple organs, as well as the spine and brain. While a majority sustain injuries that allow them to be released following emergency room treatment, a significant minority—about 116,000—are hospitalized, of whom 110,000 are eventually discharged alive. Their average hospital stay is 79 days. Some 89,000, or 17% of these people, are permanently disabled. Criminalization of attempted suicide Historically in the Christian church, people who attempted suicide were excommunicated because of the religiously polarizing nature of the topic. While previously criminally punishable, attempted suicide is no longer a crime in most Western countries.
It remains a criminal offense in most Islamic countries. In the late 19th century in Great Britain, attempted suicide was deemed to be equivalent to attempted murder and could be punished by hanging. In the United States, suicide is not illegal and almost no country in Europe currently considers attempted suicide to be a crime. In India, attempted suicide was decriminalized by the Mental Healthcare Act, 2017, while Singapore removed attempted suicide from their criminal code in 2020; previously it had been punishable by up to one year in prison. Many other countries still prosecute suicide attempts. As of 2012, attempted suicide is a criminal offense in Uganda, and as of 2013, it is criminalized in Ghana. Despite having its own laws, Maryland still reserves the right to prosecute people under the English common laws that were in place when America declared independence in 1776. These laws were used to convict a man for attempted suicide in 2018, resulting in a three-year suspended sentence and two years of supervised probation. See also International Survivors of Suicide Loss Day Suicidal ideation World Suicide Prevention Day == References ==
Craft
A craft or trade is a pastime or an occupation that requires particular skills and knowledge of skilled work. In a historical sense, particularly the Middle Ages and earlier, the term is usually applied to people occupied in small-scale production of goods, or their maintenance, for example by tinkers. The traditional term craftsman is nowadays often replaced by artisan and by craftsperson (craftspeople). Historically, the more specialized crafts with high-value products tended to concentrate in urban centers and formed guilds. The skill required by their professions and the need to be permanently involved in the exchange of goods often demanded a generally higher level of education, and craftsmen were usually in a more privileged position than the peasantry in the societal hierarchy. The households of craftsmen were not as self-sufficient as those of people engaged in agricultural work, and therefore had to rely on the exchange of goods. Some crafts, especially in areas such as pottery, woodworking, and various stages of textile production, could be practiced on a part-time basis by those also working in agriculture, and often formed part of village life. When an apprentice finished his apprenticeship, he became a journeyman searching for a place to set up his own shop and make a living. After he set up his own shop, he could then call himself a master of his craft. This stepwise approach to mastery of a craft, which includes the attainment of some education and skill, has survived in some countries until today. But crafts have undergone deep structural changes during and since the era of the Industrial Revolution. The mass production of goods by large-scale industry has limited crafts to market segments in which industry's modes of functioning or its mass-produced goods do not satisfy the preferences of potential buyers. As an outcome of these changes, craftspeople today increasingly make use of semi-finished components or materials and adapt these to their customers' requirements or demands. Thus, they participate in a certain division of labour between industry and craft. Classification There are three aspects to human creativity: art, crafts, and science. Roughly determined, art relies upon intuitive sensing, vision, and expression; crafts upon sophisticated technique; and science upon knowledge. Handicraft Handicraft is the "traditional" main sector of the crafts; it is a type of work where useful and decorative devices are made completely by hand or by using only simple tools. The term is usually applied to traditional means of making goods. The individual artisanship of the items is a paramount criterion; such items often have cultural and/or religious significance. Items made by mass production or machines are not handicraft goods. Handicraft goods are made with craft production processes. The beginning of crafts in areas like the Ottoman Empire involved the governing bodies requiring members of the city who were skilled at creating goods to open shops in the center of town. These people slowly stopped acting as subsistence farmers (who created goods in their own homes to trade with neighbors) and began to represent what we think of as a "craftsman" today. In recent years, crafts and craftspeople have slowly been gaining momentum as a subject of academic study.
For example, Stephanie Bunn was an artist before she became an anthropologist, and she went on to develop an academic interest in the process of craft, arguing that what happens to an object before it becomes a product is an area worthy of study. The Arts and Crafts Movement The term crafts is often used to describe the family of artistic practices within the family of decorative arts that traditionally are defined by their relationship to functional or utilitarian products (such as sculptural forms in the vessel tradition) or by their use of such natural media as wood, clay, ceramics, glass, textiles, and metal. The Arts and Crafts Movement originated in Britain during the late 19th century and was characterized by a style of decoration reminiscent of medieval times. The primary artist associated with the movement is William Morris, whose work was reinforced with writings from John Ruskin. The movement placed high importance on the quality of craftsmanship, while emphasizing the importance of the arts contributing to economic reform. Studio crafts Crafts practiced by independent artists working alone or in small groups are often referred to as studio craft. Studio craft includes studio pottery, metalwork, weaving, woodturning, paper and other forms of woodworking, glassblowing, and glass art. Craft fairs A craft fair is an organized event to display and sell crafts. There are craft stores where such goods are sold and craft communities, such as Craftster, where expertise is shared. Tradesperson A tradesperson is a skilled manual worker in a particular trade or craft. Economically and socially, a tradesperson's status is considered to lie between that of a laborer and a professional, with a high degree of both practical and theoretical knowledge of their trade. In cultures where professional careers are highly prized, there can be a shortage of skilled manual workers, leading to lucrative niche markets in the trades. See also References External links Media related to Crafts at Wikimedia Commons
Hepatorenal syndrome
Hepatorenal syndrome (often abbreviated HRS) is a life-threatening medical condition that consists of rapid deterioration in kidney function in individuals with cirrhosis or fulminant liver failure. HRS is usually fatal unless a liver transplant is performed, although various treatments, such as dialysis, can prevent advancement of the condition. HRS can affect individuals with cirrhosis, severe alcoholic hepatitis, or liver failure, and usually occurs when liver function deteriorates rapidly because of a sudden insult such as an infection, bleeding in the gastrointestinal tract, or overuse of diuretic medications. HRS is a relatively common complication of cirrhosis, occurring in 18% of people within one year of their diagnosis, and in 39% within five years of their diagnosis. Deteriorating liver function is believed to cause changes in the circulation that supplies the intestines, altering blood flow and blood vessel tone in the kidneys. The kidney failure of HRS is a consequence of these changes in blood flow, rather than direct damage to the kidney. The diagnosis of hepatorenal syndrome is based on laboratory tests of individuals susceptible to the condition. Two forms of hepatorenal syndrome have been defined: Type 1 HRS entails a rapidly progressive decline in kidney function, while type 2 HRS is associated with ascites (fluid accumulation in the abdomen) that does not improve with standard diuretic medications. The risk of death in hepatorenal syndrome is very high; the mortality of individuals with type 1 HRS is over 50% over the short term, as determined by historical case series. The only long-term treatment option for the condition is liver transplantation. While awaiting transplantation, people with HRS often receive other treatments that improve the abnormalities in blood vessel tone, including supportive care with medications, or the insertion of a transjugular intrahepatic portosystemic shunt (TIPS), which is a small shunt placed to reduce blood pressure in the portal vein. Some patients may require hemodialysis to support kidney function, or a newer technique called liver dialysis which uses a dialysis circuit with albumin-bound membranes to bind and remove toxins normally cleared by the liver, providing a means of extracorporeal liver support until transplantation can be performed. Classification Hepatorenal syndrome is a particular and common type of kidney failure that affects individuals with liver cirrhosis or, less commonly, with fulminant liver failure. The syndrome involves constriction of the blood vessels of the kidneys and dilation of blood vessels in the splanchnic circulation, which supplies the intestines. The classification of hepatorenal syndrome identifies two categories of kidney failure, termed type 1 and type 2 HRS, which both occur in individuals with either cirrhosis or fulminant liver failure. In both categories, the deterioration in kidney function is quantified either by an elevation in creatinine level in the blood, or by decreased clearance of creatinine in the urine. Type 1 hepatorenal syndrome Type 1 HRS is characterized by rapidly progressive kidney failure, with a doubling of serum creatinine to a level greater than 221 μmol/L (2.5 mg/dL) or a halving of the creatinine clearance to less than 20 mL/min over a period of less than two weeks. The prognosis of individuals with type 1 HRS is particularly grim, with a mortality rate exceeding 50% after one month. 
Patients with type 1 HRS are usually ill, may have low blood pressure, and may require therapy with drugs to improve the strength of heart muscle contraction (inotropes) or other drugs to maintain blood pressure (vasopressors). Unlike type 2, in type 1 hepatorenal syndrome the kidney failure improves with treatment and stabilizes. Vasoconstrictors and volume expanders are the mainstay of treatment. Type 2 hepatorenal syndrome In contrast, type 2 HRS is slower in onset and progression, and is not associated with an inciting event. It is defined by an increase in serum creatinine level to >133 μmol/L (1.5 mg/dL) or a creatinine clearance of less than 40 mL/min, and a urine sodium of less than 10 mmol/L. It also carries a poor outlook, with a median survival of approximately six months unless the affected individual undergoes liver transplantation. Type 2 HRS is thought to be part of a spectrum of illness associated with increased pressures in the portal vein circulation, which begins with the development of fluid in the abdomen (ascites). The spectrum continues with diuretic-resistant ascites, where the kidneys are unable to excrete sufficient sodium to clear the fluid even with the use of diuretic medications. Most individuals with type 2 HRS have diuretic-resistant ascites before they develop deterioration in kidney function. Signs and symptoms Both types of hepatorenal syndrome share three major components: altered liver function, abnormalities in circulation, and kidney failure. As these phenomena may not necessarily produce symptoms until late in their course, individuals with hepatorenal syndrome are typically diagnosed with the condition on the basis of altered laboratory tests. Most people who develop HRS have cirrhosis, and may have signs and symptoms of the same, which can include jaundice, altered mental status, evidence of decreased nutrition, and the presence of ascites. Specifically, the production of ascites that is resistant to the use of diuretic medications is characteristic of type 2 HRS. Oliguria, which is a decrease in urine volume, may occur as a consequence of kidney failure; however, some individuals with HRS continue to produce a normal amount of urine. As these signs and symptoms may not necessarily occur in HRS, they are not included in the major and minor criteria for making a diagnosis of this condition; instead, HRS is diagnosed in an individual at risk for the condition on the basis of the results of laboratory tests, and the exclusion of other causes. Causes Hepatorenal syndrome usually affects individuals with cirrhosis and elevated pressures in the portal vein system (termed portal hypertension). While HRS may develop in any type of cirrhosis, it is most common in individuals with alcoholic cirrhosis, particularly if there is concomitant alcoholic hepatitis identifiable on liver biopsies. HRS can also occur in individuals without cirrhosis, but with acute onset of liver failure, termed fulminant liver failure. Certain precipitants of HRS have been identified in vulnerable individuals with cirrhosis or fulminant liver failure. These include bacterial infection, acute alcoholic hepatitis, or bleeding in the upper gastrointestinal tract. Spontaneous bacterial peritonitis, which is the infection of ascites fluid, is the most common precipitant of HRS in cirrhotic individuals.
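For readers who find the numeric cut-offs easier to follow in code, the short Python sketch below restates the type 1 and type 2 thresholds quoted in the Classification section above. It is an illustration of the quoted numbers only, not a diagnostic tool: the function name and inputs are invented for this example, and the full criteria also require excluding other causes of kidney failure, as described in the Diagnosis section.

```python
# Illustrative restatement of the HRS thresholds quoted above; not a clinical tool.
def classify_hrs(baseline_creatinine_mg_dl, current_creatinine_mg_dl,
                 creatinine_clearance_ml_min, days_elapsed):
    """Map the numeric thresholds described above onto a rough label."""
    doubled = current_creatinine_mg_dl >= 2 * baseline_creatinine_mg_dl
    # Type 1: creatinine doubling to >2.5 mg/dL (221 umol/L), or creatinine
    # clearance below 20 mL/min (the text also specifies a halving, not checked
    # here for brevity), within a period of less than two weeks.
    if days_elapsed < 14 and ((doubled and current_creatinine_mg_dl > 2.5)
                              or creatinine_clearance_ml_min < 20):
        return "meets the type 1 HRS thresholds"
    # Type 2: slower onset; creatinine >1.5 mg/dL (133 umol/L) or clearance <40 mL/min.
    if current_creatinine_mg_dl > 1.5 or creatinine_clearance_ml_min < 40:
        return "meets the type 2 HRS thresholds"
    return "does not meet the creatinine thresholds quoted above"

# Hypothetical example: creatinine rising from 1.3 to 2.8 mg/dL over ten days.
print(classify_hrs(1.3, 2.8, 18, 10))
```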
HRS can sometimes be triggered by treatments for complications of liver disease: iatrogenic precipitants of HRS include the aggressive use of diuretic medications or the removal of large volumes of ascitic fluid by paracentesis from the abdominal cavity without compensating for fluid losses by intravenous replacement. Diagnosis There can be many causes of kidney failure in individuals with cirrhosis or fulminant liver failure. Consequently, it is a challenge to distinguish hepatorenal syndrome from other entities that cause kidney failure in the setting of advanced liver disease. As a result, additional major and minor criteria have been developed to assist in the diagnosis of hepatorenal syndrome. The major criteria include liver disease with portal hypertension; kidney failure; the absence of shock, infection, recent treatment with medications that affect the function of the kidney (nephrotoxins), and fluid losses; the absence of sustained improvement in kidney function despite treatment with 1.5 litres of intravenous normal saline; the absence of proteinuria (protein in the urine); and the absence of kidney disease or obstruction of kidney outflow as seen on ultrasound. The minor criteria are the following: a low urine volume (less than 500 mL (18 imp fl oz; 17 US fl oz) per day), low sodium concentration in the urine, a urine osmolality that is greater than that in the blood, the absence of red blood cells in the urine, and a serum sodium concentration of less than 130 mmol/L. Many other diseases of the kidney are associated with liver disease and must be excluded before making a diagnosis of hepatorenal syndrome. Individuals with pre-renal kidney failure do not have damage to the kidneys, but, as in individuals with HRS, have kidney dysfunction due to decreased blood flow to the kidneys. Also, similarly to HRS, pre-renal kidney failure causes the formation of urine that has a very low sodium concentration. In contrast to HRS, however, pre-renal kidney failure usually responds to treatment with intravenous fluids, resulting in reduction in serum creatinine and increased excretion of sodium. Acute tubular necrosis (ATN) involves damage to the tubules of the kidney, and can be a complication in individuals with cirrhosis, because of exposure to toxic medications or the development of decreased blood pressure. Because of the damage to the tubules, ATN-affected kidneys are usually unable to maximally resorb sodium from the urine. As a result, ATN can be distinguished from HRS on the basis of laboratory testing, as individuals with ATN will have urine sodium measurements that are much higher than in HRS; however, this may not always be the case in cirrhotics. Individuals with ATN also may have evidence of hyaline casts or muddy-brown casts in the urine on microscopy, whereas the urine of individuals with HRS is typically devoid of cellular material, as the kidneys have not been directly injured. Some viral infections of the liver, including hepatitis B and hepatitis C, can also lead to inflammation of the glomerulus of the kidney. Other causes of kidney failure in individuals with liver disease include drug toxicity (notably, the antibiotic gentamicin) or contrast nephropathy, caused by intravenous administration of contrast agents used for medical imaging tests. Pathophysiology The kidney failure in hepatorenal syndrome is believed to arise from abnormalities in blood vessel tone in the kidneys.
The predominant theory (termed the underfill theory) is that blood vessels in the kidney circulation are constricted because of the dilation of blood vessels in the splanchnic circulation (which supplies the intestines), which is mediated by factors released by liver disease. Nitric oxide, prostaglandins, and other vasoactive substances have been hypothesized as powerful mediators of splanchnic vasodilation in cirrhosis. The consequence of this phenomenon is a decrease in the "effective" volume of blood sensed by the juxtaglomerular apparatus, leading to the secretion of renin and the activation of the renin–angiotensin system, which results in the vasoconstriction of vessels systemically and in the kidney specifically. However, the effect of this is insufficient to counteract the mediators of vasodilation in the splanchnic circulation, leading to persistent "underfilling" of the kidney circulation and worsening kidney vasoconstriction, leading to kidney failure. Studies to quantify this theory have shown that there is an overall decreased systemic vascular resistance in hepatorenal syndrome, but that the measured femoral and kidney fractions of cardiac output are respectively increased and reduced, suggesting that splanchnic vasodilation is implicated in the kidney failure. Many vasoactive chemicals have been hypothesized as being involved in mediating the systemic hemodynamic changes, including atrial natriuretic factor, prostacyclin, thromboxane A2, and endotoxin. In addition to this, it has been observed that the administration of medications to counteract splanchnic vasodilation (such as ornipressin, terlipressin, and octreotide) leads to improvement in glomerular filtration rate (which is a quantitative measure of kidney function) in patients with hepatorenal syndrome, providing further evidence that splanchnic vasodilation is a key feature of its pathogenesis. The underfill theory involves activation of the renin–angiotensin–aldosterone system, which leads to an increase in absorption of sodium from the kidney tubule (termed renal sodium avidity) mediated by aldosterone, which acts on mineralocorticoid receptors in the distal convoluted tubule. This is believed to be a key step in the pathogenesis of ascites in cirrhotics as well. It has been hypothesized that the progression from ascites to hepatorenal syndrome is a spectrum where splanchnic vasodilation defines both resistance to diuretic medications in ascites (which is commonly seen in type 2 HRS) and the onset of kidney vasoconstriction (as described above) leading to hepatorenal syndrome. Prevention The risk of death in hepatorenal syndrome is very high; consequently, there is a significant emphasis on the identification of patients who are at risk for HRS, and prevention of triggers for onset of HRS. As infection (specifically spontaneous bacterial peritonitis) and gastrointestinal hemorrhage are both complications in individuals with cirrhosis, and are common triggers for HRS, specific care is taken in early identification and treatment of cirrhotics with these complications to prevent HRS. Some of the triggers for HRS are induced by treatment of ascites and can be preventable. The aggressive use of diuretic medications should be avoided. In addition, many medications that are either used to treat cirrhotic complications (such as some antibiotics) or other conditions may cause sufficient impairment in kidney function in the cirrhotic to lead to HRS.
Also, large-volume paracentesis—which is the removal of ascites fluid from the abdomen using a needle or catheter in order to relieve discomfort—may cause enough alteration in hemodynamics to precipitate HRS, and should be avoided in individuals at risk. The concomitant infusion of albumin can avert the circulatory dysfunction that occurs after large-volume paracentesis and may prevent HRS. Conversely, in individuals with very tense ascites, it has been hypothesized that removal of ascitic fluid may improve kidney function if it decreases the pressure on the renal veins. Individuals with ascites that have become infected spontaneously (termed spontaneous bacterial peritonitis or SBP) are at an especially high risk for the development of HRS. In individuals with SBP, one randomized controlled trial found that the administration of intravenous albumin on the day of admission and on the third day in hospital reduced both the rate of kidney insufficiency and the mortality rate. Treatment Transplantation The definitive treatment for hepatorenal syndrome is liver transplantation, and all other therapies can best be described as bridges to transplantation. While liver transplantation is by far the best available management option for HRS, the mortality of individuals with HRS has been shown to be as high as 25% within the first month after transplantation. Individuals with HRS and evidence of greater hepatic dysfunction (quantified as MELD scores above 36) have been found to be at greatest risk of early mortality after liver transplantation. A further deterioration of kidney function even after liver transplantation in individuals with HRS has been demonstrated in several studies; however, this is transient and thought to be due to the use of medications with toxicity to the kidneys, and specifically the introduction of immunosuppressants such as tacrolimus and cyclosporine that are known to worsen kidney function. Over the long term, however, individuals with HRS who are the recipients of liver transplants almost universally recover kidney function, and studies show that their survival rates at three years are similar to those who have received liver transplants for reasons other than HRS. In anticipation of liver transplantation (which may be associated with considerable in-hospital delay), several other strategies have been found to be beneficial in preserving kidney function. These include the use of intravenous albumin infusion, medications (for which the best evidence is for analogues of vasopressin, which causes splanchnic vasoconstriction), radiological shunts to decrease pressure in the portal vein, dialysis, and a specialized albumin-bound membrane dialysis system termed molecular adsorbents recirculation system (MARS) or liver dialysis. Medical therapy Many major studies showing improvement in kidney function in patients with hepatorenal syndrome have involved expansion of the volume of the plasma with albumin given intravenously. The quantity of albumin administered intravenously varies: one cited regimen is 1 gram of albumin per kilogram of body weight intravenously on the first day, followed by 20 to 40 grams daily.
Notably, studies have shown that treatment with albumin alone is inferior to treatment with other medications in conjunction with albumin; most studies evaluating pre-transplant therapies for HRS involve the use of albumin in conjunction with other medical or procedural treatment. Midodrine is an alpha-agonist and octreotide is an analogue of somatostatin, a hormone involved in regulation of blood vessel tone in the gastrointestinal tract. The medications are respectively systemic vasoconstrictors and inhibitors of splanchnic vasodilation, and were not found to be useful when used individually in treatment of hepatorenal syndrome. However, one study of 13 patients with hepatorenal syndrome showed significant improvement in kidney function when the two were used together (with midodrine given orally, octreotide given subcutaneously, and both dosed according to blood pressure), with three patients surviving to discharge. Another nonrandomized, observational study of individuals with HRS treated with subcutaneous octreotide and oral midodrine showed that there was increased survival at 30 days. The vasopressin analogue ornipressin was found in a number of studies to be useful in improvement of kidney function in patients with hepatorenal syndrome, but has been limited in its use, as it can cause severe ischemia to major organs. Terlipressin is a vasopressin analogue that has been found in one large study to be useful for improving kidney function in patients with hepatorenal syndrome with a lesser incidence of ischemia but is not available in the United States. A randomized controlled trial led by Florence Wong demonstrated improved renal function in individuals with type 1 HRS treated with terlipressin and albumin over placebo. A key criticism of all of these medical therapies has been heterogeneity in the populations investigated and the use of kidney function, instead of mortality, as an outcome measure. Other agents that have been investigated for use in treatment of HRS include pentoxifylline, acetylcysteine, and misoprostol. The evidence for all of these therapies is based on either case series, or in the case of pentoxifylline, extrapolated from a subset of patients treated for alcoholic hepatitis. Procedural treatments A transjugular intrahepatic portosystemic shunt (TIPS) involves the decompression of the high pressures in the portal circulation by placing a small stent between a portal and hepatic vein. This is done through radiologically guided catheters which are passed into the hepatic vein either through the internal jugular vein or the femoral vein. Theoretically, a decrease in portal pressures is thought to reverse the hemodynamic phenomena that ultimately lead to the development of hepatorenal syndrome. TIPS has been shown to improve kidney function in patients with hepatorenal syndrome. Complications of TIPS for treatment of HRS include the worsening of hepatic encephalopathy (as the procedure involves the forced creation of a porto-systemic shunt, effectively bypassing the ability of the liver to clear toxins), inability to achieve adequate reduction in portal pressure, and bleeding. Liver dialysis involves extracorporeal dialysis to remove toxins from the circulation, usually through the addition of a second dialysis circuit that contains an albumin-bound membrane.
The molecular adsorbents recirculation system (MARS) has shown some utility as a bridge to transplantation in patients with hepatorenal syndrome, yet the technique is still nascent. Renal replacement therapy may be required to bridge individuals with hepatorenal syndrome to liver transplantation, although the condition of the patient may dictate the modality used. The use of dialysis, however, does not lead to recuperation or preservation of kidney function in patients with HRS, and is essentially only used to avoid complications of kidney failure until transplantation can take place. In patients who undergo hemodialysis, there may even be an increased risk of mortality due to low blood pressure in patients with HRS, although appropriate studies have yet to be performed. As a result, the role of renal replacement therapy in patients with HRS remains unclear. Epidemiology As the majority of individuals with hepatorenal syndrome have cirrhosis, much of the epidemiological data on HRS comes from the cirrhotic population. The condition is quite common: approximately 10% of individuals admitted to hospital with ascites have HRS. A retrospective case series of cirrhotic patients treated with terlipressin suggested that 20.0% of acute kidney failure in cirrhotics was due to type 1 HRS, and 6.6% was due to type 2 HRS. It is estimated that 18% of individuals with cirrhosis and ascites will develop HRS within one year of their diagnosis with cirrhosis, and 39% of these individuals will develop HRS within five years of diagnosis. Three independent risk factors for the development of HRS in cirrhotics have been identified: liver size, plasma renin activity, and serum sodium concentration. The prognosis of these patients is grim, with untreated patients having an extremely short survival. The severity of liver disease (as evidenced by the MELD score) has been shown to be a determinant of outcome. Some patients without cirrhosis develop HRS, with an incidence of about 20% seen in one study of ill patients with alcoholic hepatitis. History The first reports of kidney failure occurring in individuals with chronic liver diseases were from the late 19th century by Frerichs and Flint. However, the hepatorenal syndrome was first defined as acute kidney failure that occurred in the setting of biliary surgery. The syndrome was soon re-associated with advanced liver disease, and, in the 1950s, was clinically defined by Sherlock, Hecker, Papper, and Vessin as being associated with systemic hemodynamic abnormalities and high mortality. Hecker and Sherlock specifically identified that individuals with HRS had low urinary output, very low sodium in the urine, and no protein in the urine. Murray Epstein was the first to characterize splanchnic vasodilation and kidney vasoconstriction as the key alterations in hemodynamics in patients with the syndrome. The functional nature of the kidney impairment in HRS was crystallized by studies demonstrating that kidneys transplanted from patients with hepatorenal syndrome returned to function in the new host, leading to the hypothesis that hepatorenal syndrome was a systemic condition and not a kidney disease. The first systematic attempt to define hepatorenal syndrome was made in 1994 by the International Ascites Club, a group of liver specialists. The more recent history of HRS has involved elucidation of the various vasoactive mediators that cause the splanchnic and kidney blood flow abnormalities of the condition. == References ==
Tetany
Tetany or tetanic seizure is a medical sign consisting of the involuntary contraction of muscles, which may be caused by disorders that increase the action potential frequency of muscle cells or the nerves that innervate them. Muscle cramps caused by the disease tetanus are not classified as tetany; rather, they are due to a lack of inhibition to the neurons that supply muscles. Tetanic contractions (physiologic tetanus) are a broad range of muscle contraction types, of which tetany is only one. Signs and symptoms Tetany is characterized by contraction of distal muscles of the hands (carpal spasm with extension of interphalangeal joints and adduction and flexion of the metacarpophalangeal joints) and feet (pedal spasm) and is associated with tingling around the mouth and distally in the limbs. Causes The usual cause of tetany is a deficiency of calcium. An excess of phosphate (high phosphate-to-calcium ratio) can also trigger the spasms. Underfunction of the parathyroid gland can lead to tetany. Low levels of carbon dioxide cause tetany by altering the albumin binding of calcium such that the ionized (physiologically influencing) fraction of calcium is reduced; one common reason for low carbon dioxide levels is hyperventilation. Low levels of magnesium can lead to tetany. Clostridium tetani toxin, via inhibition of glycine-mediated and GABA-ergic neurotransmission, may lead to tetany. An excess of potassium in grass hay or pasture can trigger winter tetany, or grass tetany, in ruminants. Osteomalacia and rickets due to deficiency of vitamin D. Metabolic alkalosis with hypokalemia, as in Gitelman syndrome and Bartter's syndrome, can cause tetany. Vomiting-induced alkalosis and hyperventilation-induced respiratory alkalosis also cause tetany because of neuronal irritability. Pathophysiology Hypocalcemia is the primary cause of tetany. Low ionized calcium levels in the extracellular fluid increase the permeability of neuronal membranes to sodium ion, causing a progressive depolarization, which increases the possibility of action potentials. This occurs because calcium ions interact with the exterior surface of sodium channels in the plasma membrane of nerve cells, and hypocalcemia effectively increases resting potential (rendering the cells more excitable) since less positive charge is present extracellularly. When calcium ions are absent, the voltage level required to open voltage-gated sodium channels is significantly altered (less excitation is required). If the plasma Ca2+ decreases to less than 50% of the normal value of 9.4 mg/dL, action potentials may be spontaneously generated, causing contraction of peripheral skeletal muscles. Hypocalcemia is not a term for tetany but is rather a cause of tetany. Diagnosis French professor Armand Trousseau (1801–1867) devised the maneuver of occluding the brachial artery by squeezing, to trigger cramps in the fingers. This is now known as the Trousseau sign of latent tetany. Also, tetany can be demonstrated by tapping anterior to the ear, at the emergence of the facial nerve. A resultant twitch of the nose or lips suggests low calcium levels. This is now known as the Chvostek sign. EMG studies reveal single or often grouped motor unit discharges at low discharge frequency during tetany episodes. References == External links ==
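As a purely arithmetic illustration of the figure quoted in the Pathophysiology section above (spontaneous action potentials when plasma calcium falls below 50% of the normal 9.4 mg/dL), the short Python sketch below computes that cut-off. The function name and example values are invented for this illustration, and the number is the one given in the text rather than a clinical reference range.

```python
# Arithmetic illustration only, using the figure quoted in the text above.
NORMAL_PLASMA_CALCIUM_MG_DL = 9.4  # normal value cited in the text

def below_half_normal(plasma_calcium_mg_dl):
    """Return True when plasma calcium is below 50% of the cited normal value."""
    threshold = 0.5 * NORMAL_PLASMA_CALCIUM_MG_DL  # 4.7 mg/dL
    return plasma_calcium_mg_dl < threshold

print(below_half_normal(4.2))  # True: below 4.7 mg/dL
print(below_half_normal(8.9))  # False
```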
Haverhill fever
Haverhill fever (or epidemic arthritic erythema) is a systemic illness caused by the bacterium Streptobacillus moniliformis, an organism common in rats and mice. If untreated, the illness can have a mortality rate of up to 13%. Of the two types of rat-bite fever, Haverhill fever, caused by Streptobacillus moniliformis, is the most common in North America. The other type of infection, caused by Spirillum minus, is more common in Asia and is also known as Sodoku. The initial non-specific presentation of the disease and hurdles in culturing the causative microorganism are at times responsible for a delay or failure in the diagnosis of the disease. Although non-specific in nature, relapsing fever, rash, and migratory polyarthralgia are the most common initial symptoms of epidemic arthritic erythema. Bites and scratches from rodents carrying the bacteria are generally responsible for the affliction. However, the disease can be spread even without physical lacerations by rodents. In fact, the disease was first recognized from a milk-associated outbreak which occurred in Haverhill, Massachusetts, in January 1926. The organism S. moniliformis was isolated from the patients and, epidemiologically, consumption of milk from one particular dairy was implicated in association with the infection. Hence, ingestion of food and drink contaminated with the bacteria can also result in the development of the disease. Symptoms and signs The illness resembles a severe influenza, with a moderate fever (38–40 °C, or 101–104 °F), sore throat, chills, myalgia, headache, vomiting, and a diffuse red rash (maculopapular, petechial, or purpuric), located mostly on the hands and feet. The incubation period for the bacteria generally lasts from three to ten days. As the disease progresses, almost half the patients experience migratory polyarthralgias. Mechanism Although the specific form of pathogenesis is still a subject of ongoing research, the bacterium has been observed to result in morphological symptoms that are atypical of bacterial infection. Autopsies of victims vividly exhibit erythrophagocytosis, hepatosplenomegaly, interstitial pneumonia, and lymph node sinus hyperplasia. In addition, myocarditis and endocarditis have also been demonstrated in such patients. Synovial and serosal surfaces may be more suited for the growth of the bacteria within the body. Furthermore, leukocytoclastic vasculitis has been observed in the skin lesions. Diagnosis The microaerophilic conditions needed for the bacteria to grow make its detection difficult. Trypticase soy agar or broth enriched with 20% blood, serum, or ascitic fluid is necessary for the optimal growth of the bacteria under laboratory conditions. The organism may take up to seven days to grow, and the colonies generally have a circular, grayish, and shiny appearance on agar. Once the microbe has grown, primary identification can be carried out via biochemical and carbohydrate fermentation analysis. Biochemical tests such as oxidase, catalase, indole, and nitrate can be used to detect the bacteria. S. moniliformis can be biochemically differentiated from similar bacteria by its negative results for indole, catalase, and oxidase production, together with reduction of nitrate to nitrite. A PCR assay specific for Streptobacillus moniliformis can also be used to detect the bacteria in a patient sample with high accuracy. The PCR assay utilizes primers based on the 16S rRNA gene base sequences of human and rodent strains of S. moniliformis (forward primer, 5′ GCT TAA CAC ATG CAA ATC TAT 3′ and reverse primer, 5′ AGT AAG GGC CGT ATC TCA 3′). These primers exhibit 100% complementarity to S. moniliformis ATCC 14674T and S. moniliformis ANL 370-1. The PCR assay generates a 296-bp product which, upon treatment with the BfaI restriction enzyme, yields three distinct fragments (128, 92, and 76 bp), which are specific to S. moniliformis. Hence, this assay can be used to detect S. moniliformis with great accuracy. Prevention Although rare, the disease is seeing an increase in incidence for various reasons. The most important among them is the fact that rodents are increasingly finding their way into our homes, either as pets or as pests. In fact, the number of children affected by rat-bite fever has been particularly on the rise. Therefore, wild rats should not be brought home, and if there is an infestation, appropriate measures for extermination must be undertaken to prevent the disease from spreading. Treatment The bacteria are susceptible to a number of antibiotics. They are: cephalosporins, carbapenems, aztreonam, clindamycin, erythromycin, nitrofurantoin, bacitracin, doxycycline, tetracycline, teicoplanin, and vancomycin. However, the data suggest that treatment with erythromycin can be less effective. Intravenous penicillin G (400,000–600,000 IU/day) should be administered for 7 days, followed by a course of oral penicillin. Children should receive a much lower dose: 20,000–50,000 IU per kg of body weight per day. However, if somebody is allergic to penicillin, streptomycin or tetracycline can be administered, as they have also been observed to provide efficacious results. In case of complications such as endocarditis, a combination therapy with both intravenous penicillin G and streptomycin or gentamicin is necessary. References == External links ==
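To make the weight-based figure in the Treatment section concrete, the short Python sketch below multiplies out the quoted pediatric range (20,000–50,000 IU per kg per day). It is an arithmetic illustration only, not dosing guidance; the function name and the example weight are invented for this sketch.

```python
# Arithmetic illustration of the per-kilogram range quoted above; not dosing guidance.
def pediatric_penicillin_range_iu_per_day(weight_kg):
    """Return the (low, high) daily IU range implied by the figures in the text."""
    return 20_000 * weight_kg, 50_000 * weight_kg

low, high = pediatric_penicillin_range_iu_per_day(15)  # hypothetical 15 kg child
print(f"{low:,} to {high:,} IU per day")  # 300,000 to 750,000 IU per day
```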
Osteomalacia
Osteomalacia is a disease characterized by the softening of the bones caused by impaired bone metabolism primarily due to inadequate levels of available phosphate, calcium, and vitamin D, or because of resorption of calcium. The impairment of bone metabolism causes inadequate bone mineralization. Osteomalacia in children is known as rickets, and because of this, use of the term "osteomalacia" is often restricted to the milder, adult form of the disease. Signs and symptoms can include diffuse body pains, muscle weakness, and fragility of the bones. In addition to low systemic levels of circulating mineral ions (for example, caused by vitamin D deficiency or renal phosphate wasting) that result in decreased bone and tooth mineralization, accumulation of mineralization-inhibiting proteins and peptides (such as osteopontin and ASARM peptides), and small inhibitory molecules (such as pyrophosphate), can occur in the extracellular matrix of bones and teeth, contributing locally to cause matrix hypomineralization (osteomalacia/odontomalacia). A relationship describing local, physiologic double-negative (inhibiting inhibitors) regulation of mineralization has been termed the Stenciling Principle of mineralization, whereby enzyme-substrate pairs imprint mineralization patterns into the extracellular matrix (most notably described for bone) by degrading mineralization inhibitors (e.g. TNAP/TNSALP/ALPL enzyme degrading the pyrophosphate inhibition, and PHEX enzyme degrading the osteopontin inhibition). The Stenciling Principle for mineralization is particularly relevant to the osteomalacia and odontomalacia observed in hypophosphatasia (HPP) and X-linked hypophosphatemia (XLH). The most common cause of osteomalacia is a deficiency of vitamin D, which is normally derived from sunlight exposure and, to a lesser extent, from the diet. The most specific screening test for vitamin D deficiency in otherwise healthy individuals is a serum 25(OH)D level. Less common causes of osteomalacia can include hereditary deficiencies of vitamin D or phosphate (which would typically be identified in childhood) or malignancy. Vitamin D and calcium supplements are measures that can be used to prevent and treat osteomalacia. Vitamin D should always be administered in conjunction with calcium supplementation (as the pair work together in the body) since most of the consequences of vitamin D deficiency are a result of impaired mineral ion homeostasis. Nursing home residents and the homebound elderly population are at particular risk for vitamin D deficiency, as these populations typically receive little sun exposure. In addition, both the efficiency of vitamin D synthesis in the skin and the absorption of vitamin D from the intestine decline with age, thus further increasing the risk in these populations. Other groups at risk include individuals with malabsorption secondary to gastrointestinal bypass surgery or celiac disease, and individuals who immigrate from warm climates to cold climates, especially women who wear traditional veils or dresses that prevent sun exposure. Signs and symptoms Osteomalacia is a generalized bone condition in which there is inadequate mineralization of the bone. Many of the effects of the disease overlap with the more common osteoporosis, but the two diseases are significantly different.
There are two main causes of osteomalacia: insufficient calcium absorption from the intestine because of a lack of dietary calcium, a deficiency of or resistance to the action of vitamin D, or undiagnosed celiac disease; and phosphate deficiency caused by increased renal losses. Symptoms: Osteomalacia in adults starts insidiously as aches and pains in the lumbar (lower back) region and thighs before spreading to the arms and ribs. The pain is symmetrical, non-radiating and accompanied by sensitivity in the involved bones. Proximal muscles are weak, and there is difficulty in climbing stairs and getting up from a squatting position. As a result of demineralization, the bones become less rigid. Physical signs include deformities like triradiate pelvis and lordosis. The patient has a typical "waddling" gait. However, these physical signs may derive from a previous osteomalacial state, since bones do not regain their original shape after they become deformed. Pathologic fractures due to weight bearing may develop. Most of the time, the only alleged symptom is chronic fatigue, while bone aches are not spontaneous but only revealed by pressure or shocks. It differs from renal osteodystrophy, where the latter shows hyperphosphatemia. Causes The causes of adult osteomalacia are varied, but ultimately result in a vitamin D deficiency. Diagnosis Biochemical findings Biochemical features are similar to those of rickets. The major factor is an abnormally low vitamin D concentration in blood serum. Major typical biochemical findings include: Low serum and urinary calcium Low serum phosphate, except in cases of renal osteodystrophy Elevated serum alkaline phosphatase (due to an increase in compensatory osteoblast activity) Elevated parathyroid hormone (due to low calcium) Furthermore, a technetium bone scan will show increased activity (also due to increased osteoblasts). Radiographic characteristics Radiological appearances include: Pseudofractures, also called Looser's zones. Protrusio acetabuli, a hip joint disorder Prevention Prevention of osteomalacia rests on having an adequate intake of vitamin D and calcium, or other treatments if the osteomalacia is hereditary (genetic). Vitamin D3 supplementation is often needed due to the scarcity of vitamin D sources in the modern diet. Treatment Nutritional osteomalacia responds well to administration of 2,000–10,000 IU of vitamin D3 by mouth daily. Vitamin D3 (cholecalciferol) is typically absorbed more readily than vitamin D2 (ergocalciferol). Osteomalacia due to malabsorption may require treatment by injection or daily oral dosing of significant amounts of vitamin D3. Etymology Osteomalacia is derived from Greek: osteo-, which means "bone", and malacia, which means "softness". In the past, the disease was also known as malacosteon and its Latin-derived equivalent, mollities ossium. Osteomalacia is associated with an increase in osteoid maturation time. See also Osteopetrosis References == External links ==
Herpesviral encephalitis
Herpesviral encephalitis, or herpes simplex encephalitis (HSE), is encephalitis due to herpes simplex virus. It is estimated to affect at least 1 in 500,000 individuals per year, and some studies suggest an incidence rate of 5.9 cases per 100,000 live births. About 90% of cases of herpes encephalitis are caused by herpes simplex virus-1 (HSV-1), the same virus that causes cold sores. According to a 2006 estimate, 57% of American adults were infected with HSV-1, which is spread through droplets, casual contact and sometimes sexual contact, though most infected people never have cold sores. The remaining cases are due to HSV-2, which is typically spread through sexual contact and is the cause of genital herpes. Two-thirds of HSE cases occur in individuals already seropositive for HSV-1, few of whom (only 10%) have a history of recurrent orofacial herpes, while about one-third of cases result from an initial infection by HSV-1, predominantly occurring in individuals under the age of 18. Approximately half of individuals who develop HSE are over 50 years of age. Signs and symptoms Most individuals with HSE show a decrease in their level of consciousness and an altered mental state presenting as confusion and changes in personality. Increased numbers of white blood cells can be found in patients' cerebrospinal fluid, without the presence of pathogenic bacteria and fungi. Patients typically have a fever and may have seizures. The electrical activity of the brain changes as the disease progresses, first showing abnormalities in one temporal lobe of the brain, which spread to the other temporal lobe 7–10 days later. Imaging by CT or MRI shows characteristic changes in the temporal lobes. Definite diagnosis requires testing of the cerebrospinal fluid (CSF) by a lumbar puncture (spinal tap) for presence of the virus. The testing takes several days to perform, and patients with suspected herpes encephalitis should be treated with acyclovir immediately while waiting for test results. An atypical, stroke-like presentation of HSV encephalitis has also been described, and clinicians should be aware that HSV encephalitis can mimic a stroke. Associated conditions Herpesviral encephalitis can serve as a trigger of anti-NMDA receptor encephalitis. About 30% of HSE patients develop this secondary immunologic reaction, which is associated with impaired neurocognitive recovery. Epidemiology The annual incidence of herpesviral encephalitis is from 2 to 4 cases per 1 million population. Pathophysiology HSE is thought to be caused by the transmission of the virus from a peripheral site on the face following HSV-1 reactivation, along a nerve axon, to the brain. The virus lies dormant in the ganglion of the trigeminal cranial nerve, but the reason for reactivation, and its pathway to gain access to the brain, remains unclear, though changes in the immune system caused by stress clearly play a role in animal models of the disease. The olfactory nerve may also be involved in HSE, which may explain its predilection for the temporal lobes of the brain, as the olfactory nerve sends branches there. In horses, a single-nucleotide polymorphism is sufficient to allow the virus to cause neurological disease, but no similar mechanism has been found in humans. Diagnosis Brain CT scan (with/without contrast). 
This should be completed prior to lumbar puncture to exclude significantly increased intracranial pressure (ICP), obstructive hydrocephalus, or mass effect. Brain MRI may show increased T2 signal intensity in the frontotemporal region, suggestive of viral (HSV) encephalitis. Treatment Herpesviral encephalitis can be treated with high-dose intravenous acyclovir, which should be infused at 10 mg/kg (in adults) over 1 hour to avoid kidney failure. Without treatment, HSE results in rapid death in approximately 70% of cases; survivors suffer severe neurological damage. When treated, HSE is still fatal in one-third of cases, and causes serious long-term neurological damage in over half of survivors. Twenty percent of treated patients recover with minor damage. Only a small population of untreated survivors (2.5%) regain completely normal brain function. Many amnesic cases in the scientific literature have etiologies involving HSE. Earlier treatment (within 48 hours of symptom onset) improves the chances of a good recovery. Rarely, treated individuals can have a relapse of infection weeks to months later. There is evidence that aberrant inflammation triggered by herpes simplex can result in granulomatous inflammation in the brain, which responds to steroids. While the herpes virus can be spread, encephalitis itself is not infectious. Other viruses can cause similar symptoms of encephalitis, though usually milder (herpesvirus 6, varicella zoster virus, Epstein–Barr virus, cytomegalovirus, coxsackievirus, etc.). References == External links ==
Admission
Admission may refer to: Arts and media "Admissions" (CSI: NY), an episode of CSI: NY Admissions (film), a 2011 short film starring James Cromwell Admission (film), a 2013 comedy film Admission, a 2019 album by Florida sludge metal band Torche Admission (novel), a 2020 novel by Julie Buxbaum Legal proceedings Admission (law), a statement that may be used in court against the person making it Acceptance of admissible evidence in court The process of official inclusion in a state, the opposite of secession Status granted to a person University and college admission Admission to the bar, change in status allowing an applicant to become part of a profession Other uses The process by which patients enter into inpatient care Admittance, the inverse of impedance See also Admissibility (disambiguation) List of U.S. states by date of admission to the Union
Bullous impetigo
Bullous impetigo is a bacterial skin infection caused by Staphylococcus aureus that results in the formation of large blisters called bullae, usually in areas with skin folds like the armpit, groin, between the fingers or toes, beneath the breast, and between the buttocks. It accounts for 30% of cases of impetigo, the other 70% being non-bullous impetigo. The bullae are caused by exfoliative toxins produced by Staphylococcus aureus that cause the connections between cells in the uppermost layer of the skin to fall apart. Bullous impetigo in newborns, children, or adults who are immunocompromised and/or are experiencing kidney failure, can develop into a more severe and generalized form called staphylococcal scalded skin syndrome (SSSS). The mortality rate is less than 3% for infected children, but up to 60% in adults. Signs and symptoms Bullous impetigo can appear around the diaper region, axilla, or neck. The bacteria produce a toxin that reduces cell-to-cell stickiness (adhesion), causing the top layer of skin (epidermis) and the lower layer of skin (dermis) to separate. Vesicles rapidly enlarge and form bullae, blisters more than 5 mm across; in its generalized form, this blistering is known as staphylococcal scalded skin syndrome. Other associated symptoms are itching, swelling of nearby glands, fever and diarrhea. Pain is very rare. Long-term effects: once the scabs on the bullae have fallen off, scarring is minimal. Possible long-term effects include kidney disease. Cause Exposure is most commonly seen in hospital wards and nurseries, and can be passed from person to person in other settings, such as close contact sports. Therefore, the patient is advised to try to limit human contact as much as possible to minimize the risk of spreading the infection. Infectious period After 48 hours the disease is considered no longer contagious, assuming the proper antibiotic treatments have been administered. Pathogenesis Exfoliating toxins are serine proteases that specifically bind to and cleave desmoglein 1 (Dsg1). Previous studies suggested that exfoliating toxins bind to gangliosides, causing a release of protease by keratinocytes and acting as superantigens that stimulate the skin's immune system. A more recent proposal states that there are three known exfoliating toxins (ETA, ETB, and ETD), which act as glutamic acid-specific serine proteases with concentrated specificity, resulting in the cleavage of human Dsg1 at a unique site after glutamic acid residues and causing its deactivation. Proteolysis of this peptide bond leads to dysfunction of Dsg1 and the desmosome, which explains why the bullae form; the intact peptide bond is crucial for the proper function of Dsg1. S. aureus A pyogenic, non-motile, Gram-positive coccus that forms grape-like clusters. Like other staphylococci, S. aureus has a variety of virulence factors, which include surface proteins involved in adherence, secreted enzymes that degrade proteins, and secreted toxins that damage the host's cells. S. aureus expresses surface receptors for fibrinogen, fibronectin, and vitronectin. These surface receptors allow a bridge to be formed that binds to host endothelial cells. Lipases allow the degradation of lipids on the skin surface, and their expression can be directly correlated with the ability of the bacteria to produce abscesses. Diagnosis Diagnosis involves observing the skin's physical appearance, or swabbing a culture of the lesion for S. aureus. 
Nasal swabs from the patient's immediate family members are necessary to identify them as being asymptomatic nasal carriers of S. aureus. Histology The epidermis is composed of four layers: the stratum basale, stratum spinosum, stratum granulosum, and stratum corneum. The cleavage plane can be found either subcorneally or within the upper stratum granulosum. The roof of the pustule is parakeratotic stratum corneum, and the floor is formed of keratinocytes, which may or may not be acantholytic. Neutrophils begin to fill the pustule. Toxins are produced by S. aureus and target desmoglein, which is a desmosomal cell-cell adhesion molecule found in the upper levels of the epidermis. This correlates with the subcorneal localization of the bullae. Uncommon variants Erythema multiforme Systemic lupus erythematosus Stevens–Johnson syndrome Pemphigus vulgaris Differential HPV Insect bites Burns Herpes simplex 1/2 Prevention Since the common pathogens involved with impetigo are bacteria naturally found on the skin, most prevention (especially in children) is targeted towards appropriate hygiene, wound cleaning, and minimizing scratching (e.g. by keeping nails trimmed and short). Avoiding close contact and sharing of items such as towels with potentially infected individuals is also recommended. Management Antibiotic creams are the preferred treatment for mild cases of impetigo, despite their limited systemic absorption. Such prescribed ointments include Neosporin, fusidic acid, chloramphenicol and mupirocin. More severe cases of impetigo, however (especially bullous impetigo), will likely require oral agents with better systemic bioavailability, such as cephalexin. Cases that do not resolve with initial antibiotic therapy or that require hospitalization may also be indicative of an MRSA infection, which would require consultation with a local microbiologist. Antibiotic treatment typically lasts 7–10 days, and although it is highly effective, some cases of methicillin-resistant S. aureus (MRSA) may require longer therapy depending on the severity of infection and how much it has spread. See also Impetigo contagiosa Skin lesion List of conditions caused by problems with junctional proteins References == External links ==
Compartment syndrome
Compartment syndrome is a condition in which increased pressure within one of the body's anatomical compartments results in insufficient blood supply to tissue within that space. There are two main types: acute and chronic. Compartments of the leg or arm are most commonly involved. Symptoms of acute compartment syndrome (ACS) can include severe pain, poor pulses, decreased ability to move, numbness, or a pale color of the affected limb. It is most commonly due to physical trauma such as a bone fracture (up to 75% of cases) or crush injury, but it can also be caused by acute exertion during sport. It can also occur after blood flow returns following a period of poor blood flow. Diagnosis is generally based upon a person's symptoms and may be supported by measurement of intracompartmental pressure before, during, and after activity. Normal compartment pressure is within 12–18 mmHg; anything greater than that is considered abnormal and would need treatment. Treatment is by surgery to open the compartment, completed in a timely manner. If not treated within six hours, permanent muscle or nerve damage can result. In chronic compartment syndrome (also known as chronic exertional compartment syndrome), there is generally pain with exercise but the pain dissipates once activity ceases. Other symptoms may include numbness. Symptoms typically resolve with rest. Common activities that trigger chronic compartment syndrome include running and biking. Generally, this condition does not result in permanent damage. Other conditions that may present similarly include stress fractures and tendinitis. Treatment may include physical therapy or—if that is not effective—surgery. Acute compartment syndrome occurs in about 3% of those who have a midshaft fracture of the forearm. Rates in other areas of the body and for chronic cases are unknown. The condition occurs more often in males and people under the age of 35, in line with the occurrence of trauma. Compartment syndrome was first described in 1881 by German surgeon Richard von Volkmann. Untreated, acute compartment syndrome can result in Volkmann's contracture. Signs and symptoms Compartment syndrome usually presents within a few hours of an inciting event, but may present anytime up to 48 hours after. The limb affected by compartment syndrome often has a firm, wooden feeling on deep palpation, and is usually described as feeling tight. There may also be decreased pulses in the limb along with associated paresthesia. Usually, the pain cannot be relieved by NSAIDs. Range of motion may be limited while the compartment pressure is high. In acute compartment syndrome, the pain will not be relieved with rest. In chronic exertional compartment syndrome, the pain will dissipate with rest. Acute There are five characteristic signs and symptoms related to acute compartment syndrome: pain, paraesthesia (reduced sensation), paralysis, pallor, and pulselessness. Pain and paresthesia are the early symptoms of compartment syndrome. Common: Pain – A person may experience pain disproportionate to the findings of the physical examination. This pain may not be relieved by strong analgesic medications. The pain is aggravated by passively stretching the muscle group within the compartment. However, such pain may disappear in the late stages of the compartment syndrome. The role of local anesthesia in delaying the diagnosis of compartment syndrome is still being debated. 
Paresthesia (altered sensation) – A person may complain of "pins & needles", numbness, and a tingling sensation. This may progress to loss of sensation (anesthesia) if no intervention is made. Uncommon: Paralysis – Paralysis of the limb is a rare, late finding. It may indicate a nerve or a muscular lesion. Pallor and pulselessness – A lack of pulse rarely occurs in patients, as pressures that cause compartment syndrome are often well below arterial pressures. Absent pulses only occur when there is arterial injury or during the late stages of the compartment syndrome, when compartment pressures are very high. Pallor can also result from arterial occlusion. Chronic The symptoms of chronic exertional compartment syndrome (CECS) may involve pain, tightness, cramps, weakness, and diminished sensation. This pain can occur for months, and in some cases over a period of years, and may be relieved by rest. Moderate weakness in the affected region can also be observed. These symptoms are brought on by exercise and consist of a sensation of extreme tightness in the affected muscles followed by a painful burning sensation if exercise is continued. After exercise ceases, the pressure in the compartment will decrease within a few minutes, relieving painful symptoms. Symptoms occur at a certain threshold of exercise, which varies from person to person but is rather consistent for a given individual. This threshold can range anywhere from 30 seconds of running to 2–3 miles of running. CECS most commonly occurs in the lower leg, with the anterior compartment being the most frequently affected compartment. Foot drop is a common symptom of CECS. Complications Failure to relieve the pressure can result in the death of tissues (necrosis) in the affected anatomical compartment, since the ability of blood to enter the smallest vessels in the compartment (capillary perfusion pressure) will fall. This, in turn, leads to progressively increasing oxygen deprivation of the tissues dependent on this blood supply. Without sufficient oxygen, the tissue will die. On a large scale, this can cause Volkmann's contracture in affected limbs, a permanent and irreversible process. Other reported complications include neurological deficits of the affected limb, gangrene, and chronic regional pain syndrome. Rhabdomyolysis and subsequent kidney failure are also possible complications. In some case series, rhabdomyolysis is reported in 23% of patients with ACS. Causes Acute Acute compartment syndrome (ACS) is a medical emergency that can develop after traumatic injuries, such as in automobile accidents or dynamic sporting activities – for example, a severe crush injury or an open or closed fracture of an extremity. Rarely, ACS can develop after a relatively minor injury, or due to another medical issue. The lower legs and the forearms are the most frequent sites affected by compartment syndrome. Other areas of the body, such as the thigh, buttock, hand, abdomen, and foot, can also be affected. The most common cause of acute compartment syndrome is fracture of a bone, most commonly the tibia. There is no difference between acute compartment syndrome originating from an open fracture and from a closed fracture. Leg compartment syndrome is found in 2% to 9% of tibial fractures. It is strongly related to fractures involving the tibial diaphysis as well as other sections of the tibia. Direct injury to blood vessels can lead to compartment syndrome by reducing the downstream blood supply to soft tissues. 
This reduction in blood supply can cause a series of inflammatory reactions that promote the swelling of the soft tissues. Such inflammation can be further worsened by reperfusion therapy. Because the fascia layer that defines the compartment of the limbs does not stretch, a small amount of bleeding into the compartment, or swelling of the muscles within the compartment, can cause the pressure to rise greatly. Intravenous drug injection, casts, prolonged limb compression, crush injuries, anabolic steroid use, vigorous exercise, and eschar from burns can also cause compartment syndrome. Patients on anticoagulant therapy have an increased risk of bleeding into a closed compartment.Abdominal compartment syndrome occurs when the intra-abdominal pressure exceeds 20 mmHg and abdominal perfusion pressure is less than 60 mmHg. This disease process is associated with organ dysfunction and multiple organ failures. There are many causes, which can be broadly grouped into three mechanisms: primary (internal bleeding and swelling); secondary (vigorous fluid replacement as an unintended complication of resuscitative medical treatment, leading to the acute formation of ascites and a rise in intra-abdominal pressure); and recurrent (compartment syndrome that has returned after the initial treatment of secondary compartment syndrome).Compartment syndrome after snake bite is rare. Its incidence varies from 0.2 to 1.36% as recorded in case reports. Compartment syndrome is more common in children possibly due to inadequate volume of the bodily fluid to dilute the snake venom. Increased white blood cell count of more than 1,650/μL and aspartate transaminase (AST) level of more than 33.5 U/L could increase the risk of developing compartment syndrome. Otherwise, those bitten by venomous snake should be observed for 48 hours to exclude the possibility of compartment syndrome.Acute compartment syndrome due to severe/uncontrolled hypothyroidism is rare. Chronic When compartment syndrome is caused by repetitive use of the muscles, it is known as chronic compartment syndrome (CCS). This is usually not an emergency, but the loss of circulation can cause temporary or permanent damage to nearby nerves and muscles. A subset of chronic compartment syndrome is chronic exertional compartment syndrome (CECS), often called exercise-induced compartment syndrome (EICS). Oftentimes, CECS is a diagnosis of exclusion. CECS of the leg is a condition caused by exercise which results in increased tissue pressure within an anatomical compartment due to an acute increase in muscle volume – as much as 20% is possible during exercise. When this happens, pressure builds up in the tissues and muscles causing tissue ischemia. An increase in muscle weight will reduce the compartment volume of the surrounding fascial borders and result in an increased compartment pressure. An increase in the pressure of the tissue can force fluid to leak into the interstitial space (extracellular fluid), leading to a disruption of the micro-circulation of the leg. This condition occurs commonly in the lower leg and various other locations within the body, such as the foot or forearm. CECS can be seen in athletes who train rigorously in activities that involve constant repetitive actions or motions. Pathophysiology In a normal human body, blood flow from the arterial system (higher pressure) to venous system (lower pressure) requires a pressure gradient. When this pressure gradient is diminished, blood flow from the artery to the vein is reduced. 
This causes a backup of blood and excessive fluid to leak from the capillary wall into spaces between the soft tissue cells, causing swelling of the extracellular space and a rise in intracompartmental pressure. This swelling of the soft tissues surrounding the blood vessels compresses the blood and lymphatic vessels further, causing more fluid to enter the extracellular spaces, leading to additional compression. The pressure continues to increase due to the non-compliant nature of the fascia containing the compartment. This worsening cycle can eventually lead to a lack of sufficient oxygen in the soft tissues (tissue ischemia) and tissue death (necrosis). Tingling and abnormal sensation (paraesthesia) can begin as early as 30 minutes from the start of tissue ischemia, and permanent damage can occur as early as 12 hours from the onset of the inciting injury. Diagnosis Compartment syndrome is a clinical diagnosis, meaning that a medical provider's examination and the patient's history usually give the diagnosis. Apart from the typical signs and symptoms, measurement of intracompartmental pressure can also be important for diagnosis. Using a combination of clinical diagnosis and serial intracompartmental pressure measurements increases both the sensitivity and specificity of diagnosing compartment syndrome. A transducer connected to a catheter is inserted 5 cm into the zone of injury. A compartment pressure within 30 mmHg of the diastolic blood pressure (a delta pressure of 30 mmHg or less), whether the person is conscious or unconscious, is associated with compartment syndrome, and fasciotomy is indicated in that case. For patients with low blood pressure (hypotension), a diastolic pressure only 20 mmHg higher than the intracompartmental pressure is associated with compartment syndrome. Noninvasive methods of diagnosis, such as near-infrared spectroscopy (NIRS), which uses sensors on the skin, show promise in controlled settings. However, with limited data in uncontrolled settings, clinical presentation and intracompartmental pressure remain the gold standard for diagnosis. Chronic exertional compartment syndrome is usually a diagnosis of exclusion, with the hallmark finding being absence of symptoms at rest. Measurement of intracompartmental pressures during symptom reproduction (usually immediately following running) is the most useful test. Imaging studies (X-ray, CT, MRI) can be useful in ruling out other more common diagnoses instead of confirming the diagnosis of compartment syndrome. Additionally, MRI has been shown to be effective in diagnosing chronic exertional compartment syndrome. The average duration of symptoms prior to diagnosis is 28 months. Treatment Acute Any external compression (tourniquet, orthopedic casts or dressings applied on the affected limb) should be removed. Cutting the cast will reduce the intracompartmental pressure by 65%, with a further 10 to 20% pressure reduction once the padding is cut. After removal of the external compression, the limb should be placed at the level of the heart. The vital signs of the patient should be closely monitored. If the clinical condition does not improve, then fasciotomy is indicated to decompress the compartments. An incision large enough to decompress all the compartments is necessary. This surgical procedure is performed inside an operating theater under general or local anesthesia. The timing of the fasciotomy wound closure is debated. Some surgeons suggest wound closure should be done seven days after fasciotomy. 
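To make the delta-pressure arithmetic described in the Diagnosis section above concrete, the following is a minimal illustrative sketch in Python. The function names are hypothetical, the thresholds (30 mmHg in general, 20 mmHg in hypotension) are simply those stated in this article, and the snippet is intended only to show the calculation, not to serve as clinical software or guidance.

```python
# Illustrative only: encodes the threshold arithmetic described in the
# Diagnosis section above. Not clinical software or guidance.

def delta_pressure(diastolic_bp_mmhg, compartment_pressure_mmhg):
    """Difference between diastolic blood pressure and intracompartmental pressure."""
    return diastolic_bp_mmhg - compartment_pressure_mmhg


def pressure_suggests_compartment_syndrome(diastolic_bp_mmhg,
                                           compartment_pressure_mmhg,
                                           hypotensive=False):
    """Apply the delta-pressure thresholds mentioned in the article:
    30 mmHg or less in general, 20 mmHg or less in hypotensive patients."""
    threshold_mmhg = 20 if hypotensive else 30
    return delta_pressure(diastolic_bp_mmhg, compartment_pressure_mmhg) <= threshold_mmhg


# Example: diastolic pressure 70 mmHg, measured compartment pressure 45 mmHg
print(delta_pressure(70, 45))                          # 25
print(pressure_suggests_compartment_syndrome(70, 45))  # True, since 25 <= 30
```

In this example, a delta pressure of 25 mmHg falls under the 30 mmHg threshold quoted above, which is the situation in which the article states that fasciotomy is indicated.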
Multiple techniques exist for closure of the surgical site including vacuum-assisted and shoelace. Both techniques are acceptable methods for closure, but the vacuum-assisted technique has led to longer hospitalization time. A skin graft may be required to close the wound, which would complicate the treatment with a much longer hospitalization stay. Chronic Treatment for chronic exertional compartment syndrome can include decreasing or subsiding exercise and/or exacerbating activities, massage, non-steroidal anti-inflammatory medication, and physiotherapy. Chronic compartment syndrome in the lower leg can be treated conservatively or surgically. Conservative treatment includes rest, anti-inflammatory medications, and manual decompression. Warming the affected area with a heating pad may help to loosen the fascia prior to exercise. Icing the area may result in further constriction of the fascia and is not recommended before exercise. The use of devices that apply external pressure to the area, such as splints, casts, and tight wound dressings, should be avoided. If symptoms persist after conservative treatment or if an individual does not wish to give up the physical activities which bring on symptoms, compartment syndrome can be treated by a surgery known as a fasciotomy. A US military study conducted in 2012 found that teaching individuals with lower leg chronic exertional compartment syndrome to change their running style to a forefoot running technique abated symptoms in those with symptoms limited to the anterior compartment. Running with a forefoot strike limits use of the tibialis anterior muscle which may explain the relief in symptoms in those with anterior compartment syndrome. Hyperbaric oxygen therapy has been suggested by case reports – though as of 2011 not proven in randomized control trials – to be an effective adjunctive therapy for crush injury, compartment syndrome, and other acute traumatic ischemias, by improving wound healing and reducing the need for repetitive surgery. Prognosis A mortality rate of 47% has been reported for acute compartment syndrome of the thigh. According to one study the rate of fasciotomy for acute compartment syndrome varied from 2% to 24%. This is due to uncertainty and differences in labeling a condition as acute compartment syndrome. The most significant prognostic factor in people with acute compartment syndrome is time to diagnosis and subsequent fasciotomy. In people with a missed or late diagnosis of acute compartment syndrome, limb amputation may be necessary for survival. Following a fasciotomy, some symptoms may be permanent depending on factors such as which compartment, time until fasciotomy, and muscle necrosis. Muscle necrosis can occur quickly, within 3 hours of original injury in some studies. Fasciotomy of the lateral compartment of the leg may lead to symptoms due to the nerves and muscles in that compartment. These may include foot drop, numbness along leg, numbness of big toe, pain, and loss of foot eversion. Epidemiology In one case series of 164 people with acute compartment syndrome, 69% of the cases had an associated fracture. The authors of that article also calculated an annual incidence of acute compartment syndrome of 1 to 7.3 per 100,000. There are significant differences in the incidence of acute compartment syndrome based on age and gender in the setting of trauma. Men are ten times more likely than women to develop ACS. The mean age for ACS in men is 30 years while the mean age is 44 years for women. 
Acute compartment syndrome may occur more often in individuals less than 35 years old due to increased muscle mass within the compartments. The anterior compartment of the leg is the most common site for ACS. See also Abdominal compartment syndrome Escharotomy Ischemia-reperfusion injury of the appendicular musculoskeletal system References External links Compartment Syndrome of the Forearm – Orthopaedia.com Chronic Exertional Compartment Syndrome detailed at MayoClinic.com Compartment syndrome at the Duke University Health Systems Orthopedics program 05-062a. at Merck Manual of Diagnosis and Therapy Home Edition Compartment syndrome American Association of Orthopaedic Surgeons Compartment Syndrome
Secondary hyperparathyroidism
Secondary hyperparathyroidism is the medical condition of excessive secretion of parathyroid hormone (PTH) by the parathyroid glands in response to hypocalcemia (low blood calcium levels), with resultant hyperplasia of these glands. This disorder is primarily seen in patients with chronic kidney failure. It is sometimes abbreviated "SHPT" in medical literature. Signs and symptoms Bone and joint pain are common, as are limb deformities. The elevated PTH also has pleiotropic effects on the blood, immune system, and neurological system. Cause Chronic kidney failure is the most common cause of secondary hyperparathyroidism. Failing kidneys do not convert enough vitamin D to its active form, and they do not adequately excrete phosphate. When this happens, insoluble calcium phosphate forms in the body and removes calcium from the circulation. Both processes lead to hypocalcemia and hence secondary hyperparathyroidism. Secondary hyperparathyroidism can also result from malabsorption (chronic pancreatitis, small bowel disease, malabsorption-dependent bariatric surgery), in which the fat-soluble vitamin D cannot be absorbed. This leads to hypocalcemia and a subsequent increase in parathyroid hormone secretion in an attempt to increase the serum calcium levels. A few other causes can stem from inadequate dietary intake of calcium, a vitamin D deficiency, or steatorrhea. Diagnosis The PTH is elevated due to decreased levels of calcium or 1,25-dihydroxy-vitamin D3. It is usually seen in cases of chronic kidney disease or defective calcium receptors on the surface of parathyroid glands. Treatment If the underlying cause of the hypocalcemia can be addressed, the hyperparathyroidism will resolve. In people with chronic kidney failure, treatment consists of dietary restriction of phosphorus; supplements containing an active form of vitamin D, such as calcitriol, doxercalciferol, or paricalcitol; and phosphate binders, which are either calcium-based or non-calcium-based. Extended-release calcifediol was recently approved by the FDA as a treatment for secondary hyperparathyroidism (SHPT) in adults with stage 3 or 4 chronic kidney disease (CKD) and low vitamin D blood levels (25-hydroxyvitamin D less than 30 ng/mL). It can help treat SHPT by increasing vitamin D levels and lowering parathyroid hormone (PTH). It is not indicated for people with stage 5 CKD or on dialysis. In the treatment of secondary hyperparathyroidism due to chronic kidney disease in people on dialysis, calcimimetics do not appear to affect the risk of early death. They do decrease the need for a parathyroidectomy but cause more issues with low blood calcium levels and vomiting. Most people with hyperparathyroidism secondary to chronic kidney disease will improve after renal transplantation, but many will continue to have a degree of residual hyperparathyroidism (tertiary hyperparathyroidism) post-transplant with an associated risk of bone loss, etc. Prognosis If left untreated, the disease will progress to tertiary hyperparathyroidism, where correction of the underlying cause will not stop excess PTH secretion, i.e. parathyroid gland hypertrophy becomes irreversible. In contrast with secondary hyperparathyroidism, tertiary hyperparathyroidism is associated with hypercalcemia rather than hypocalcemia. See also Primary hyperparathyroidism Tertiary hyperparathyroidism References == External links ==
Filariasis
Filariasis is a parasitic disease caused by an infection with roundworms of the Filarioidea type. These are spread by blood-feeding insects such as black flies and mosquitoes. They belong to the group of diseases called helminthiases. These parasites exist in the wild in subtropical parts of southern Asia, Africa, the South Pacific, and parts of South America. One does not acquire them in temperate areas like Europe or the United States. Eight known filarial worms have humans as a definitive host. These are divided into three groups according to the part of the body they affect: Lymphatic filariasis is caused by the worms Wuchereria bancrofti, Brugia malayi, and Brugia timori. These worms occupy the lymphatic system, including the lymph nodes; in chronic cases, these worms lead to the syndrome of elephantiasis. Subcutaneous filariasis is caused by Loa loa (the eye worm), Mansonella streptocerca, and Onchocerca volvulus. These worms occupy the layer just under the skin. L. loa causes Loa loa filariasis, while O. volvulus causes river blindness. Serous cavity filariasis is caused by the worms Mansonella perstans and Mansonella ozzardi, which occupy the serous cavity of the abdomen. Dirofilaria immitis, the dog heartworm, rarely infects humans. The adult worms, which usually stay in one tissue, release early larval forms known as microfilariae into the person's blood. These circulating microfilariae can be taken up during a blood meal by an insect vector; in the vector, they develop into infective larvae that can be spread to another person. Individuals infected by filarial worms may be described as either "microfilaraemic" or "amicrofilaraemic", depending on whether microfilariae can be found in their peripheral blood. Filariasis is diagnosed in microfilaraemic cases primarily through direct observation of microfilariae in the peripheral blood. Occult filariasis is diagnosed in amicrofilaraemic cases based on clinical observations and, in some cases, by finding a circulating antigen in the blood. Signs and symptoms The most spectacular symptom of lymphatic filariasis is elephantiasis – edema with thickening of the skin and underlying tissues – which was the first disease discovered to be transmitted by mosquito bites. Elephantiasis results when the parasites lodge in the lymphatic system. Elephantiasis affects mainly the lower extremities, while the ears, mucous membranes, and amputation stumps are affected less frequently. However, different species of filarial worms tend to affect different parts of the body; Wuchereria bancrofti can affect the legs, arms, vulva, breasts, and scrotum (causing hydrocele formation), while Brugia timori rarely affects the genitals. Those who develop the chronic stages of elephantiasis are usually free from microfilariae (amicrofilaraemic), and often have adverse immunological reactions to the microfilariae, as well as the adult worms. The subcutaneous worms present with rashes, urticarial papules, and arthritis, as well as hyper- and hypopigmented macules. Onchocerca volvulus manifests itself in the eyes, causing "river blindness" (onchocerciasis), one of the leading causes of blindness in the world. Serous cavity filariasis presents with symptoms similar to subcutaneous filariasis, in addition to abdominal pain, because these worms are also deep-tissue dwellers. Cause Human filarial nematode worms have complicated life cycles, which primarily consist of five stages. 
After the male and female worms mate, the female gives birth to live microfilariae by the thousands. The microfilariae are taken up by the vector insect (intermediate host) during a blood meal. In the intermediate host, the microfilariae molt and develop into third-stage (infective) larvae. Upon taking another blood meal, the vector insect, such as Culex pipiens, injects the infectious larvae into the dermis layer of the skin. After about one year, the larvae molt through two more stages, maturing into the adult worms. Diagnosis Filariasis is usually diagnosed by identifying microfilariae on Giemsa-stained thin and thick blood film smears, using the "gold standard" known as the finger prick test. The finger prick test draws blood from the capillaries of the fingertip; larger veins can be used for blood extraction, but strict windows of the time of day must be observed. Blood must be drawn at appropriate times, which reflect the feeding activities of the vector insects. For example, W. bancrofti's vector is a mosquito, so night is the preferred time for blood collection; Loa loa's vector is the deer fly, so daytime collection is preferred. This method of diagnosis is only relevant to microfilariae that use the blood as transport from the lungs to the skin. Some filarial worms, such as M. streptocerca and O. volvulus, produce microfilariae that do not use the blood; they reside in the skin only. For these worms, diagnosis relies upon skin snips and can be carried out at any time. Concentration methods Various concentration methods are applied: membrane filter, Knott's concentration method, and sedimentation technique. Polymerase chain reaction (PCR) and antigenic assays, which detect circulating filarial antigens, are also available for making the diagnosis. The latter are particularly useful in amicrofilaraemic cases. Spot tests for antigen are far more sensitive, and allow the test to be done at any time, rather than only in the late hours. Lymph node aspirate and chylous fluid may also yield microfilariae. Medical imaging, such as CT or MRI, may reveal the "filarial dance sign" in the chylous fluid; X-ray tests can show calcified adult worms in lymphatics. The DEC provocation test is performed to obtain satisfactory numbers of parasites in daytime samples. Xenodiagnosis is now obsolete, and eosinophilia is a nonspecific primary sign. Treatment The recommended treatment for people outside the United States is albendazole combined with ivermectin. A combination of diethylcarbamazine and albendazole is also effective. Side effects of the drugs include nausea, vomiting, and headaches. All of these treatments are microfilaricides; they have no effect on the adult worms. While the drugs are critical for treatment of the individual, proper hygiene is also required. There is good evidence that albendazole alone, or the addition of albendazole to diethylcarbamazine or ivermectin, makes minimal difference in clearing microfilariae or adult worms from blood circulation. Diethylcarbamazine-medicated salt is effective in controlling lymphatic filariasis while maintaining its coverage at 90% in the community for six months. Different trials have attempted to use the known drugs to their maximum capacity in the absence of new drugs. In a study from India, a particular formulation of albendazole was shown to have better anti-filarial efficacy than albendazole itself. In 2003, the common antibiotic doxycycline was suggested for treating elephantiasis. 
Filarial parasites have symbiotic bacteria in the genus Wolbachia, which live inside the worm and seem to play a major role in both its reproduction and the development of the disease. This drug has shown signs of inhibiting the reproduction of the bacteria, further inducing sterility. Clinical trials in June 2005 by the Liverpool School of Tropical Medicine reported an eight-week course almost eliminated microfilaraemia. Society and culture Research teams In 2015 William C. Campbell and Satoshi Ōmura were co-awarded half of that years Nobel prize in Physiology or Medicine for the discovery of the drug avermectin, which, in the further developed form ivermectin, has decreased the occurrence of lymphatic filariasis. Prospects for elimination Filarial diseases in humans offer prospects for elimination by means of vermicidal treatment. If the human link in the chain of infection can be broken, then notionally the disease could be wiped out in a season. In practice it is not quite so simple, and there are complications in that multiple species overlap in certain regions and double infections are common. This creates difficulties for routine mass treatment because people with onchocerciasis in particular react badly to treatment for lymphatic filariasis. Other animals Filariasis can also affect domesticated animals, such as cattle, sheep, and dogs. Cattle Verminous hemorrhagic dermatitis is a clinical disease in cattle due to Parafilaria bovicola. Intradermal onchocerciasis of cattle results in losses in leather due to Onchocerca dermata, O. ochengi, and O. dukei. O. ochengi is closely related to human O. volvulus (river blindness), sharing the same vector, and could be useful in human medicine research. Stenofilaria assamensis and others cause different diseases in Asia, in cattle and zebu. Horses "Summer bleeding" is hemorrhagic subcutaneous nodules in the head and upper forelimbs, caused by Parafilaria multipapillosa (North Africa, Southern and Eastern Europe, Asia and South America). Dogs Heart filariasis is caused by Dirofilaria immitis. See also Ascariasis Eradication of infectious diseases Helminthiasis List of parasites (human) Neglected tropical diseases References Further reading "Special issue", Indian Journal of Urology, 21 (1), January–June 2005 "Filariasis". Therapeutics in Dermatology. June 2012. Retrieved 24 July 2012. External links Page from the "Merck Veterinary Manual" on "Parafilaria multipapillosa" in horses
Papulosquamous disorder
A papulosquamous disorder is a condition which presents with both papules and scales, or both scaly papules and plaques. Examples include psoriasis, lichen planus, and pityriasis rosea. See also List of cutaneous conditions References Further reading Norman RA, Blanco PM (2003). "Papulosquamous diseases in the elderly". Dermatologic Therapy. 16 (3): 231–42. doi:10.1046/j.1529-8019.2003.01633.x. PMID 14510880. http://www.emedicine.com/derm/index.shtml#papulosquamous == External links ==
Brain ischemia
Brain ischemia is a condition in which there is insufficient blood flow to the brain to meet metabolic demand. This leads to poor oxygen supply (cerebral hypoxia) and thus to the death of brain tissue, known as cerebral infarction or ischemic stroke. It is a sub-type of stroke along with subarachnoid hemorrhage and intracerebral hemorrhage. Ischemia leads to alterations in brain metabolism, reduction in metabolic rates, and energy crisis. There are two types of ischemia: focal ischemia, which is confined to a specific region of the brain; and global ischemia, which encompasses wide areas of brain tissue. The main symptoms of brain ischemia involve impairments in vision, body movement, and speaking. The causes of brain ischemia vary from sickle cell anemia to congenital heart defects. Symptoms of brain ischemia can include unconsciousness, blindness, problems with coordination, and weakness in the body. Other effects that may result from brain ischemia are stroke, cardiorespiratory arrest, and irreversible brain damage. An interruption of blood flow to the brain for more than 10 seconds causes unconsciousness, and an interruption in flow for more than a few minutes generally results in irreversible brain damage. In 1974, Hossmann and Zimmermann demonstrated that mammalian brains can at least partially recover from ischemia induced for up to an hour. Accordingly, this discovery raised the possibility of intervening after brain ischemia before the damage becomes irreversible. Symptoms and signs The symptoms of brain ischemia reflect the anatomical region undergoing blood and oxygen deprivation. Ischemia within the arteries branching from the internal carotid artery may result in symptoms such as blindness in one eye, weakness in one arm or leg, or weakness in one entire side of the body. Ischemia within the arteries branching from the vertebral arteries in the back of the brain may result in symptoms such as dizziness, vertigo, double vision, or weakness on both sides of the body. Other symptoms include difficulty speaking, slurred speech, and the loss of coordination. The symptoms of brain ischemia range from mild to severe. Further, symptoms can last from a few seconds to a few minutes or extended periods of time. If the brain becomes damaged irreversibly and infarction occurs, the symptoms may be permanent. Similar to cerebral hypoxia, severe or prolonged brain ischemia will result in unconsciousness, brain damage or death, mediated by the ischemic cascade. Multiple cerebral ischemic events may lead to subcortical ischemic depression, also known as vascular depression. This condition is most commonly seen in elderly depressed patients. Late onset depression is increasingly seen as a distinct sub-type of depression, and can be detected with an MRI. Causes Brain ischemia has been linked to a variety of diseases or abnormalities. Individuals with sickle cell anemia, compressed blood vessels, ventricular tachycardia, plaque buildup in the arteries, blood clots, extremely low blood pressure as a result of heart attack, and congenital heart defects have a higher predisposition to brain ischemia in comparison to the average population. Sickle cell anemia may cause brain ischemia associated with the irregularly shaped blood cells. Sickle-shaped blood cells clot more easily than normal blood cells, impeding blood flow to the brain. Compression of blood vessels may also lead to brain ischemia, by blocking the arteries that carry oxygen to the brain. 
Tumors are one cause of blood vessel compression. Ventricular tachycardia represents a series of irregular heartbeats that may cause the heart to completely shut down, resulting in cessation of oxygen flow. Further, irregular heartbeats may result in the formation of blood clots, thus leading to oxygen deprivation to all organs. Blockage of arteries due to plaque buildup may also result in ischemia. Even a small amount of plaque buildup can result in the narrowing of passageways, causing that area to become more prone to blood clots. Large blood clots can also cause ischemia by blocking blood flow. A heart attack can also cause brain ischemia due to the correlation that exists between heart attack and low blood pressure. Extremely low blood pressure usually represents the inadequate oxygenation of tissues. Untreated heart attacks may slow blood flow enough that blood may start to clot and prevent the flow of blood to the brain or other major organs. Extremely low blood pressure can also result from drug overdose and reactions to drugs. Therefore, brain ischemia can result from events other than heart attacks. Congenital heart defects may also cause brain ischemia due to the lack of appropriate artery formation and connection. People with congenital heart defects may also be prone to blood clots. Other pathological events that may result in brain ischemia include cardiorespiratory arrest, stroke, and severe irreversible brain damage. Recently, Moyamoya disease has also been identified as a potential cause of brain ischemia. Moyamoya disease is an extremely rare cerebrovascular condition that limits blood circulation to the brain, consequently leading to oxygen deprivation. Pathophysiology During brain ischemia, the brain cannot perform aerobic metabolism due to the loss of oxygen and substrate. The brain is not able to switch to anaerobic metabolism and, because it does not have any long-term energy stores, the levels of adenosine triphosphate (ATP) drop rapidly, approaching zero within 4 minutes. In the absence of biochemical energy, cells begin to lose the ability to maintain electrochemical gradients. Consequently, there is a massive influx of calcium into the cytosol, a massive release of glutamate from synaptic vesicles, lipolysis, calpain activation, and the arrest of protein synthesis. Additionally, removal of metabolic wastes is slowed. The interruption of blood flow to the brain for ten seconds results in the immediate loss of consciousness. The interruption of blood flow for twenty seconds results in the stopping of electrical activity. An area called a penumbra may result, wherein neurons do not receive enough blood to communicate but do receive sufficient oxygenation to avoid cell death for a short period of time. Diagnosis Classification The broad term "stroke" can be divided into three categories: brain ischemia, subarachnoid hemorrhage and intracerebral hemorrhage. Brain ischemia can be further subdivided, by cause, into thrombotic, embolic, and hypoperfusion. Thrombotic and embolic are generally focal or multifocal in nature while hypoperfusion affects the brain globally. Focal brain ischemia Focal brain ischemia occurs when a blood clot has occluded a cerebral vessel. Focal brain ischemia reduces blood flow to a specific brain region, increasing the risk of cell death in that particular area. It can be caused by either thrombosis or embolism. Global brain ischemia Global brain ischemia occurs when blood flow to the brain is halted or drastically reduced. 
This is commonly caused by cardiac arrest. If sufficient circulation is restored within a short period of time, symptoms may be transient. However, if a significant amount of time passes before restoration, brain damage may be permanent. While reperfusion may be essential to protecting as much brain tissue as possible, it may also lead to reperfusion injury. Reperfusion injury is classified as the damage that ensues after restoration of blood supply to ischemic tissue. Due to the different susceptibility to ischemia of various brain regions, global brain ischemia may cause focal brain infarction. The cerebral cortex and striatum are more susceptible than the thalamus, and the thalamus in turn is more sensitive than the brainstem. Partial cerebral cortex infarction from global brain ischemia typically manifests as watershed stroke. Biomarker Use of biomarkers is one method that has been evaluated to predict the risk of stroke, diagnose stroke and its causes, predict stroke severity and outcome, and guide prevention therapy. Blood biomarkers: Many protein and RNA biomarkers that have been identified are connected to ischemic stroke pathophysiology. These include central nervous system tissue injury biomarkers (S100B, glial fibrillary acidic protein, enolase 2, and antibodies associated with anti-NMDA receptor encephalitis), inflammatory biomarkers (C-reactive protein, interleukin 6, tumor necrosis factor α, VCAM-1), coagulation/thrombosis biomarkers (fibrinogen, D-dimer, von Willebrand factor), and other biomarkers (PARK7, B-type neurotrophic growth factor). Treatment Alteplase (t-PA) is an effective medication for acute ischemic stroke. When given within 3 hours, treatment with t-PA significantly improves the probability of a favourable outcome versus treatment with placebo. The outcome of brain ischemia is influenced by the quality of subsequent supportive care. Systemic blood pressure should be maintained at normal levels (or slightly above) so that cerebral blood flow is restored. Also, hypoxaemia and hypercapnia should be avoided. Seizures can induce more damage; accordingly, anticonvulsants should be prescribed and, should a seizure occur, aggressive treatment should be undertaken. Hyperglycaemia should also be avoided during brain ischemia. Management When someone presents with an ischemic event, treatment of the underlying cause is critical for prevention of further episodes. Anticoagulation with warfarin or heparin may be used if the patient has atrial fibrillation. Operative procedures such as carotid endarterectomy and carotid stenting may be performed if the patient has a significant amount of plaque in the carotid arteries associated with the local ischemic events. Research Therapeutic hypothermia has been attempted as a way to improve outcomes after brain ischemia. This procedure was suggested to be beneficial based on its effects after cardiac arrest. Evidence supporting the use of therapeutic hypothermia after brain ischemia, however, is limited. A closely related disease to brain ischemia is brain hypoxia. Brain hypoxia is the condition in which there is a decrease in the oxygen supply to the brain even in the presence of adequate blood flow. If hypoxia lasts for long periods of time, coma, seizures, and even brain death may occur. Symptoms of brain hypoxia are similar to those of ischemia and include inattentiveness, poor judgment, memory loss, and a decrease in motor coordination. Potential causes of brain hypoxia are suffocation, carbon monoxide poisoning, severe anemia, and use of drugs such as cocaine and other amphetamines. 
Other causes associated with brain hypoxia include drowning, strangling, choking, cardiac arrest, head trauma, and complications during general anesthesia. Treatment strategies for brain hypoxia vary depending on the original cause of injury, primary and/or secondary. See also Mechanism of anoxic depolarization in the brain Watershed stroke References Bibliography Gusev, Eugene I.; Skvortsova, Veronica I. (2003). Brain ischemia. New York: Kluwer Academic/Plenum Publishers. ISBN 0-306-47694-0. Further reading Chang, Steven; Doty, James; Skirboll, Stephen; Steinberg, Gary. Cerebral ischemia . cgi.stanford.edu. URL last accessed February 26, 2006. == External links ==
Diarrhea
Diarrhea, also spelled diarrhoea, is the condition of having at least three loose, liquid, or watery bowel movements each day. It often lasts for a few days and can result in dehydration due to fluid loss. Signs of dehydration often begin with loss of the normal stretchiness of the skin and irritable behaviour. This can progress to decreased urination, loss of skin color, a fast heart rate, and a decrease in responsiveness as it becomes more severe. Loose but non-watery stools in babies who are exclusively breastfed, however, are normal. The most common cause is an infection of the intestines due to either a virus, bacterium, or parasite—a condition also known as gastroenteritis. These infections are often acquired from food or water that has been contaminated by feces, or directly from another person who is infected. The three types of diarrhea are: short-duration watery diarrhea, short-duration bloody diarrhea, and persistent diarrhea (lasting more than two weeks, which can be either watery or bloody). The short-duration watery diarrhea may be due to cholera, although this is rare in the developed world. If blood is present, it is also known as dysentery. A number of non-infectious causes can result in diarrhea. These include lactose intolerance, irritable bowel syndrome, non-celiac gluten sensitivity, celiac disease, inflammatory bowel disease such as ulcerative colitis, hyperthyroidism, bile acid diarrhea, and a number of medications. In most cases, stool cultures to confirm the exact cause are not required. Diarrhea can be prevented by improved sanitation, clean drinking water, and hand washing with soap. Breastfeeding for at least six months and vaccination against rotavirus are also recommended. Oral rehydration solution (ORS)—clean water with modest amounts of salts and sugar—is the treatment of choice. Zinc tablets are also recommended. These treatments have been estimated to have saved 50 million children in the past 25 years. When people have diarrhea, it is recommended that they continue to eat healthy food and that babies continue to be breastfed. If commercial ORS is not available, homemade solutions may be used. In those with severe dehydration, intravenous fluids may be required. Most cases, however, can be managed well with fluids by mouth. Antibiotics, while rarely used, may be recommended in a few cases, such as those who have bloody diarrhea and a high fever, those with severe diarrhea following travelling, and those who grow specific bacteria or parasites in their stool. Loperamide may help decrease the number of bowel movements but is not recommended in those with severe disease. About 1.7 to 5 billion cases of diarrhea occur per year. It is most common in developing countries, where young children get diarrhea on average three times a year. Total deaths from diarrhea are estimated at 1.53 million in 2019—down from 2.9 million in 1990. In 2012, it was the second most common cause of deaths in children younger than five (0.76 million or 11%). Frequent episodes of diarrhea are also a common cause of malnutrition and the most common cause in those younger than five years of age. Other long-term problems that can result include stunted growth and poor intellectual development. 
Definition Diarrhea is defined by the World Health Organization as having three or more loose or liquid stools per day, or as having more stools than is normal for that person.Acute diarrhea is defined as an abnormally frequent discharge of semisolid or fluid fecal matter from the bowel, lasting less than 14 days, by World Gastroenterology Organization. Acute diarrhea that is watery may be known as AWD (Acute Watery Diarrhoea.) Secretory Secretory diarrhea means that there is an increase in the active secretion, or there is an inhibition of absorption. There is little to no structural damage. The most common cause of this type of diarrhea is a cholera toxin that stimulates the secretion of anions, especially chloride ions (Cl–). Therefore, to maintain a charge balance in the gastrointestinal tract, sodium (Na+) is carried with it, along with water. In this type of diarrhea intestinal fluid secretion is isotonic with plasma even during fasting. It continues even when there is no oral food intake. Osmotic Osmotic diarrhea occurs when too much water is drawn into the bowels. If a person drinks solutions with excessive sugar or excessive salt, these can draw water from the body into the bowel and cause osmotic diarrhea. Osmotic diarrhea can also result from maldigestion, e.g. pancreatic disease or coeliac disease, in which the nutrients are left in the lumen to pull in water. Or it can be caused by osmotic laxatives (which work to alleviate constipation by drawing water into the bowels). In healthy individuals, too much magnesium or vitamin C or undigested lactose can produce osmotic diarrhea and distention of the bowel. A person who has lactose intolerance can have difficulty absorbing lactose after an extraordinarily high intake of dairy products. In persons who have fructose malabsorption, excess fructose intake can also cause diarrhea. High-fructose foods that also have a high glucose content are more absorbable and less likely to cause diarrhea. Sugar alcohols such as sorbitol (often found in sugar-free foods) are difficult for the body to absorb and, in large amounts, may lead to osmotic diarrhea. In most of these cases, osmotic diarrhea stops when the offending agent, e.g. milk or sorbitol, is stopped. Exudative Exudative diarrhea occurs with the presence of blood and pus in the stool. This occurs with inflammatory bowel diseases, such as Crohns disease or ulcerative colitis, and other severe infections such as E. coli or other forms of food poisoning. Inflammatory Inflammatory diarrhea occurs when there is damage to the mucosal lining or brush border, which leads to a passive loss of protein-rich fluids and a decreased ability to absorb these lost fluids. Features of all three of the other types of diarrhea can be found in this type of diarrhea. It can be caused by bacterial infections, viral infections, parasitic infections, or autoimmune problems such as inflammatory bowel diseases. It can also be caused by tuberculosis, colon cancer, and enteritis. Dysentery If there is blood visible in the stools, it is also known as dysentery. The blood is a trace of an invasion of bowel tissue. Dysentery is a symptom of, among others, Shigella, Entamoeba histolytica, and Salmonella. Health effects Diarrheal disease may have a negative impact on both physical fitness and mental development. "Early childhood malnutrition resulting from any cause reduces physical fitness and work productivity in adults," and diarrhea is a primary cause of childhood malnutrition. 
Further, evidence suggests that diarrheal disease has significant impacts on mental development and health; it has been shown that, even when controlling for helminth infection and early breastfeeding, children who had experienced severe diarrhea had significantly lower scores on a series of tests of intelligence. Diarrhea can cause electrolyte imbalances, kidney impairment, dehydration, and defective immune system responses. When oral drugs are administered, diarrhea can prevent them from producing their therapeutic effect, because the medication may travel too quickly through the digestive system, limiting the time available for it to be absorbed. Clinicians try to manage medication-related diarrhea by reducing the dosage, changing the dosing schedule, discontinuing the drug, and rehydrating the patient, but these interventions are often not effective. Diarrhea can have a profound effect on quality of life because fecal incontinence is one of the leading factors for placing older adults in long-term care facilities (nursing homes). Causes In the latter stages of human digestion, ingested materials are inundated with water and digestive fluids such as gastric acid, bile, and digestive enzymes in order to break them down into their nutrient components, which are then absorbed into the bloodstream via the intestinal tract in the small intestine. Prior to defecation, the large intestine reabsorbs the water and other digestive solvents in the waste product in order to maintain proper hydration and overall equilibrium. Diarrhea occurs when the large intestine is prevented, for any number of reasons, from sufficiently absorbing the water or other digestive fluids from fecal matter, resulting in a liquid, or "loose", bowel movement. Acute diarrhea is most commonly due to viral gastroenteritis with rotavirus, which accounts for 40% of cases in children under five. In travelers, however, bacterial infections predominate. Various toxins, such as those responsible for mushroom poisoning, and drugs can also cause acute diarrhea. Chronic diarrhea can be part of the presentation of a number of chronic medical conditions affecting the intestine. Common causes include ulcerative colitis, Crohn's disease, microscopic colitis, celiac disease, irritable bowel syndrome, and bile acid malabsorption. Infections There are many causes of infectious diarrhea, which include viruses, bacteria and parasites. Infectious diarrhea is frequently referred to as gastroenteritis. Norovirus is the most common cause of viral diarrhea in adults, but rotavirus is the most common cause in children under five years old. Adenovirus types 40 and 41, and astroviruses, cause a significant number of infections. Shiga toxin-producing Escherichia coli, such as E. coli O157:H7, are the most common cause of infectious bloody diarrhea in the United States. Campylobacter spp. are a common cause of bacterial diarrhea, but infections by Salmonella spp., Shigella spp. and some strains of Escherichia coli are also frequent causes. In the elderly, particularly those who have been treated with antibiotics for unrelated infections, a toxin produced by Clostridioides difficile often causes severe diarrhea. Parasites, particularly protozoa (e.g., Cryptosporidium spp., Giardia spp., Entamoeba histolytica, Blastocystis spp., and Cyclospora cayetanensis), are frequently the cause of diarrhea that involves chronic infection.
The broad-spectrum antiparasitic agent nitazoxanide has shown efficacy against many diarrhea-causing parasites.Other infectious agents, such as parasites or bacterial toxins, may exacerbate symptoms. In sanitary living conditions where there is ample food and a supply of clean water, an otherwise healthy person usually recovers from viral infections in a few days. However, for ill or malnourished individuals, diarrhea can lead to severe dehydration and can become life-threatening. Sanitation Open defecation is a leading cause of infectious diarrhea leading to death.Poverty is a good indicator of the rate of infectious diarrhea in a population. This association does not stem from poverty itself, but rather from the conditions under which impoverished people live. The absence of certain resources compromises the ability of the poor to defend themselves against infectious diarrhea. "Poverty is associated with poor housing, crowding, dirt floors, lack of access to clean water or to sanitary disposal of fecal waste (sanitation), cohabitation with domestic animals that may carry human pathogens, and a lack of refrigerated storage for food, all of which increase the frequency of diarrhea... Poverty also restricts the ability to provide age-appropriate, nutritionally balanced diets or to modify diets when diarrhea develops so as to mitigate and repair nutrient losses. The impact is exacerbated by the lack of adequate, available, and affordable medical care."One of the most common causes of infectious diarrhea is a lack of clean water. Often, improper fecal disposal leads to contamination of groundwater. This can lead to widespread infection among a population, especially in the absence of water filtration or purification. Human feces contains a variety of potentially harmful human pathogens. Nutrition Proper nutrition is important for health and functioning, including the prevention of infectious diarrhea. It is especially important to young children who do not have a fully developed immune system. Zinc deficiency, a condition often found in children in developing countries can, even in mild cases, have a significant impact on the development and proper functioning of the human immune system. Indeed, this relationship between zinc deficiency and reduced immune functioning corresponds with an increased severity of infectious diarrhea. Children who have lowered levels of zinc have a greater number of instances of diarrhea, severe diarrhea, and diarrhea associated with fever. Similarly, vitamin A deficiency can cause an increase in the severity of diarrheal episodes. However, there is some discrepancy when it comes to the impact of vitamin A deficiency on the rate of disease. While some argue that a relationship does not exist between the rate of disease and vitamin A status, Others suggest an increase in the rate associated with deficiency. Given that estimates suggest 127 million preschool children worldwide are vitamin A deficient, this population has the potential for increased risk of disease contraction. Malabsorption Malabsorption is the inability to absorb food fully, mostly from disorders in the small bowel, but also due to maldigestion from diseases of the pancreas. Causes include: enzyme deficiencies or mucosal abnormality, as in food allergy and food intolerance, e.g. celiac disease (gluten intolerance), lactose intolerance (intolerance to milk sugar, common in non-Europeans), and fructose malabsorption. 
pernicious anemia, or impaired bowel function due to the inability to absorb vitamin B12, loss of pancreatic secretions, which may be due to cystic fibrosis or pancreatitis, structural defects, like short bowel syndrome (surgically removed bowel) and radiation fibrosis, which usually follows cancer treatment; and certain drugs, including agents used in chemotherapy and orlistat, which inhibits the absorption of fat. Inflammatory bowel disease The two overlapping types here are of unknown origin: Ulcerative colitis is marked by chronic bloody diarrhea, and inflammation mostly affects the distal colon near the rectum. Crohn's disease typically affects fairly well demarcated segments of bowel in the colon and often affects the end of the small bowel. Irritable bowel syndrome Another possible cause of diarrhea is irritable bowel syndrome (IBS), which usually presents with abdominal discomfort relieved by defecation and unusual stool (diarrhea or constipation) for at least three days a week over the previous three months. Symptoms of diarrhea-predominant IBS can be managed through a combination of dietary changes, soluble fiber supplements and medications such as loperamide or codeine. About 30% of patients with diarrhea-predominant IBS have bile acid malabsorption diagnosed with an abnormal SeHCAT test. Other diseases Diarrhea can be caused by other diseases and conditions, namely: Chronic ethanol ingestion Hyperthyroidism Certain medications Bile acid malabsorption Ischemic bowel disease: This usually affects older people and can be due to blocked arteries. Microscopic colitis, a type of inflammatory bowel disease where changes are seen only on histological examination of colonic biopsies. Bile salt malabsorption (primary bile acid diarrhea) where excessive bile acids in the colon produce a secretory diarrhea. Hormone-secreting tumors: some hormones, e.g. serotonin, can cause diarrhea if secreted in excess (usually from a tumor). Chronic mild diarrhea in infants and toddlers may occur with no obvious cause and with no other ill effects; this condition is called toddler's diarrhea. Environmental enteropathy Radiation enteropathy following treatment for pelvic and abdominal cancers. Medications Some medications, such as penicillins, can cause diarrhea. Over 700 medications are known to cause diarrhea. The classes of medications that are known to cause diarrhea are laxatives, antacids, heartburn medications, antibiotics, anti-neoplastic drugs, and anti-inflammatories, as well as many dietary supplements. Pathophysiology Evolution According to two researchers, Nesse and Williams, diarrhea may function as an evolved expulsion defense mechanism. As a result, if it is stopped, there might be a delay in recovery. In support of this argument they cite research published in 1973 which found that treating Shigella infection with the anti-diarrhea drug co-phenotrope (Lomotil) caused people to stay feverish twice as long as those not so treated. The researchers themselves observed that: "Lomotil may be contraindicated in shigellosis. Diarrhea may represent a defense mechanism". Diagnostic approach The following types of diarrhea may indicate that further investigation is needed: In infants Moderate or severe diarrhea in young children Associated with blood Continues for more than two days Associated non-cramping abdominal pain, fever, weight loss, etc.
In travelers In food handlers, because of the potential to infect others; In institutions such as hospitals, child care centers, or geriatric and convalescent homes.A severity score is used to aid diagnosis in children. Chronic diarrhea When diarrhea lasts for more than four weeks a number of further tests may be recommended including: Complete blood count and a ferritin if anemia is present Thyroid stimulating hormone Tissue transglutaminase for celiac disease Fecal calprotectin to exclude inflammatory bowel disease Stool tests for ova and parasites as well as for Clostridioides difficile A colonoscopy or fecal immunochemical testing for cancer, including biopsies to detect microscopic colitis Testing for bile acid diarrhea with SeHCAT, 7α-hydroxy-4-cholesten-3-one or fecal bile acids depending on availability Hydrogen breath test looking for lactose intolerance Further tests if immunodeficiency, pelvic radiation disease or small intestinal bacterial overgrowth suspected.A 2019 guideline recommended that testing for ova and parasites was only needed in people who are at high risk though they recommend routine testing for giardia. Erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) were not recommended. Prevention Sanitation Numerous studies have shown that improvements in drinking water and sanitation (WASH) lead to decreased risks of diarrhoea. Such improvements might include for example use of water filters, provision of high-quality piped water and sewer connections.In institutions, communities, and households, interventions that promote hand washing with soap lead to significant reductions in the incidence of diarrhea. The same applies to preventing open defecation at a community-wide level and providing access to improved sanitation. This includes use of toilets and implementation of the entire sanitation chain connected to the toilets (collection, transport, disposal or reuse of human excreta). There is limited evidence that safe disposal of child or adult feces can prevent diarrheal disease. Hand washing Basic sanitation techniques can have a profound effect on the transmission of diarrheal disease. The implementation of hand washing using soap and water, for example, has been experimentally shown to reduce the incidence of disease by approximately 30–48%. Hand washing in developing countries, however, is compromised by poverty as acknowledged by the CDC: "Handwashing is integral to disease prevention in all parts of the world; however, access to soap and water is limited in a number of less developed countries. This lack of access is one of many challenges to proper hygiene in less developed countries." Solutions to this barrier require the implementation of educational programs that encourage sanitary behaviours. Water Given that water contamination is a major means of transmitting diarrheal disease, efforts to provide clean water supply and improved sanitation have the potential to dramatically cut the rate of disease incidence. In fact, it has been proposed that we might expect an 88% reduction in child mortality resulting from diarrheal disease as a result of improved water sanitation and hygiene. 
Similarly, a meta-analysis of numerous studies on improving water supply and sanitation shows a 22–27% reduction in disease incidence, and a 21–30% reduction in mortality rate associated with diarrheal disease. Chlorine treatment of water, for example, has been shown to reduce both the risk of diarrheal disease and the risk of contamination of stored water with diarrheal pathogens. Vaccination Immunization against the pathogens that cause diarrheal disease is a viable prevention strategy; however, it does require targeting certain pathogens for vaccination. In the case of rotavirus, which was responsible for around 6% of diarrheal episodes and 20% of diarrheal disease deaths in the children of developing countries, use of a rotavirus vaccine in trials in 1985 yielded a slight (2–3%) decrease in total diarrheal disease incidence, while reducing overall mortality by 6–10%. Similarly, a cholera vaccine showed a strong reduction in morbidity and mortality, though the overall impact of vaccination was minimal, as cholera is not one of the major causative pathogens of diarrheal disease. Since this time, more effective vaccines have been developed that have the potential to save many thousands of lives in developing nations, while reducing the overall cost of treatment, and the costs to society. Rotavirus vaccines decrease the rates of diarrhea in a population. New vaccines against rotavirus, Shigella, enterotoxigenic Escherichia coli (ETEC), and cholera are under development, as are vaccines against other causes of infectious diarrhea. Nutrition Dietary deficiencies in developing countries can be combated by promoting better eating practices. Zinc supplementation has proved successful, showing a significant decrease in the incidence of diarrheal disease compared to a control group. The majority of the literature suggests that vitamin A supplementation is advantageous in reducing disease incidence. Development of a supplementation strategy should take into consideration the fact that vitamin A supplementation was less effective in reducing diarrhea incidence when compared to combined vitamin A and zinc supplementation, and that the latter strategy was estimated to be significantly more cost effective. Breastfeeding Breastfeeding practices have been shown to have a dramatic effect on the incidence of diarrheal disease in poor populations. Studies across a number of developing nations have shown that infants who receive exclusive breastfeeding during their first six months of life are better protected against infection with diarrheal diseases. One study in Brazil found that non-breastfed infants were 14 times more likely to die from diarrhea than exclusively breastfed infants. Exclusive breastfeeding is currently recommended by the WHO for the first six months of an infant's life, with continued breastfeeding until at least two years of age. Others Probiotics decrease the risk of diarrhea in those taking antibiotics. Insecticide spraying may reduce fly numbers and the risk of diarrhea in children in settings where there are seasonal variations in fly numbers throughout the year. Management In many cases of diarrhea, replacing lost fluid and salts is the only treatment needed. This is usually by mouth – oral rehydration therapy – or, in severe cases, intravenously. Diet restrictions such as the BRAT diet are no longer recommended. Research does not support the limiting of milk to children, as doing so has no effect on the duration of diarrhea.
To the contrary, WHO recommends that children with diarrhea continue to eat as sufficient nutrients are usually still absorbed to support continued growth and weight gain, and that continuing to eat also speeds up recovery of normal intestinal functioning. CDC recommends that children and adults with cholera also continue to eat. There is no evidence that early refeeding in children can cause an increase in inappropriate use of intravenous fluid, episodes of vomiting, and risk of having persistent diarrhea.Medications such as loperamide (Imodium) and bismuth subsalicylate may be beneficial; however they may be contraindicated in certain situations. Fluids Oral rehydration solution (ORS) (a slightly sweetened and salty water) can be used to prevent dehydration. Standard home solutions such as salted rice water, salted yogurt drinks, vegetable and chicken soups with salt can be given. Home solutions such as water in which cereal has been cooked, unsalted soup, green coconut water, weak tea (unsweetened), and unsweetened fresh fruit juices can have from half a teaspoon to full teaspoon of salt (from one-and-a-half to three grams) added per liter. Clean plain water can also be one of several fluids given. There are commercial solutions such as Pedialyte, and relief agencies such as UNICEF widely distribute packets of salts and sugar. A WHO publication for physicians recommends a homemade ORS consisting of one liter water with one teaspoon salt (3 grams) and two tablespoons sugar (18 grams) added (approximately the "taste of tears"). Rehydration Project recommends adding the same amount of sugar but only one-half a teaspoon of salt, stating that this more dilute approach is less risky with very little loss of effectiveness. Both agree that drinks with too much sugar or salt can make dehydration worse.Appropriate amounts of supplemental zinc and potassium should be added if available. But the availability of these should not delay rehydration. As WHO points out, the most important thing is to begin preventing dehydration as early as possible. In another example of prompt ORS hopefully preventing dehydration, CDC recommends for the treatment of cholera continuing to give Oral Rehydration Solution during travel to medical treatment.Vomiting often occurs during the first hour or two of treatment with ORS, especially if a child drinks the solution too quickly, but this seldom prevents successful rehydration since most of the fluid is still absorbed. WHO recommends that if a child vomits, to wait five or ten minutes and then start to give the solution again more slowly.Drinks especially high in simple sugars, such as soft drinks and fruit juices, are not recommended in children under five as they may increase dehydration. A too rich solution in the gut draws water from the rest of the body, just as if the person were to drink sea water. Plain water may be used if more specific and effective ORT preparations are unavailable or are not palatable. Additionally, a mix of both plain water and drinks perhaps too rich in sugar and salt can alternatively be given to the same person, with the goal of providing a medium amount of sodium overall. A nasogastric tube can be used in young children to administer fluids if warranted. Eating The WHO recommends a child with diarrhea continue to be fed. Continued feeding speeds the recovery of normal intestinal function. In contrast, children whose food is restricted have diarrhea of longer duration and recover intestinal function more slowly. 
The WHO states "Food should never be withheld and the childs usual foods should not be diluted. Breastfeeding should always be continued." In the specific example of cholera, the CDC makes the same recommendation. Breast-fed infants with diarrhea often choose to breastfeed more, and should be encouraged to do so. In young children who are not breast-fed and live in the developed world, a lactose-free diet may be useful to speed recovery. Eating food containing fibers may help. Medications Antidiarrheal agents can be classified into four different groups: antimotility, antisecretory, adsorbent, and anti-infectious. While antibiotics are beneficial in certain types of acute diarrhea, they are usually not used except in specific situations. There are concerns that antibiotics may increase the risk of hemolytic uremic syndrome in people infected with Escherichia coli O157:H7. In resource-poor countries, treatment with antibiotics may be beneficial. However, some bacteria are developing antibiotic resistance, particularly Shigella. Antibiotics can also cause diarrhea, and antibiotic-associated diarrhea is the most common adverse effect of treatment with general antibiotics. While bismuth compounds (Pepto-Bismol) decreased the number of bowel movements in those with travelers diarrhea, they do not decrease the length of illness. Anti-motility agents like loperamide are also effective at reducing the number of stools but not the duration of disease. These agents should be used only if bloody diarrhea is not present.Diosmectite, a natural aluminomagnesium silicate clay, is effective in alleviating symptoms of acute diarrhea in children, and also has some effects in chronic functional diarrhea, radiation-induced diarrhea, and chemotherapy-induced diarrhea. Another absorbent agent used for the treatment of mild diarrhea is kaopectate. Racecadotril an antisecretory medication may be used to treat diarrhea in children and adults. It has better tolerability than loperamide, as it causes less constipation and flatulence. However, it has little benefit in improving acute diarrhea in children.Bile acid sequestrants such as cholestyramine can be effective in chronic diarrhea due to bile acid malabsorption. Therapeutic trials of these drugs are indicated in chronic diarrhea if bile acid malabsorption cannot be diagnosed with a specific test, such as SeHCAT retention. Alternative therapies Zinc supplementation may benefit children over six months old with diarrhea in areas with high rates of malnourishment or zinc deficiency. This supports the World Health Organization guidelines for zinc, but not in the very young. A Cochrane Review from 2020 concludes that probiotics make little or no difference to people who have diarrhoea lasting 2 days or longer and that there is no proof that they reduce
its duration. The probiotic lactobacillus can help prevent antibiotic-associated diarrhea in adults but possibly not children. For those with lactose intolerance, taking digestive enzymes containing lactase when consuming dairy products often improves symptoms. Epidemiology Worldwide in 2004, approximately 2.5 billion cases of diarrhea occurred, which resulted in 1.5 million deaths among children under the age of five. Greater than half of these were in Africa and South Asia. This is down from a death rate of 4.5 million in 1980 for gastroenteritis. Diarrhea remains the second leading cause of infant mortality (16%) after pneumonia (17%) in this age group.The majority of such cases occur in the developing world, with over half of the recorded cases of childhood diarrhea occurring in Africa and Asia, with 696 million and 1.2 billion cases, respectively, compared to only 480 million in the rest of the world.Infectious diarrhea resulted in about 0.7 million deaths in children under five years old in 2011 and 250 million lost school days. In the Americas, diarrheal disease accounts for a total of 10% of deaths among children aged 1–59 months while in South East Asia, it accounts for 31.3% of deaths. It is estimated that around 21% of child mortalities in developing countries are due to diarrheal disease. Terminology The word diarrhea is from the Ancient Greek διάρροια from διά dia "through" and ῥέω rheo "flow". Diarrhea is the spelling in American English, whereas diarrhoea is the spelling in British English. Slang terms for the condition include "the runs", "the squirts" (or "squits" in Britain) and "the trots". See also Dysentery Travelers diarrhea == References ==
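As a worked illustration of the homemade oral rehydration solution described in the Fluids section above (roughly one teaspoon, about 3 grams, of salt and two tablespoons, about 18 grams, of sugar per litre of clean water), the short Python sketch below scales those quantities to an arbitrary volume. The function name, argument names, and printed output are illustrative assumptions, not part of any WHO or Rehydration Project material, and the sketch is not medical guidance.

    def homemade_ors_quantities(volume_litres):
        # Per litre of clean water, the WHO-style homemade recipe quoted above
        # uses about 3 g of salt and about 18 g of sugar; scale linearly.
        salt_grams = 3.0 * volume_litres
        sugar_grams = 18.0 * volume_litres
        return salt_grams, sugar_grams

    # Example: mixing half a litre of solution
    salt, sugar = homemade_ors_quantities(0.5)
    print(f"0.5 L water: about {salt:.1f} g salt and {sugar:.1f} g sugar")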
Tonicity
In chemical biology, tonicity is a measure of the effective osmotic pressure gradient: the water potential of two solutions separated by a partially permeable cell membrane. Tonicity depends on the relative concentration of membrane-impermeable solutes across a cell membrane, which determines the direction and extent of osmotic flux. It is commonly used when describing the swelling-versus-shrinking response of cells immersed in an external solution. Unlike osmotic pressure, tonicity is influenced only by solutes that cannot cross the membrane, as only these exert an effective osmotic pressure. Solutes able to cross the membrane freely do not affect tonicity because they always equilibrate to equal concentrations on both sides of the membrane without net solvent movement. Tonicity is also a factor affecting imbibition. There are three classifications of tonicity that one solution can have relative to another: hypertonic, hypotonic, and isotonic. Distilled water is an example of a hypotonic solution. Hypertonic solution A solution is hypertonic relative to another if it has a greater concentration of membrane-impermeable solutes; in biology this usually means the solution surrounding a cell compared with the cytosol inside it. When a cell is placed in a hypertonic solution, water moves out of the cell by osmosis and the cell shrinks. When plant cells lose water in this way, the flexible cell membrane pulls away from the rigid cell wall (plasmolysis) but remains joined to the wall at points called plasmodesmata, and the cells often take on the appearance of a pincushion. In plant cells the terms isotonic, hypotonic and hypertonic cannot strictly be used accurately, because the pressure exerted by the cell wall significantly affects the osmotic equilibrium point. Hypotonic solution A solution is hypotonic relative to another if it has a lower concentration of membrane-impermeable solutes, for example a dilute external solution compared with the cytosol. When a cell is placed in a hypotonic solution, water moves into the cell. Cells without a cell wall, such as animal cells, swell and may eventually burst (lyse). When plant cells are in a hypotonic solution, the central vacuole takes up water and the cell membrane is pressed against the cell wall, making the cell turgid. Isotonicity A solution is isotonic when its effective osmole concentration is the same as that of another solution. In biology, the solutions on either side of a cell membrane are isotonic if the concentration of solutes outside the cell is equal to the concentration of solutes inside the cell. In this case the cell neither swells nor shrinks because there is no concentration gradient to induce the diffusion of large amounts of water across the cell membrane. Water molecules freely diffuse through the plasma membrane in both directions, and as the rate of water diffusion is the same in each direction, the cell will neither gain nor lose water. == References ==
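As a worked example of the classification above, the short Python sketch below compares the effective (membrane-impermeable) solute concentrations of an external solution and of the cytosol, and reports whether the external solution is hypertonic, hypotonic, or isotonic, together with the expected response of a cell that lacks a cell wall. The function and variable names are illustrative assumptions rather than standard terminology.

    def classify_tonicity(external_osmolarity, cytosol_osmolarity, tolerance=1e-9):
        # Only membrane-impermeable solutes count toward effective osmolarity,
        # since freely permeable solutes equilibrate and exert no lasting gradient.
        if external_osmolarity > cytosol_osmolarity + tolerance:
            return "hypertonic: net water movement out of the cell, so the cell shrinks"
        if external_osmolarity < cytosol_osmolarity - tolerance:
            return "hypotonic: net water movement into the cell, so the cell swells and may burst"
        return "isotonic: no net water movement, so cell volume is unchanged"

    # Examples, using effective osmolarities in mOsm/L
    print(classify_tonicity(600, 300))  # concentrated external solution
    print(classify_tonicity(0, 300))    # distilled water, a hypotonic solution
    print(classify_tonicity(300, 300))  # matched concentrations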
Bilateral
Bilateral may refer to any concept including two sides, in particular: Bilateria, bilateral animals Bilateralism, the political and cultural relations between two states Bilateral, occurring on both sides of an organism (Anatomical terms of location § Medial and lateral) Bilateral symmetry, symmetry between two sides of an organism Bilateral filter, an image processing algorithm Bilateral amplifier, a type of amplifier Bilateral (album), an album by the band Leprous Bilateral school, see Partially selective school (England)
Lymphocytopenia
Lymphocytopenia is the condition of having an abnormally low level of lymphocytes in the blood. Lymphocytes are white blood cells with important functions in the immune system. The condition is also called lymphopenia. The opposite is lymphocytosis, which refers to an excessive level of lymphocytes. Lymphocytopenia may be present as part of a pancytopenia, when the total numbers of all types of blood cells are reduced. Classification In some cases, lymphocytopenia can be further classified according to which kind of lymphocytes are reduced. If all three kinds of lymphocytes are suppressed, then the term is used without further qualification. In T lymphocytopenia, there are too few T lymphocytes, but normal numbers of other lymphocytes. It causes, and manifests as, a T cell deficiency. This is usually caused by HIV infection (resulting in AIDS), but may be idiopathic CD4+ lymphocytopenia (ICL), a very rare heterogeneous disorder defined by CD4+ T-cell counts below 300 cells/μL in the absence of any known immune deficiency condition, such as human immunodeficiency virus (HIV) infection or chemotherapy. In B lymphocytopenia, there are too few B lymphocytes, but possibly normal numbers of other lymphocytes. It causes, and manifests as, a humoral immune deficiency. This is usually caused by medications that suppress the immune system. In NK lymphocytopenia, there are too few natural killer cells, but normal numbers of other lymphocytes. This is very rare. Causes The most common cause of temporary lymphocytopenia is a recent infection, such as the common cold. Lymphocytopenia, but not idiopathic CD4+ lymphocytopenia, is associated with corticosteroid use, infections with HIV and other viral, bacterial, and fungal agents, malnutrition, systemic lupus erythematosus, severe stress, intense or prolonged physical exercise (due to cortisol release), rheumatoid arthritis, sarcoidosis, multiple sclerosis, and iatrogenic (caused by other medical treatments) conditions. Lymphocytopenia is a frequent, temporary result of many types of chemotherapy, such as with cytotoxic agents or immunosuppressive drugs. Some malignancies that have spread to involve the bone marrow, such as leukemia or advanced Hodgkin's disease, also cause lymphocytopenia. Another cause is infection with influenza A virus subtype H1N1 (and other subtypes of influenza A virus), which is then often associated with monocytosis; H1N1 was responsible for the Spanish flu, the 2009 flu pandemic, and, in 2016, an influenza epidemic in Brazil. The SARS disease also caused lymphocytopenia. Among patients with laboratory-confirmed COVID-19 in Wuhan, China, through January 29, 2020, 83.2 percent had lymphocytopenia at admission. Large doses of radiation, such as those involved with nuclear accidents or medical whole-body radiation, may cause lymphocytopenia. Diagnosis Lymphocytopenia is diagnosed when the complete blood count shows a lymphocyte count lower than the age-appropriate reference interval (for example, below 1.0 × 10⁹/L in an adult). Prognosis Lymphocytopenia that is caused by infections tends to resolve once the infection has cleared. Patients with idiopathic CD4+ lymphocytopenia may have either abnormally low but stable CD4+ cell counts, or abnormally low and progressively falling CD4+ cell counts; the latter condition is terminal. Other animals Lymphocytopenia caused by feline leukemia virus and feline immunodeficiency virus retroviral infections is treated with Lymphocyte T-Cell Immune Modulator. References == External links ==
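As a small illustration of the diagnostic threshold mentioned above, the Python sketch below flags a lymphocyte count that falls below a supplied lower reference limit, defaulting to the adult example of 1.0 × 10⁹ cells per litre. Age-appropriate reference intervals differ, and the function name and defaults are assumptions for illustration only, not a clinical tool.

    def is_lymphocytopenic(lymphocyte_count_per_litre, lower_limit_per_litre=1.0e9):
        # Returns True when the count is below the chosen lower reference limit.
        # The default mirrors the adult example quoted above; paediatric and other
        # age-specific intervals should be substituted where appropriate.
        return lymphocyte_count_per_litre < lower_limit_per_litre

    print(is_lymphocytopenic(0.7e9))  # True: below the example adult limit
    print(is_lymphocytopenic(1.8e9))  # False: within the example adult range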
Orientation (mental)
Orientation is a function of the mind involving awareness of three dimensions: time, place and person. Problems with orientation lead to disorientation, and can be due to various conditions, from delirium to intoxication. Typically, disorientation is first in time, then in place and finally in person. Assessment In the context of an accident or major trauma, the Emergency Medical Responder performs spiraling (increasingly detailed) assessments which guide the critical first response. Assessment of mental orientation typically lands within the immediate top three priorities: Safety - Assess the area safety (potential traffic, fire, overhead/underfoot objects and collapse risks, rushing water, gunfire, chemical/radiation threats, storm conditions, downed power lines, etc.), wait for the threat to subside, or move the person to safety if and when possible, all without endangering oneself. ABCs - Note conscious or unconscious then assess Airway, Breathing and Circulation factors (with priority to any potential gross or debilitating blood loss.) Orientation - Determine if the person is "awake, alert, and oriented, times three (to person, place, and time)." This is frequently abbreviated AAOx3 which also serves as a mnemonic. The assessment involves asking the patient to repeat his own full name, his present location, and todays date. The assessment is best done right up front, ahead of moving or transporting the victim, because it may illuminate potential internal damage. Event/Situation - A fourth category is now used as well. If the person is oriented to what is going on around them, then they are said to be AAOx4. AAOx3 is not a concerning response since people are sometimes less aware of the situation due to pain, time of day, or lack of significant event.Alternately, the letters in AAOx4 can be documented as COAX4. A person who is COAx4 is said to be "conscious, alert, & oriented to person, place, time and event". When a handoff report is made, anything less than 4 is specifically noted for clarity (e.g., patient is COAx2, oriented to self and place). Mental orientation is closely related, and often intermixed with trauma shock, including physical shock (see: Shock (circulatory)) and mental shock (see: Acute stress reaction, a psychological condition in response to terrifying events.) The exact cerebral region involved in orientation is uncertain, but lesions of the brain stem and the cerebral hemispheres have been reported to cause disorientation, suggesting that they act together in maintaining awareness and its subfunction of orientation. Disorientation Disorientation is the opposite of orientation. It is a cognitive disability in which the senses of time, direction, and recognition of items (things), people and places become difficult to distinguish/identify. Causes of mental disorientation Disorientation can occur in healthy young adults as well as in the elderly or ill person. While exercising, if a person becomes dehydrated as a result of over-exertion, he or she may become disoriented to the time or place. While exercising, the body may not be able to supply enough oxygen to the brain fast enough. Mental disorientation can be the aim of some performance art, as creators with audience disorientation as a goal may work to deliberately augment sensations of time, place, person, purpose. See also Destabilisation Mental confusion Mental status examination Spatial disorientation Up-down cues == References ==
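To make the "AAO times N" shorthand described above concrete, the short Python sketch below tallies the four orientation components (person, place, time, and event) into a label such as AAOx3. The function name, argument names, and output format are assumptions for illustration; this is not a standardised clinical instrument.

    def aao_label(person, place, time, event):
        # Count how many orientation components are intact and build the label.
        components = {"person": person, "place": place, "time": time, "event": event}
        intact = [name for name, ok in components.items() if ok]
        oriented_to = ", ".join(intact) if intact else "none"
        return f"AAOx{len(intact)} (oriented to {oriented_to})"

    # Example: oriented to self, place, and time, but not to the event
    print(aao_label(person=True, place=True, time=True, event=False))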
Ventricular flutter
Ventricular flutter is an arrhythmia, more specifically a tachycardia affecting the ventricles, with a rate of 250–350 beats per minute, and one of the most difficult arrhythmias to discern. It is characterized on the ECG by a sinusoidal waveform without clear definition of the QRS and T waves. It has been considered a possible transition stage between ventricular tachycardia and fibrillation, and is a critically unstable arrhythmia that can result in sudden cardiac death. It can occur in infancy, youth, or adulthood. It can be induced by programmed electrical stimulation. References External links http://www.medscape.com/viewarticle/409172_3
Infarction
Infarction is tissue death (necrosis) due to inadequate blood supply to the affected area. It may be caused by artery blockages, rupture, mechanical compression, or vasoconstriction. The resulting lesion is referred to as an infarct (from the Latin infarctus, "stuffed into"). Causes Infarction occurs as a result of prolonged ischemia, which is the insufficient supply of oxygen and nutrition to an area of tissue due to a disruption in blood supply. The blood vessel supplying the affected area of tissue may be blocked due to an obstruction in the vessel (e.g., an arterial embolus, thrombus, or atherosclerotic plaque), compressed by something outside of the vessel causing it to narrow (e.g., tumor, volvulus, or hernia), ruptured by trauma causing a loss of blood pressure downstream of the rupture, or vasoconstricted, which is the narrowing of the blood vessel by contraction of the muscle wall rather than an external force (e.g., cocaine vasoconstriction leading to myocardial infarction). Hypertension and atherosclerosis are risk factors for both atherosclerotic plaques and thromboembolism. In atherosclerotic formations, a plaque develops under a fibrous cap. When the fibrous cap is degraded by metalloproteinases released from macrophages or by intravascular shear force from blood flow, subendothelial thrombogenic material (extracellular matrix) is exposed to circulating platelets and thrombus formation occurs on the vessel wall occluding blood flow. Occasionally, the plaque may rupture and form an embolus which travels with the blood-flow downstream to where the vessel narrows and eventually clogs the vessel lumen Classification By histopathology Infarctions are divided into two types according to the amount of blood present: White infarctions (anemic infarcts) affect solid organs such as the spleen, heart and kidneys wherein the solidity of the tissue substantially limits the amount of nutrients (blood/oxygen/glucose/fuel) that can flow into the area of ischaemic necrosis. Similar occlusion to blood flow and consequent necrosis can occur as a result of severe vasoconstriction as illustrated in severe Raynauds phenomenon that can lead to irreversible gangrene. Red infarctions (hemorrhagic infarcts) generally affect the lungs or other loose organs (testis, ovary, small intestines). The occlusion consists more of red blood cells and fibrin strands. Characteristics of red infarcts include: occlusion of a vein loose tissues that allow blood to collect in the infarcted zone tissues with a dual circulatory system (lung, small intestines) tissues previously congested from sluggish venous outflow reperfusion (injury) of previously ischemic tissue that is associated with reperfusion-related diseases, such as myocardial infarction, stroke (cerebral infarction), shock-resuscitation, replantation surgery, frostbite, burns, and organ transplantation. By localization Heart: Myocardial infarction (MI), commonly known as a heart attack, is an infarction of the heart, causing some heart cells to die. This is most commonly due to occlusion (blockage) of a coronary artery following the rupture of a vulnerable atherosclerotic plaque, which is an unstable collection of lipids (fatty acids) and white blood cells (especially macrophages) in the wall of an artery. The resulting ischemia (restriction in blood supply) and oxygen shortage, if left untreated for a sufficient period of time, can cause damage or kill heart muscle tissue (myocardium). 
Brain: Cerebral infarction is the ischemic kind of stroke due to a disturbance in the blood vessels supplying blood to the brain. It can be atherothrombotic or embolic. Stroke caused by cerebral infarction should be distinguished from two other kinds of stroke: cerebral hemorrhage and subarachnoid hemorrhage. Cerebral infarctions vary in their severity with one third of the cases resulting in death. In response to ischemia, the brain degenerates by the process of liquefactive necrosis. Lung: Pulmonary infarction or lung infarction Spleen: Splenic infarction occurs when the splenic artery or one of its branches are occluded, for example by a blood clot. Although it can occur asymptomatically, the typical symptom is severe pain in the left upper quadrant of the abdomen, sometimes radiating to the left shoulder. Fever and chills develop in some cases. It has to be differentiated from other causes of acute abdomen. Limb: Limb infarction is an infarction of an arm or leg. Causes include arterial embolisms and skeletal muscle infarction as a rare complication of long standing, poorly controlled diabetes mellitus. A major presentation is painful thigh or leg swelling. Bone: Infarction of bone results in avascular necrosis. Without blood, the bone tissue dies and the bone collapses. If avascular necrosis involves the bones of a joint, it often leads to destruction of the joint articular surfaces (see osteochondritis dissecans). Testicle: an infarction of a testicle is commonly caused by testicular torsion and may require removal of the affected testicle(s) if not undone by surgery quickly enough. Eye: an infarction can occur to the central retinal artery which supplies the retina causing sudden visual loss. Bowel: Bowel infarction is generally caused by mesenteric ischemia due to blockages in the arteries or veins that supply the bowel. Associated diseases Diseases commonly associated with infarctions include: Peripheral artery occlusive disease (the most severe form of which is gangrene) Antiphospholipid syndrome Sepsis Giant-cell arteritis (GCA) Hernia Volvulus Sickle-cell disease References External links Media related to Infarction at Wikimedia Commons The dictionary definition of infarction at Wiktionary
Alzheimers disease
Alzheimers disease (AD) is a neurodegenerative disease that usually starts slowly and progressively worsens. It is the cause of 60–70% of cases of dementia. The most common early symptom is difficulty in remembering recent events. As the disease advances, symptoms can include problems with language, disorientation (including easily getting lost), mood swings, loss of motivation, self-neglect, and behavioral issues. As a persons condition declines, they often withdraw from family and society. Gradually, bodily functions are lost, ultimately leading to death. Although the speed of progression can vary, the typical life expectancy following diagnosis is three to nine years.The cause of Alzheimers disease is poorly understood. There are many environmental and genetic risk factors associated with its development. The strongest genetic risk factor is from an allele of APOE. Other risk factors include a history of head injury, clinical depression, and high blood pressure. The disease process is largely associated with amyloid plaques, neurofibrillary tangles, and loss of neuronal connections in the brain. A probable diagnosis is based on the history of the illness and cognitive testing with medical imaging and blood tests to rule out other possible causes. Initial symptoms are often mistaken for normal aging. Examination of brain tissue is needed for a definite diagnosis, but this can only take place after death. Good nutrition, physical activity, and engaging socially are known to be of benefit generally in aging, and these may help in reducing the risk of cognitive decline and Alzheimers; in 2019 clinical trials were underway to look at these possibilities. There are no medications or supplements that have been shown to decrease risk.No treatments stop or reverse its progression, though some may temporarily improve symptoms. Affected people increasingly rely on others for assistance, often placing a burden on the caregiver. The pressures can include social, psychological, physical, and economic elements. Exercise programs may be beneficial with respect to activities of daily living and can potentially improve outcomes. Behavioral problems or psychosis due to dementia are often treated with antipsychotics, but this is not usually recommended, as there is little benefit and an increased risk of early death.As of 2020, there were approximately 50 million people worldwide with Alzheimers disease. It most often begins in people over 65 years of age, although up to 10% of cases are early-onset affecting those in their 30s to mid-60s. It affects about 6% of people 65 years and older, and women more often than men. The disease is named after German psychiatrist and pathologist Alois Alzheimer, who first described it in 1906. Alzheimers financial burden on society is large, with an estimated global annual cost of US$1 trillion. Alzheimers disease is currently ranked as the seventh leading cause of death in the United States. Signs and symptoms The course of Alzheimers is generally described in three stages, with a progressive pattern of cognitive and functional impairment. The three stages are described as early or mild, middle or moderate, and late or severe. The disease is known to target the hippocampus which is associated with memory, and this is responsible for the first symptoms of memory impairment. As the disease progresses so does the degree of memory impairment. First symptoms The first symptoms are often mistakenly attributed to aging or stress. 
Detailed neuropsychological testing can reveal mild cognitive difficulties up to eight years before a person fulfills the clinical criteria for diagnosis of Alzheimers disease. These early symptoms can affect the most complex activities of daily living. The most noticeable deficit is short term memory loss, which shows up as difficulty in remembering recently learned facts and inability to acquire new information.Subtle problems with the executive functions of attentiveness, planning, flexibility, and abstract thinking, or impairments in semantic memory (memory of meanings, and concept relationships) can also be symptomatic of the early stages of Alzheimers disease. Apathy and depression can be seen at this stage, with apathy remaining as the most persistent symptom throughout the course of the disease. Mild cognitive impairment (MCI) is often found to be a transitional stage between normal aging and dementia. MCI can present with a variety of symptoms, and when memory loss is the predominant symptom, it is termed amnestic MCI and is frequently seen as a prodromal stage of Alzheimers disease. Amnestic MCI has a greater than 90% likelihood of being associated with Alzheimers. Early stage In people with Alzheimers disease, the increasing impairment of learning and memory eventually leads to a definitive diagnosis. In a small percentage, difficulties with language, executive functions, perception (agnosia), or execution of movements (apraxia) are more prominent than memory problems. Alzheimers disease does not affect all memory capacities equally. Older memories of the persons life (episodic memory), facts learned (semantic memory), and implicit memory (the memory of the body on how to do things, such as using a fork to eat or how to drink from a glass) are affected to a lesser degree than new facts or memories.Language problems are mainly characterised by a shrinking vocabulary and decreased word fluency, leading to a general impoverishment of oral and written language. In this stage, the person with Alzheimers is usually capable of communicating basic ideas adequately. While performing fine motor tasks such as writing, drawing, or dressing, certain movement coordination and planning difficulties (apraxia) may be present, but they are commonly unnoticed. As the disease progresses, people with Alzheimers disease can often continue to perform many tasks independently, but may need assistance or supervision with the most cognitively demanding activities. Middle stage Progressive deterioration eventually hinders independence, with subjects being unable to perform most common activities of daily living. Speech difficulties become evident due to an inability to recall vocabulary, which leads to frequent incorrect word substitutions (paraphasias). Reading and writing skills are also progressively lost. Complex motor sequences become less coordinated as time passes and Alzheimers disease progresses, so the risk of falling increases. During this phase, memory problems worsen, and the person may fail to recognise close relatives. Long-term memory, which was previously intact, becomes impaired.Behavioral and neuropsychiatric changes become more prevalent. Common manifestations are wandering, irritability and emotional lability, leading to crying, outbursts of unpremeditated aggression, or resistance to caregiving. Sundowning can also appear. Approximately 30% of people with Alzheimers disease develop illusionary misidentifications and other delusional symptoms. 
Subjects also lose insight of their disease process and limitations (anosognosia). Urinary incontinence can develop. These symptoms create stress for relatives and caregivers, which can be reduced by moving the person from home care to other long-term care facilities. Late stage During the final stage, known as the late-stage or severe stage, there is complete dependence on caregivers. Language is reduced to simple phrases or even single words, eventually leading to complete loss of speech. Despite the loss of verbal language abilities, people can often understand and return emotional signals. Although aggressiveness can still be present, extreme apathy and exhaustion are much more common symptoms. People with Alzheimers disease will ultimately not be able to perform even the simplest tasks independently; muscle mass and mobility deteriorates to the point where they are bedridden and unable to feed themselves. The cause of death is usually an external factor, such as infection of pressure ulcers or pneumonia, not the disease itself. Causes Proteins fail to function normally. This disrupts the work of the brain cells affected and triggers a toxic cascade, ultimately leading to cell death and later brain shrinkage.Alzheimers disease is believed to occur when abnormal amounts of amyloid beta (Aβ), accumulating extracellularly as amyloid plaques and tau proteins, or intracellularly as neurofibrillary tangles, form in the brain, affecting neuronal functioning and connectivity, resulting in a progressive loss of brain function. This altered protein clearance ability is age-related, regulated by brain cholesterol, and associated with other neurodegenerative diseases.Advances in brain imaging techniques allow researchers to see the development and spread of abnormal amyloid and tau proteins in the living brain, as well as changes in brain structure and function. Beta-amyloid is a fragment of a larger protein. When these fragments cluster together, a toxic effect appears on neurons and disrupt cell-to-cell communication. Larger deposits called amyloid plaques are thus further formed.Tau proteins are responsible in neurons internal support and transport system to carry nutrients and other essential materials. In Alzheimers disease, the shape of tau proteins is altered and thus organize themselves into structures called neurofibrillary tangles. The tangles disrupt the transport system and are toxic to cells. The cause for most Alzheimers cases is still mostly unknown, except for 1–2% of cases where deterministic genetic differences have been identified. Several competing hypotheses attempt to explain the underlying cause; the two predominant hypotheses are the amyloid beta (Aβ) hypothesis and the cholinergic hypothesis.The oldest hypothesis, on which most drug therapies are based, is the cholinergic hypothesis, which proposes that Alzheimers disease is caused by reduced synthesis of the neurotransmitter acetylcholine. The loss of cholinergic neurons noted in the limbic system and cerebral cortex, is a key feature in the progression of Alzheimers. The 1991 amyloid hypothesis postulated that extracellular amyloid beta (Aβ) deposits are the fundamental cause of the disease. Support for this postulate comes from the location of the gene for the amyloid precursor protein (APP) on chromosome 21, together with the fact that people with trisomy 21 (Down syndrome) who have an extra gene copy almost universally exhibit at least the earliest symptoms of Alzheimers disease by 40 years of age. 
A specific isoform of apolipoprotein, APOE4, is a major genetic risk factor for Alzheimers disease. While apolipoproteins enhance the breakdown of beta amyloid, some isoforms are not very effective at this task (such as APOE4), leading to excess amyloid buildup in the brain. Genetic Only 1–2% of Alzheimers cases are inherited (autosomal dominant). These types are known as early onset familial Alzheimers disease, can have a very early onset, and a faster rate of progression. Early onset familial Alzheimers disease can be attributed to mutations in one of three genes: those encoding amyloid-beta precursor protein (APP) and presenilins PSEN1 and PSEN2. Most mutations in the APP and presenilin genes increase the production of a small protein called amyloid beta (Aβ)42, which is the main component of amyloid plaques. Some of the mutations merely alter the ratio between Aβ42 and the other major forms—particularly Aβ40—without increasing Aβ42 levels. Two other genes associated with autosomal dominant Alzheimers disease are ABCA7 and SORL1.Most cases of Alzheimers are not inherited and are termed sporadic Alzheimers disease, in which environmental and genetic differences may act as risk factors. Most cases of sporadic Alzheimers disease in contrast to familial Alzheimers disease are late-onset Alzheimers disease (LOAD) developing after the age of 65 years. Less than 5% of sporadic Alzheimers disease have an earlier onset. The strongest genetic risk factor for sporadic Alzheimers disease is APOEε4. APOEε4 is one of four alleles of apolipoprotein E (APOE). APOE plays a major role in lipid-binding proteins in lipoprotein particles and the epsilon4 allele disrupts this function. Between 40 and 80% of people with Alzheimers disease possess at least one APOEε4 allele. The APOEε4 allele increases the risk of the disease by three times in heterozygotes and by 15 times in homozygotes. Like many human diseases, environmental effects and genetic modifiers result in incomplete penetrance. For example, certain Nigerian populations do not show the relationship between dose of APOEε4 and incidence or age-of-onset for Alzheimers disease seen in other human populations.Alleles in the TREM2 gene have been associated with a 3 to 5 times higher risk of developing Alzheimers disease.A Japanese pedigree of familial Alzheimers disease was found to be associated with a deletion mutation of codon 693 of APP. This mutation and its association with Alzheimers disease was first reported in 2008, and is known as the Osaka mutation. Only homozygotes with this mutation have an increased risk of developing Alzheimers disease. This mutation accelerates Aβ oligomerization but the proteins do not form the amyloid fibrils that aggregate into amyloid plaques, suggesting that it is the Aβ oligomerization rather than the fibrils that may be the cause of this disease. Mice expressing this mutation have all the usual pathologies of Alzheimers disease. Other hypotheses The tau hypothesis proposes that tau protein abnormalities initiate the disease cascade. In this model, hyperphosphorylated tau begins to pair with other threads of tau as paired helical filaments. Eventually, they form neurofibrillary tangles inside nerve cell bodies. 
When this occurs, the microtubules disintegrate, destroying the structure of the cells cytoskeleton which collapses the neurons transport system.A number of studies connect the misfolded amyloid beta and tau proteins associated with the pathology of Alzheimers disease, as bringing about oxidative stress that leads to chronic inflammation. Sustained inflammation (neuroinflammation) is also a feature of other neurodegenerative diseases including Parkinsons disease, and ALS. Spirochete infections have also been linked to dementia. DNA damages accumulate in AD brains; reactive oxygen species may be the major source of this DNA damage.Sleep disturbances are seen as a possible risk factor for inflammation in Alzheimers disease. Sleep problems have been seen as a consequence of Alzheimers disease but studies suggest that they may instead be a causal factor. Sleep disturbances are thought to be linked to persistent inflammation. The cellular homeostasis of biometals such as ionic copper, iron, and zinc is disrupted in Alzheimers disease, though it remains unclear whether this is produced by or causes the changes in proteins. Smoking is a significant Alzheimers disease risk factor. Systemic markers of the innate immune system are risk factors for late-onset Alzheimers disease. Exposure to air pollution may be a contributing factor to the development of Alzheimers disease.One hypothesis posits that dysfunction of oligodendrocytes and their associated myelin during aging contributes to axon damage, which then causes amyloid production and tau hyper-phosphorylation as a side effect. Retrogenesis is a medical hypothesis that just as the fetus goes through a process of neurodevelopment beginning with neurulation and ending with myelination, the brains of people with Alzheimers disease go through a reverse neurodegeneration process starting with demyelination and death of axons (white matter) and ending with the death of grey matter. Likewise the hypothesis is, that as infants go through states of cognitive development, people with Alzheimers disease go through the reverse process of progressive cognitive impairment.The association with celiac disease is unclear, with a 2019 study finding no increase in dementia overall in those with CD, while a 2018 review found an association with several types of dementia including Alzheimers disease. Pathophysiology Neuropathology Alzheimers disease is characterised by loss of neurons and synapses in the cerebral cortex and certain subcortical regions. This loss results in gross atrophy of the affected regions, including degeneration in the temporal lobe and parietal lobe, and parts of the frontal cortex and cingulate gyrus. Degeneration is also present in brainstem nuclei particularly the locus coeruleus in the pons. Studies using MRI and PET have documented reductions in the size of specific brain regions in people with Alzheimers disease as they progressed from mild cognitive impairment to Alzheimers disease, and in comparison with similar images from healthy older adults.Both Aβ plaques and neurofibrillary tangles are clearly visible by microscopy in brains of those with Alzheimers disease, especially in the hippocampus. However, Alzheimers disease may occur without neurofibrillary tangles in the neocortex. Plaques are dense, mostly insoluble deposits of beta-amyloid peptide and cellular material outside and around neurons. 
Tangles (neurofibrillary tangles) are aggregates of the microtubule-associated protein tau which has become hyperphosphorylated and accumulate inside the cells themselves. Although many older individuals develop some plaques and tangles as a consequence of aging, the brains of people with Alzheimers disease have a greater number of them in specific brain regions such as the temporal lobe. Lewy bodies are not rare in the brains of people with Alzheimers disease. Biochemistry Alzheimers disease has been identified as a protein misfolding disease, a proteopathy, caused by the accumulation of abnormally folded amyloid beta protein into amyloid plaques, and tau protein into neurofibrillary tangles in the brain. Plaques are made up of small peptides, 39–43 amino acids in length, called amyloid beta (Aβ). Amyloid beta is a fragment from the larger amyloid-beta precursor protein (APP) a transmembrane protein that penetrates the neurons membrane. APP is critical to neuron growth, survival, and post-injury repair. In Alzheimers disease, gamma secretase and beta secretase act together in a proteolytic process which causes APP to be divided into smaller fragments. One of these fragments gives rise to fibrils of amyloid beta, which then form clumps that deposit outside neurons in dense formations known as amyloid plaques.Alzheimers disease is also considered a tauopathy due to abnormal aggregation of the tau protein. Every neuron has a cytoskeleton, an internal support structure partly made up of structures called microtubules. These microtubules act like tracks, guiding nutrients and molecules from the body of the cell to the ends of the axon and back. A protein called tau stabilises the microtubules when phosphorylated, and is therefore called a microtubule-associated protein. In Alzheimers disease, tau undergoes chemical changes, becoming hyperphosphorylated; it then begins to pair with other threads, creating neurofibrillary tangles and disintegrating the neurons transport system. Pathogenic tau can also cause neuronal death through transposable element dysregulation. Disease mechanism Exactly how disturbances of production and aggregation of the beta-amyloid peptide give rise to the pathology of Alzheimers disease is not known. The amyloid hypothesis traditionally points to the accumulation of beta-amyloid peptides as the central event triggering neuron degeneration. Accumulation of aggregated amyloid fibrils, which are believed to be the toxic form of the protein responsible for disrupting the cells calcium ion homeostasis, induces programmed cell death (apoptosis). It is also known that Aβ selectively builds up in the mitochondria in the cells of Alzheimers-affected brains, and it also inhibits certain enzyme functions and the utilisation of glucose by neurons.Various inflammatory processes and cytokines may also have a role in the pathology of Alzheimers disease. Inflammation is a general marker of tissue damage in any disease, and may be either secondary to tissue damage in Alzheimers disease or a marker of an immunological response. There is increasing evidence of a strong interaction between the neurons and the immunological mechanisms in the brain. Obesity and systemic inflammation may interfere with immunological processes which promote disease progression.Alterations in the distribution of different neurotrophic factors and in the expression of their receptors such as the brain-derived neurotrophic factor (BDNF) have been described in Alzheimers disease. 
Diagnosis Alzheimers disease can only be definitively diagnosed with autopsy findings; in the absence of autopsy, clinical diagnoses of AD are "possible" or "probable", based on other findings. Up to 23% of those clinically diagnosed with AD may be misdiagnosed and may have pathology suggestive of another condition with symptoms that mimic those of AD.AD is usually clinically diagnosed based on the persons medical history, history from relatives, and behavioral observations. The presence of characteristic neurological and neuropsychological features and the absence of alternative conditions supports the diagnosis. Advanced medical imaging with computed tomography (CT) or magnetic resonance imaging (MRI), and with single-photon emission computed tomography (SPECT) or positron emission tomography (PET), can be used to help exclude other cerebral pathology or subtypes of dementia. Moreover, it may predict conversion from prodromal stages (mild cognitive impairment) to Alzheimers disease. FDA-approved radiopharmaceutical diagnostic agents used in PET for Alzheimers disease are florbetapir (2012), flutemetamol (2013), florbetaben (2014), and flortaucipir (2020). Because many insurance companies in the United States do not cover this procedure, its use in clinical practice is largely limited to clinical trials as of 2018.Assessment of intellectual functioning including memory testing can further characterise the state of the disease. Medical organizations have created diagnostic criteria to ease and standardise the diagnostic process for practising physicians. Definitive diagnosis can only be confirmed with post-mortem evaluations when brain material is available and can be examined histologically for senile plaques and neurofibrillary tangles. Criteria There are three sets of criteria for the clinical diagnoses of the spectrum of Alzheimers disease: the 2013 fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5); the National Institute on Aging-Alzheimers Association (NIA-AA) definition as revised in 2011; and the International Working Group criteria as revised in 2010. Three broad time periods, which can span decades, define the progression of Alzheimers disease from the preclinical phase, to mild cognitive impairment (MCI), followed by Alzheimers disease dementia.Eight intellectual domains are most commonly impaired in AD—memory, language, perceptual skills, attention, motor skills, orientation, problem solving and executive functional abilities, as listed in the fourth text revision of the DSM (DSM-IV-TR).The DSM-5 defines criteria for probable or possible Alzheimers for both major and mild neurocognitive disorder. Major or mild neurocognitive disorder must be present along with at least one cognitive deficit for a diagnosis of either probable or possible AD. For major neurocognitive disorder due to Alzheimers disease, probable Alzheimers disease can be diagnosed if the individual has genetic evidence of Alzheimers or if two or more acquired cognitive deficits, and a functional disability that is not from another disorder, are present. Otherwise, possible Alzheimers disease can be diagnosed as the diagnosis follows an atypical route. 
For mild neurocognitive disorder due to Alzheimers, probable Alzheimers disease can be diagnosed if there is genetic evidence, whereas possible Alzheimers disease can be met if all of the following are present: no genetic evidence, decline in both learning and memory, two or more cognitive deficits, and a functional disability not from another disorder.The NIA-AA criteria are used mainly in research rather than in clinical assessments. They define Alzheimers disease through three major stages: preclinical, mild cognitive impairment (MCI), and Alzheimers dementia. Diagnosis in the preclinical stage is complex and focuses on asymptomatic individuals; the latter two stages describe individuals experiencing symptoms. The core clinical criteria for MCI is used along with identification of biomarkers, predominantly those for neuronal injury (mainly tau-related) and amyloid beta deposition. The core clinical criteria itself rests on the presence of cognitive impairment without the presence of comorbidities. The third stage is divided into probable and possible Alzheimers disease dementia. In probable Alzheimers disease dementia there is steady impairment of cognition over time and a memory-related or non-memory-related cognitive dysfunction. In possible Alzheimers disease dementia, another causal disease such as cerebrovascular disease is present. Techniques Neuropsychological tests including cognitive tests such as the Mini–Mental State Examination (MMSE), the Montreal Cognitive Assessment (MoCA) and the Mini-Cog are widely used to aid in diagnosis of the cognitive impairments in AD. These tests may not always be accurate, as they lack sensitivity to mild cognitive impairment, and can be biased by language or attention problems; more comprehensive test arrays are necessary for high reliability of results, particularly in the earliest stages of the disease.Further neurological examinations are crucial in the differential diagnosis of Alzheimers disease and other diseases. Interviews with family members are used in assessment; caregivers can supply important information on daily living abilities and on the decrease in the persons mental function. A caregivers viewpoint is particularly important, since a person with Alzheimers disease is commonly unaware of their deficits. Many times, families have difficulties in the detection of initial dementia symptoms and may not communicate accurate information to a physician.Supplemental testing can rule out other potentially treatable diagnoses and help avoid misdiagnoses. Common supplemental tests include blood tests, thyroid function tests, as well as tests to assess vitamin B12 levels, rule out neurosyphilis and rule out metabolic problems (including tests for kidney function, electrolyte levels and for diabetes). MRI or CT scans might also be used to rule out other potential causes of the symptoms – including tumors or strokes. Delirium and depression can be common among individuals and are important to rule out.Psychological tests for depression are used, since depression can either be concurrent with Alzheimers disease (see Depression of Alzheimer disease), an early sign of cognitive impairment, or even the cause.Due to low accuracy, the C-PIB-PET scan is not recommended as an early diagnostic tool or for predicting the development of Alzheimers disease when people show signs of mild cognitive impairment (MCI). The use of 18F-FDG PET scans, as a single test, to identify people who may develop Alzheimers disease is not supported by evidence. 
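The DSM-5 decision logic summarized in the Criteria section above can be restated as a short illustrative sketch. The following Python function is only a rough paraphrase of the criteria as described in this article: the function name and parameters are hypothetical, the clinical judgments are reduced to booleans and a deficit count, and it is in no way a diagnostic tool.

def dsm5_alzheimers_label(severity, genetic_evidence, cognitive_deficits,
                          decline_in_learning_and_memory,
                          functional_disability_not_otherwise_explained):
    # Simplified paraphrase of the DSM-5 criteria described above; illustrative only.
    if severity == "major":
        if genetic_evidence or (cognitive_deficits >= 2
                                and functional_disability_not_otherwise_explained):
            return "probable Alzheimers disease"
        return "possible Alzheimers disease"
    if severity == "mild":
        if genetic_evidence:
            return "probable Alzheimers disease"
        if (cognitive_deficits >= 2 and decline_in_learning_and_memory
                and functional_disability_not_otherwise_explained):
            return "possible Alzheimers disease"
        return "criteria not met"
    raise ValueError("severity must be 'major' or 'mild'")

# Example: major neurocognitive disorder, no genetic evidence, two acquired deficits,
# and a functional disability not explained by another disorder.
print(dsm5_alzheimers_label("major", False, 2, True, True))  # probable Alzheimers disease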
Prevention There are no disease-modifying treatments available to cure Alzheimers disease and because of this, AD research has focused on interventions to prevent the onset and progression. There is no evidence that supports any particular measure in preventing Alzheimers, and studies of measures to prevent the onset or progression have produced inconsistent results. Epidemiological studies have proposed relationships between an individuals likelihood of developing AD and modifiable factors, such as medications, lifestyle, and diet. There are some challenges in determining whether interventions for Alzheimers disease act as a primary prevention method, preventing the disease itself, or a secondary prevention method, identifying the early stages of the disease. These challenges include duration of intervention, different stages of disease at which intervention begins, and lack of standardization of inclusion criteria regarding biomarkers specific for Alzheimers disease. Further research is needed to determine factors that can help prevent Alzheimers disease. Medication Cardiovascular risk factors, such as hypercholesterolaemia, hypertension, diabetes, and smoking, are associated with a higher risk of onset and worsened course of AD. The use of statins to lower cholesterol may be of benefit in Alzheimers. Antihypertensive and antidiabetic medications in individuals without overt cognitive impairment may decrease the risk of dementia by influencing cerebrovascular pathology. More research is needed to examine the relationship with Alzheimers disease specifically; clarification of the direct role medications play versus other concurrent lifestyle changes (diet, exercise, smoking) is needed.Depression is associated with an increased risk for Alzheimers disease; management with antidepressants may provide a preventative measure.Historically, long-term usage of non-steroidal anti-inflammatory drugs (NSAIDs) were thought to be associated with a reduced likelihood of developing Alzheimers disease as it reduces inflammation; however, NSAIDs do not appear to be useful as a treatment. Additionally, because women have a higher incidence of Alzheimers disease than men, it was once thought that estrogen deficiency during menopause was a risk factor. However, there is a lack of evidence to show that hormone replacement therapy (HRT) in menopause decreases risk of cognitive decline. Lifestyle Certain lifestyle activities, such as physical and
cognitive exercises, higher education and occupational attainment, cigarette smoking, stress, sleep, and the management of other comorbidities, including diabetes and hypertension, may affect the risk of developing Alzheimers. Physical exercise is associated with a decreased rate of dementia, and is effective in reducing symptom severity in those with AD. Memory and cognitive functions can be improved with aerobic exercises, including brisk walking three times weekly for forty minutes. Exercise may also induce neuroplasticity of the brain. Participating in mental exercises, such as reading, crossword puzzles, and chess, has shown potential to be preventative. Meeting the WHO recommendations for physical activity is associated with a lower risk of AD. Higher education and occupational attainment, and participation in leisure activities, contribute to a reduced risk of developing Alzheimers, or to delaying the onset of symptoms. This is compatible with the cognitive reserve theory, which states that some life experiences result in more efficient neural functioning, providing the individual with a cognitive reserve that delays the onset of dementia manifestations. Education delays the onset of Alzheimers disease syndrome without changing the duration of the disease. Cessation of smoking may reduce the risk of developing Alzheimers disease, specifically in those who carry the APOE ɛ4 allele. The increased oxidative stress caused by smoking results in downstream inflammatory or neurodegenerative processes that may increase the risk of developing AD. Avoidance of smoking, counseling and pharmacotherapies to quit smoking are used, and avoidance of environmental tobacco smoke is recommended. Alzheimers disease is associated with sleep disorders, but the precise relationship is unclear. It was once thought that as people get older, the risks of developing sleep disorders and AD independently increase, but research is examining whether sleep disorders may increase the prevalence of AD. One theory is that the mechanisms that increase clearance of toxic substances, including Aβ, are active during sleep. With decreased sleep, Aβ production increases and Aβ clearance decreases, resulting in Aβ accumulation. Receiving adequate sleep (approximately 7–8 hours) every night has become a potential lifestyle intervention to prevent the development of AD. Stress is a risk factor for the development of Alzheimers. The mechanism by which stress predisposes someone to the development of Alzheimers is unclear, but it is suggested that lifetime stressors may affect a persons epigenome, leading to an overexpression or underexpression of specific genes. Although the relationship of stress and Alzheimers is unclear, strategies to reduce stress and relax the mind may be helpful in preventing the progression of Alzheimers disease. Meditation, for instance, is a helpful lifestyle change to support cognition and well-being, though further research is needed to assess long-term effects. Management There is no cure for Alzheimers disease; available treatments offer relatively small symptomatic benefits but remain palliative in nature. Treatments can be divided into pharmaceutical, psychosocial, and caregiving. Pharmaceutical Medications used to treat the cognitive problems of Alzheimers disease include four acetylcholinesterase inhibitors (tacrine, rivastigmine, galantamine, and donepezil) and memantine, an NMDA receptor antagonist. 
The acetylcholinesterase inhibitors are intended for those with mild to severe Alzheimers, whereas memantine is intended for those with moderate or severe Alzheimers disease. The benefit from their use is small.Reduction in the activity of the cholinergic neurons is a well-known feature of Alzheimers disease. Acetylcholinesterase inhibitors are employed to reduce the rate at which acetylcholine (ACh) is broken down, thereby increasing the concentration of ACh in the brain and combating the loss of ACh caused by the death of cholinergic neurons. There is evidence for the efficacy of these medications in mild to moderate Alzheimers disease, and some evidence for their use in the advanced stage. The use of these drugs in mild cognitive impairment has not shown any effect in a delay of the onset of Alzheimers disease. The most common side effects are nausea and vomiting, both of which are linked to cholinergic excess. These side effects arise in approximately 10–20% of users, are mild to moderate in severity, and can be managed by slowly adjusting medication doses. Less common secondary effects include muscle cramps, decreased heart rate (bradycardia), decreased appetite and weight, and increased gastric acid production.Glutamate is an excitatory neurotransmitter of the nervous system, although excessive amounts in the brain can lead to cell death through a process called excitotoxicity which consists of the overstimulation of glutamate receptors. Excitotoxicity occurs not only in Alzheimers disease, but also in other neurological diseases such as Parkinsons disease and multiple sclerosis. Memantine is a noncompetitive NMDA receptor antagonist first used as an anti-influenza agent. It acts on the glutamatergic system by blocking NMDA receptors and inhibiting their overstimulation by glutamate. Memantine has been shown to have a small benefit in the treatment of moderate to severe Alzheimers disease. Reported adverse events with memantine are infrequent and mild, including hallucinations, confusion, dizziness, headache and fatigue. The combination of memantine and donepezil has been shown to be "of statistically significant but clinically marginal effectiveness".An extract of Ginkgo biloba known as EGb 761 has been used for treating Alzheimers and other neuropsychiatric disorders. Its use is approved throughout Europe. The World Federation of Biological Psychiatry guidelines lists EGb 761 with the same weight of evidence (level B) given to acetylcholinesterase inhibitors and memantine. EGb 761 is the only one that showed improvement of symptoms in both Alzheimers disease and vascular dementia. EGb 761 may have a role either on its own or as an add-on if other therapies prove ineffective. A 2016 review concluded that the quality of evidence from clinical trials on Ginkgo biloba has been insufficient to warrant its use for treating Alzheimers disease.Atypical antipsychotics are modestly useful in reducing aggression and psychosis in people with Alzheimers disease, but their advantages are offset by serious adverse effects, such as stroke, movement difficulties or cognitive decline. When used in the long-term, they have been shown to associate with increased mortality. Stopping antipsychotic use in this group of people appears to be safe. 
Psychosocial Psychosocial interventions are used as an adjunct to pharmaceutical treatment and can be classified within behavior-, emotion-, cognition- or stimulation-oriented approaches.Behavioral interventions attempt to identify and reduce the antecedents and consequences of problem behaviors. This approach has not shown success in improving overall functioning, but can help to reduce some specific problem behaviors, such as incontinence. There is a lack of high quality data on the effectiveness of these techniques in other behavior problems such as wandering. Music therapy is effective in reducing behavioral and psychological symptoms.Emotion-oriented interventions include reminiscence therapy, validation therapy, supportive psychotherapy, sensory integration, also called snoezelen, and simulated presence therapy. A Cochrane review has found no evidence that this is effective. Reminiscence therapy (RT) involves the discussion of past experiences individually or in group, many times with the aid of photographs, household items, music and sound recordings, or other familiar items from the past. A 2018 review of the effectiveness of RT found that effects were inconsistent, small in size and of doubtful clinical significance, and varied by setting. Simulated presence therapy (SPT) is based on attachment theories and involves playing a recording with voices of the closest relatives of the person with Alzheimers disease. There is partial evidence indicating that SPT may reduce challenging behaviors.The aim of cognition-oriented treatments, which include reality orientation and cognitive retraining, is the reduction of cognitive deficits. Reality orientation consists of the presentation of information about time, place, or person to ease the understanding of the person about its surroundings and his or her place in them. On the other hand, cognitive retraining tries to improve impaired capacities by exercising mental abilities. Both have shown some efficacy improving cognitive capacities.Stimulation-oriented treatments include art, music and pet therapies, exercise, and any other kind of recreational activities. Stimulation has modest support for improving behavior, mood, and, to a lesser extent, function. Nevertheless, as important as these effects are, the main support for the use of stimulation therapies is the change in the persons routine. Caregiving Since Alzheimers has no cure and it gradually renders people incapable of tending to their own needs, caregiving is essentially the treatment and must be carefully managed over the course of the disease. During the early and moderate stages, modifications to the living environment and lifestyle can increase safety and reduce caretaker burden. Examples of such modifications are the adherence to simplified routines, the placing of safety locks, the labeling of household items to cue the person with the disease or the use of modified daily life objects. If eating becomes problematic, food will need to be prepared in smaller pieces or even puréed. When swallowing difficulties arise, the use of feeding tubes may be required. In such cases, the medical efficacy and ethics of continuing feeding is an important consideration of the caregivers and family members. 
The use of physical restraints is rarely indicated in any stage of the disease, although there are situations when they are necessary to prevent harm to the person with Alzheimers disease or their caregivers.During the final stages of the disease, treatment is centred on relieving discomfort until death, often with the help of hospice. Diet Diet may be a modifiable risk factor for the development of Alzheimers disease. The Mediterranean diet, and the DASH diet are both associated with less cognitive decline. A different approach has been to incorporate elements of both of these diets into one known as the MIND diet. Studies of individual dietary components, minerals and supplements are conflicting as to whether they prevent AD or cognitive decline. Prognosis The early stages of Alzheimers disease are difficult to diagnose. A definitive diagnosis is usually made once cognitive impairment compromises daily living activities, although the person may still be living independently. The symptoms will progress from mild cognitive problems, such as memory loss through increasing stages of cognitive and non-cognitive disturbances, eliminating any possibility of independent living, especially in the late stages of the disease.Life expectancy of people with Alzheimers disease is reduced. The normal life expectancy for 60 to 70 years old is 23 to 15 years; for 90 years old it is 4.5 years. Following Alzheimers disease diagnosis it ranges from 7 to 10 years for those in their 60s and early 70s (a loss of 13 to 8 years), to only about 3 years or less (a loss of 1.5 years) for those in their 90s.Fewer than 3% of people live more than fourteen years. Disease features significantly associated with reduced survival are an increased severity of cognitive impairment, decreased functional level, history of falls, and disturbances in the neurological examination. Other coincident diseases such as heart problems, diabetes, or history of alcohol abuse are also related with shortened survival. While the earlier the age at onset the higher the total survival years, life expectancy is particularly reduced when compared to the healthy population among those who are younger. Men have a less favourable survival prognosis than women.Pneumonia and dehydration are the most frequent immediate causes of death brought by Alzheimers disease, while cancer is a less frequent cause of death than in the general population. Epidemiology Two main measures are used in epidemiological studies: incidence and prevalence. Incidence is the number of new cases per unit of person-time at risk (usually number of new cases per thousand person-years); while prevalence is the total number of cases of the disease in the population at any given time. Regarding incidence, cohort longitudinal studies (studies where a disease-free population is followed over the years) provide rates between 10 and 15 per thousand person-years for all dementias and 5–8 for Alzheimers disease, which means that half of new dementia cases each year are Alzheimers disease. Advancing age is a primary risk factor for the disease and incidence rates are not equal for all ages: every 5 years after the age of 65, the risk of acquiring the disease approximately doubles, increasing from 3 to as much as 69 per thousand person years. Females with Alzheimers disease are more common than males, but this difference is likely due to womens longer life spans. When adjusted for age, both sexes are affected by Alzheimers at equal rates. 
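As an arithmetic illustration of the two measures defined above, the following Python sketch computes an incidence rate per thousand person-years and applies the rough "doubling every five years after age 65" pattern mentioned in this section. The cohort numbers are invented for demonstration only and are not taken from any study.

def incidence_per_thousand(new_cases, person_years):
    # New cases per 1,000 person-years at risk.
    return 1000 * new_cases / person_years

def prevalence_percent(existing_cases, population):
    # Total cases as a share of the population at a given time.
    return 100 * existing_cases / population

# Hypothetical cohort: 10,000 disease-free people each followed for 5 years,
# with 350 new Alzheimers diagnoses observed over that period.
print(incidence_per_thousand(350, 10000 * 5))   # 7.0, within the 5-8 range cited above

# Rough doubling every 5 years after 65, starting from 3 per thousand person-years;
# the text caps the reported range at about 69, so pure doubling overshoots at the oldest ages.
rate = 3.0
for age in range(65, 95, 5):
    print(age, round(rate, 1))
    rate *= 2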
In the United States, the risk of dying from Alzheimers disease in 2010 was 26% higher among the non-Hispanic white population than among the non-Hispanic black population, and the Hispanic population had a 30% lower risk than the non-Hispanic white population. However, much Alzheimers research remains to be done in minority groups, such as the African American and the Hispanic/Latino populations. Studies have shown that these groups are underrepresented in clinical trials and do not have the same risk of developing Alzheimers when carrying certain genetic risk factors (i.e. APOE4), compared to their caucasian counterparts.The prevalence of Alzheimers disease in populations is dependent upon factors including incidence and survival. Since the incidence of Alzheimers disease increases with age, prevalence depends on the mean age of the population for which prevalence is given. In the United States in 2020, Alzheimers dementia prevalence was estimated to be 5.3% for those in the 60–74 age group, with the rate increasing to 13.8% in the 74–84 group and to 34.6% in those greater than 85. Prevalence rates in some less developed regions around the globe are lower. As the incidence and prevalence are steadily increasing, the prevalence itself is projected to triple by 2050. As of 2020, 50 million people globally have AD, with this number expected to increase to 152 million by 2050. History The ancient Greek and Roman philosophers and physicians associated old age with increasing dementia. It was not until 1901 that German psychiatrist Alois Alzheimer identified the first case of what became known as Alzheimers disease, named after him, in a fifty-year-old woman he called Auguste D. He followed her case until she died in 1906 when he first reported publicly on it. During the next five years, eleven similar cases were reported in the medical literature, some of them already using the term Alzheimers disease. The disease was first described as a distinctive disease by Emil Kraepelin after suppressing some of the clinical (delusions and hallucinations) and pathological features (arteriosclerotic changes) contained in the original report of Auguste D. He included Alzheimers disease, also named presenile dementia by Kraepelin, as a subtype of senile dementia in the eighth edition of his Textbook of Psychiatry, published on 15 July, 1910.For most of the 20th century, the diagnosis of Alzheimers disease was reserved for individuals between the ages of 45 and 65 who developed symptoms of dementia. The terminology changed after 1977 when a conference on Alzheimers disease concluded that the clinical and pathological manifestations of presenile and senile dementia were almost identical, although the authors also added that this did not rule out the possibility that they had different causes. This eventually led to the diagnosis of Alzheimers disease independent of age. The term senile dementia of the Alzheimer type (SDAT) was used for a time to describe the condition in those over 65, with classical Alzheimers disease being used to describe those who were younger. 
Eventually, the term Alzheimers disease was formally adopted in medical nomenclature to describe individuals of all ages with a characteristic common symptom pattern, disease course, and neuropathology.The National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) and the Alzheimers Disease and Related Disorders Association (ADRDA, now known as the Alzheimers Association) established the most commonly used NINCDS-ADRDA Alzheimers Criteria for diagnosis in 1984, extensively updated in 2007. These criteria require that the presence of cognitive impairment, and a suspected dementia syndrome, be confirmed by neuropsychological testing for a clinical diagnosis of possible or probable Alzheimers disease. A histopathologic confirmation including a microscopic examination of brain tissue is required for a definitive diagnosis. Good statistical reliability and validity have been shown between the diagnostic criteria and definitive histopathological confirmation. Society and culture Social costs Dementia, and specifically Alzheimers disease, may be among the most costly diseases for societies worldwide. As populations age, these costs will probably increase and become an important social problem and economic burden. Costs associated with AD include direct and indirect medical costs, which vary between countries depending on social care for a person with AD. Direct costs include doctor visits, hospital care, medical treatments, nursing home care, specialized equipment, and household expenses. Indirect costs include the cost of informal care and the loss in productivity of informal caregivers.In the United States as of 2019, informal (family) care is estimated to constitute nearly three-fourths of caregiving for people with AD at a cost of US$234 billion per year and approximately 18.5 billion hours of care. The cost to society worldwide to care for individuals with AD is projected to increase nearly ten-fold, and reach about US$9.1 trillion by 2050.Costs for those with more severe dementia or behavioral disturbances are higher and are related to the additional caregiving time to provide physical care. Caregiving burden The role of the main caregiver is often taken by the spouse or a close relative. Alzheimers disease is known for placing a great burden on caregivers which includes social, psychological, physical, or economic aspects. Home care is usually preferred by people with Alzheimers disease and their families. This option also delays or eliminates the need for more professional and costly levels of care. Nevertheless, two-thirds of nursing home residents have dementias.Dementia caregivers are subject to high rates of physical and mental disorders. Factors associated with greater psychosocial problems of the primary caregivers include having an affected person at home, the carer being a spouse, demanding behaviors of the cared person such as depression, behavioral disturbances, hallucinations, sleep problems or walking disruptions and social isolation. Regarding economic problems, family caregivers often give up time from work to spend 47 hours per week on average with the person with Alzheimers disease, while the costs of caring for them are high. 
Direct and indirect costs of caring for somebody with Alzheimers average between $18,000 and $77,500 per year in the United States, depending on the study. Cognitive behavioral therapy and the teaching of coping strategies, either individually or in groups, have demonstrated their efficacy in improving caregivers psychological health. Media Alzheimers disease has been portrayed in films such as: Iris (2001), based on John Bayleys memoir of his wife Iris Murdoch; The Notebook (2004), based on Nicholas Sparks 1996 novel of the same name; A Moment to Remember (2004); Thanmathra (2005); Memories of Tomorrow (Ashita no Kioku) (2006), based on Hiroshi Ogiwaras novel of the same name; Away from Her (2006), based on Alice Munros short story The Bear Came over the Mountain; Still Alice (2014), about a Columbia University professor who has early onset Alzheimers disease, based on Lisa Genovas 2007 novel of the same name and featuring Julianne Moore in the title role. Documentaries on Alzheimers disease include Malcolm and Barbara: A Love Story (1999) and Malcolm and Barbara: Loves Farewell (2007), both featuring Malcolm Pointon. Alzheimers disease has also been portrayed in music by English musician the Caretaker in releases such as Persistent Repetition of Phrases (2008), An Empty Bliss Beyond This World (2011), and Everywhere at the End of Time (2016–2019). Paintings depicting the disorder include the late works by American artist William Utermohlen, who drew self-portraits from 1995 to 2000 as an experiment in showing his disease through art. Research directions Additional research on the lifestyle effect may provide insight into neuroimaging biomarkers and a better understanding of the mechanisms causing both Alzheimers disease and early-onset AD. Treatment and prevention There is ongoing research examining the role of specific medications in reducing the prevalence (primary prevention) and/or progression (secondary prevention) of Alzheimers disease. The medications investigated in research trials generally target Aβ plaques, inflammation, APOE, neurotransmitter receptors, neurogenesis, epigenetic regulators, growth factors and hormones. These studies have led to a better understanding of the disease, but none has identified a prevention strategy. Experimental models are commonly used by researchers in order to understand disease mechanisms as well as to develop and test novel therapeutics aimed at treating Alzheimers disease. Antibodies are being developed that may have the ability to alter the disease course by targeting amyloid beta, such as donanemab and aducanumab. Aducanumab was approved by the FDA in 2021, but its use and effectiveness remain unclear and controversial. Although it received FDA approval, aducanumab failed to show effectiveness in people who already had Alzheimers symptoms. 
Female infertility
Female infertility refers to infertility in women. It affects an estimated 48 million women, with the highest prevalence of infertility affecting women in South Asia, Sub-Saharan Africa, North Africa/Middle East, and Central/Eastern Europe and Central Asia. Infertility is caused by many sources, including nutrition, diseases, and other malformations of the uterus. Infertility affects women from around the world, and the cultural and social stigma surrounding it varies. Cause Causes or factors of female infertility can basically be classified regarding whether they are acquired or genetic, or strictly by location. Although factors of female infertility can be classified as either acquired or genetic, female infertility is usually more or less a combination of nature and nurture. Also, the presence of any single risk factor of female infertility (such as smoking, mentioned further below) does not necessarily cause infertility, and even if a woman is definitely infertile, the infertility cannot definitely be blamed on any single risk factor even if the risk factor is (or has been) present. Acquired According to the American Society for Reproductive Medicine (ASRM), age, smoking, sexually transmitted infections, and being overweight or underweight can all affect fertility.In broad sense, acquired factors practically include any factor that is not based on a genetic mutation, including any intrauterine exposure to toxins during fetal development, which may present as infertility many years later as an adult. Age A womans fertility is affected by her age. The average age of a girls first period (menarche) is 12–13 (12.5 years in the United States, 12.72 in Canada, 12.9 in the UK), but, in postmenarchal girls, about 80% of the cycles are anovulatory in the first year after menarche, 50% in the third and 10% in the sixth year. A womans fertility peaks in the early and mid 20s, after which it starts to decline, with this decline being accelerated after age 35. However, the exact estimates of the chances of a woman to conceive after a certain age are not clear, with research giving differing results. The chances of a couple to successfully conceive at an advanced age depend on many factors, including the general health of a woman and the fertility of the male partner. Menopause typically occurs between 44 and 58 years of age. DNA testing is rarely carried out to confirm claims of maternity at advanced ages, but in one large study, among 12,549 African and Middle Eastern immigrant mothers, confirmed by DNA testing, only two mothers were found to be older than fifty; the oldest mother being 52.1 years at conception (and the youngest mother 10.7 years old). Tobacco smoking Tobacco smoking is harmful to the ovaries, and the degree of damage is dependent upon the amount and length of time a woman smokes or is exposed to a smoke-filled environment. Nicotine and other harmful chemicals in cigarettes interfere with the bodys ability to create estrogen, a hormone that regulates folliculogenesis and ovulation. Also, cigarette smoking interferes with folliculogenesis, embryo transport, endometrial receptivity, endometrial angiogenesis, uterine blood flow and the uterine myometrium. Some damage is irreversible, but stopping smoking can prevent further damage. Smokers are 60% more likely to be infertile than non-smokers. Smoking reduces the chances of IVF producing a live birth by 34% and increases the risk of an IVF pregnancy miscarrying by 30%. 
Also, female smokers have an earlier onset of menopause by approximately 1–4 years. Sexually transmitted infections Sexually transmitted infections are a leading cause of infertility. They often display few, if any, visible symptoms, with the risk of failing to seek proper treatment in time to prevent decreased fertility. Body weight and eating disorders Twelve percent of all infertility cases are a result of a woman either being underweight or overweight. Fat cells produce estrogen, in addition to the primary sex organs. Too much body fat causes production of too much estrogen and the body begins to react as if it is on birth control, limiting the odds of getting pregnant. Too little body fat causes insufficient production of estrogen and disruption of the menstrual cycle. Both underweight and overweight women have irregular cycles in which ovulation does not occur or is inadequate. Proper nutrition in early life is also a major factor for later fertility. A study in the US indicated that approximately 20% of infertile women had a past or current eating disorder, which is five times higher than the general lifetime prevalence rate. A review from 2010 concluded that overweight and obese subfertile women have a reduced probability of successful fertility treatment and that their pregnancies are associated with more complications and higher costs. In hypothetical groups of 1,000 women undergoing fertility care, the study counted approximately 800 live births for normal weight and 690 live births for overweight and obese anovulatory women. For ovulatory women, the study counted approximately 700 live births for normal weight, 550 live births for overweight and 530 live births for obese women. The increases in cost per live birth in anovulatory overweight and obese women were, respectively, 54% and 100% higher than for their normal-weight counterparts; for ovulatory women they were 44% and 70% higher, respectively. Radiation Exposure to radiation poses a high risk of infertility, depending on the frequency, power, and exposure duration. Radiotherapy is reported to cause infertility; the amount of radiation absorbed by the ovaries determines whether a woman becomes infertile. High doses can destroy some or all of the eggs in the ovaries and might cause infertility or early menopause. Chemotherapy Chemotherapy poses a high risk of infertility. Chemotherapies with a high risk of infertility include procarbazine and other alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil and chlormethine. Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin, and antimetabolites such as methotrexate, mercaptopurine and 5-fluorouracil. Female infertility caused by chemotherapy appears to be secondary to premature ovarian failure through loss of primordial follicles. This loss is not necessarily a direct effect of the chemotherapeutic agents, but could be due to an increased rate of growth initiation to replace damaged developing follicles. Antral follicle count decreases after three series of chemotherapy, whereas follicle stimulating hormone (FSH) reaches menopausal levels after four series. 
Other hormonal changes in chemotherapy include decreases in inhibin B and anti-Müllerian hormone levels. Women may choose between several methods of fertility preservation prior to chemotherapy, including cryopreservation of ovarian tissue, oocytes or embryos. Immune infertility Antisperm antibodies (ASA) have been considered a cause of infertility in around 10–30% of infertile couples. ASAs are directed against surface antigens on sperm and can interfere with sperm motility and transport through the female reproductive tract, inhibit capacitation and the acrosome reaction, impair fertilization, influence the implantation process, and impair growth and development of the embryo. Factors contributing to the formation of antisperm antibodies in women are disturbance of normal immunoregulatory mechanisms, infection, violation of the integrity of the mucous membranes, rape and unprotected oral or anal sex. Other acquired factors Adhesions secondary to surgery in the peritoneal cavity are the leading cause of acquired infertility. A meta-analysis in 2012 came to the conclusion that there is only limited evidence for the surgical principle that using less invasive techniques, introducing fewer foreign bodies or causing less ischemia reduces the extent and severity of adhesions. Diabetes mellitus. A review of type 1 diabetes found that, despite modern treatment, women with diabetes are at increased risk of female infertility, as reflected by delayed puberty and menarche, menstrual irregularities (especially oligomenorrhoea), mild hyperandrogenism, polycystic ovarian syndrome, fewer liveborn children and possibly earlier menopause. Animal models indicate that abnormalities at the molecular level caused by diabetes include defective leptin, insulin and kisspeptin signalling. Coeliac disease. Non-gastrointestinal symptoms of coeliac disease may include disorders of fertility, such as delayed menarche, amenorrhoea, infertility or early menopause; and pregnancy complications, such as intrauterine growth restriction (IUGR), small for gestational age (SGA) babies, recurrent abortions, preterm deliveries or low birth weight (LBW) babies. Nevertheless, a gluten-free diet reduces the risk. Some authors suggest that physicians should investigate the presence of undiagnosed coeliac disease in women with unexplained infertility, recurrent miscarriage or IUGR. Significant liver or kidney disease Thrombophilia Cannabis (marijuana) smoking causes disturbances in the endocannabinoid system, potentially causing infertility Radiation, such as in radiation therapy. The radiation dose to the ovaries that generally causes permanent female infertility is 20.3 Gy at birth, 18.4 Gy at 10 years, 16.5 Gy at 20 years and 14.3 Gy at 30 years. After total body irradiation, recovery of gonadal function occurs in 10–14% of cases, and the number of pregnancies observed after hematopoietic stem cell transplantation involving such a procedure is lower than 2%. Genetic factors There are many genes in which mutation causes female infertility. Also, there are additional conditions involving female infertility which are believed to be genetic but where no single gene has been found to be responsible, notably Mayer-Rokitansky-Küster-Hauser syndrome (MRKH). Finally, an unknown number of genetic mutations cause a state of subfertility, which in addition to other factors such as environmental ones may manifest as frank infertility. 
Chromosomal abnormalities causing female infertility include Turner syndrome. Oocyte donation is an alternative for patients with Turner syndrome. Some of these gene or chromosome abnormalities cause intersex conditions, such as androgen insensitivity syndrome. By location Hypothalamic-pituitary factors Hypothalamic dysfunction Hyperprolactinemia Ovarian factors Chemotherapy (as detailed previously) with certain agents has a high risk of toxicity to the ovaries. Many genetic defects (as also detailed previously) also disturb ovarian function. Polycystic ovary syndrome (also see infertility in polycystic ovary syndrome) Anovulation. Female infertility caused by anovulation is called "anovulatory infertility", as opposed to "ovulatory infertility" in which ovulation is present. Diminished ovarian reserve, also see Poor Ovarian Reserve Premature menopause Menopause Luteal dysfunction Gonadal dysgenesis (Turner syndrome) Tubal (ectopic)/peritoneal factors Endometriosis (also see endometriosis and infertility) Pelvic adhesions Pelvic inflammatory disease (PID, usually due to chlamydia) Tubal dysfunction Previous ectopic pregnancy. A randomized study in 2013 found that the rates of intrauterine pregnancy two years after treatment of ectopic pregnancy are approximately 64% with radical surgery, 67% with medication, and 70% with conservative surgery. In comparison, the cumulative pregnancy rate of women under 40 years of age in the general population over two years is over 90%. Hydrosalpinx is the most frequent of these; it occurs when fluid collects in the tubes. There are several ways to test for it: hysterosalpingography, in which both the uterus (hystero-) and the tubes can be seen, and hysterosonosalpingography, in which only the uterus is seen. These tests are used to check whether the tubes are patent or whether there is any obstacle in the path to the uterus. A liquid contrast is introduced via the vagina and its path is checked by X-ray. If a tube is blocked, the contrast is retained in the tube; if it is not blocked, the contrast ends up in the abdominal cavity. The flow of the contrast requires peristaltic movements. This blockage can be produced by sexually transmitted diseases, previous surgery, peritonitis or endometriosis. Uterine factors Uterine malformations Uterine fibroids Ashermans syndrome Implantation failure without any known primary cause. It results in a negative pregnancy test despite, for example, embryo transfer having been performed. Previously, a bicornuate uterus was thought to be associated with infertility, but recent studies have not confirmed such an association. Cervical factors Cervical stenosis Antisperm antibodies Non-receptive cervical mucus Vaginal factors Vaginismus Vaginal obstruction Interrupted meiosis Meiosis, a special type of cell division specific to germ cells, produces egg cells in women. During meiosis, accurate segregation of chromosomes must occur during two rounds of division to create, upon fertilisation, a zygote with a proper diploid (euploid) set of chromosomes. About half of all spontaneous abortions are aneuploid, that is, have an improper set of chromosomes. Human genetic variants that likely cause dysregulation of critical meiotic processes have been identified in 14 female infertility-associated genes. A major cause of female infertility is premature ovarian insufficiency. This insufficiency is a heterogeneous disease that affects about 1% of women who are under age 40. 
Some instances of female infertility are caused by DNA repair dysregulation during meiosis. Diagnosis Diagnosis of infertility begins with a medical history and physical exam. The healthcare provider may order tests, including the following: Lab tests Hormone testing, to measure levels of female hormones at certain times during a menstrual cycle. Day 2 or 3 measure of FSH and estrogen, to assess ovarian reserve. Measurements of thyroid function (a thyroid stimulating hormone (TSH) level of between 1 and 2 is considered optimal for conception). Measurement of progesterone in the second half of the cycle to help confirm ovulation. Anti-Müllerian hormone to estimate ovarian reserve. Examination and imaging An endometrial biopsy, to verify ovulation and inspect the lining of the uterus. Laparoscopy, which allows the provider to inspect the pelvic organs. Fertiloscopy, a relatively new surgical technique used for early diagnosis (and immediate treatment). Pap smear, to check for signs of infection. Pelvic exam, to look for abnormalities or infection. A postcoital test, which is done soon after intercourse to check for problems with sperm surviving in cervical mucous (not commonly used now because of test unreliability). Hysterosalpingography or sonosalpingography, to check for tube patency Sonohysterography to check for uterine abnormalities.There are genetic testing techniques under development to detect any mutation in genes associated with female infertility.Initial diagnosis and treatment of infertility is usually made by obstetrician/gynecologists or womens health nurse practitioners. If initial treatments are unsuccessful, referral is usually made to physicians who are fellowship trained as reproductive endocrinologists. Reproductive endocrinologists are usually obstetrician/gynecologists with advanced training in reproductive endocrinology and infertility (in North America). These physicians treat reproductive disorders affecting not only women but also men, children, and teens. Usually reproductive endocrinology & infertility medical practices do not see women for general maternity care. The practice is primarily focused on helping their women to conceive and to correct any issues related to recurring pregnancy loss. Definition There is no unanimous definition of female infertility, because the definition depends on social and physical characteristics which may vary by culture and situation. NICE guidelines state that: "A woman of reproductive age who has not conceived after 1 year of unprotected vaginal sexual intercourse, in the absence of any known cause of infertility, should be offered further clinical assessment and investigation along with her partner." It is recommended that a consultation with a fertility specialist should be made earlier if the woman is aged 36 years or over, or there is a known clinical cause of infertility or a history of predisposing factors for infertility. According to the World Health Organization (WHO), infertility can be described as the inability to become pregnant, maintain a pregnancy, or carry a pregnancy to live birth. A clinical definition of infertility by the WHO and ICMART is "a disease of the reproductive system defined by the failure to achieve a clinical pregnancy after 12 months or more of regular unprotected sexual intercourse." Infertility can further be broken down into primary and secondary infertility. 
Primary infertility refers to the inability to give birth, either because of not being able to become pregnant or not being able to carry a child to live birth, which may include miscarriage or a stillborn child. Secondary infertility refers to the inability to conceive or give birth when there was a previous pregnancy or live birth. Prevention Acquired female infertility may be prevented through identified interventions: Maintaining a healthy lifestyle. Excessive exercise, consumption of caffeine and alcohol, and smoking have all been associated with decreased fertility. Eating a well-balanced, nutritious diet, with plenty of fresh fruits and vegetables, and maintaining a normal weight, on the other hand, have been associated with better fertility prospects. Treating or preventing existing diseases. Identifying and controlling chronic diseases such as diabetes and hypothyroidism increases fertility prospects. Lifelong practice of safer sex reduces the likelihood that sexually transmitted diseases will impair fertility; obtaining prompt treatment for sexually transmitted diseases reduces the likelihood that such infections will do significant damage. Regular physical examinations (including pap smears) help detect early signs of infections or abnormalities. Not delaying parenthood. Fertility does not ultimately cease before menopause, but it starts declining after age 27 and drops at a somewhat greater rate after age 35. Women whose biological mothers had unusual or abnormal issues related to conceiving may be at particular risk for some conditions, such as premature menopause, that can be mitigated by not delaying parenthood. Egg freezing. A woman can freeze her eggs to preserve her fertility. By using egg freezing while in the peak reproductive years, a womans oocytes are cryogenically frozen and ready for her use later in life, reducing her chances of female infertility. Treatment There is no method to reverse advanced maternal age, but there are assisted reproductive technologies for many causes of infertility in pre-menopausal women, including: Ovulation induction for anovulation In vitro fertilization, for example in tubal abnormalities Epidemiology Female infertility varies widely by geographic location around the world. In 2010, there were an estimated 48.5 million infertile couples worldwide, and from 1990 to 2010 there was little change in levels of infertility in most of the world. In 2010, the countries with the lowest rates of female infertility included the South American countries of Peru, Ecuador and Bolivia, as well as Poland, Kenya, and the Republic of Korea. The regions with the highest rates included Eastern Europe, North Africa, the Middle East, Oceania, and Sub-Saharan Africa. The prevalence of primary infertility has increased since 1990, but secondary infertility has decreased overall. Rates (although not prevalence) of female infertility decreased in high-income, Central/Eastern European, and Central Asian regions. Female infertility is prevalent across the globe. In 2013, the estimated prevalence of female infertility ranged from roughly 3% to 7% depending on the part of the globe being followed. [78] Africa Sub-Saharan Africa has had decreasing levels of primary infertility from 1990 to 2010. Within the Sub-Saharan region, rates were lowest in Kenya, Zimbabwe, and Rwanda, while the highest rates were in Guinea, Mozambique, Angola, Gabon, and Cameroon, along with Northern Africa near the Middle East. 
According to a 2004 DHS report, rates in Africa were highest in Middle and Sub-Saharan Africa, with East Africa's rates close behind. Asia In Asia, the highest rates of combined secondary and primary infertility were in the South Central region, followed by the Southeast region, with the lowest rates in the Western areas. Latin America and Caribbean The prevalence of female infertility in the Latin America/Caribbean region is typically lower than the global prevalence. However, the greatest rates occurred in Jamaica, Suriname, Haiti, and Trinidad and Tobago. Central and Western Latin America have some of the lowest rates of prevalence. The highest rates in Latin America and the Caribbean were in the Caribbean islands and in less developed countries. Society and culture Social stigma Social stigma due to infertility is seen in many cultures throughout the world in varying forms. Often, when women cannot conceive, the blame is put on them, even though approximately 50% of infertility issues come from the man. In addition, many societies tend to value a woman only if she is able to produce at least one child, and a marriage can be considered a failure when the couple cannot conceive. The act of conceiving a child can be linked to the couple's consummation of marriage, and reflect their social role in society. This is seen in the "African infertility belt", a region of high infertility prevalence spanning from Tanzania in the east to Gabon in the west. In this region, infertility is highly stigmatized and can be considered a failure of the couple in the eyes of their societies. This is demonstrated in Uganda and Nigeria, where there is great pressure put on childbearing and its social implications. This is also seen in some Muslim societies, including Egypt and Pakistan. In the United States, as elsewhere in the world, infertility, and women's infertility at large, is an invisible yet debilitating disease that is stigmatized and looked down upon. In recent years, however, many have begun to sue organizations for infertility insurance coverage, as the Americans with Disabilities Act (ADA) has recognized infertility as a disability. This, however, adds another layer of stigma for women with infertility, as the word disability carries a negative connotation in various world societies. [77] Wealth is sometimes measured by the number of children a woman has, as well as inheritance of property. Children can influence financial security in many ways. In Nigeria and Cameroon, land claims are decided by the number of children. Also, in some Sub-Saharan countries, women may be denied inheritance if they have not borne any children. In some African and Asian countries, a husband can deprive his infertile wife of food, shelter and other basic necessities like clothing. In Cameroon, a woman may lose access to land from her husband and be left on her own in old age. In many cases, a woman who cannot bear children is excluded from social and cultural events, including traditional ceremonies. This stigmatization is seen in Mozambique and Nigeria, where infertile women have been treated as outcasts from society. This is a humiliating practice which devalues infertile women in society.
In the Makua tradition, pregnancy and birth are considered major life events for a woman, marked by the ceremonies of nthaara and nthaara no mwana, which can only be attended by women who have been pregnant and have had a baby. Infertility can lead to social shaming arising from internal and social norms surrounding pregnancy, which affects women around the world. When pregnancy is considered such an important event in life and infertility is treated as a "socially unacceptable condition", it can lead to a search for treatment in the form of traditional healers and expensive Western treatments. The limited access to treatment in many areas can lead to extreme and sometimes illegal acts in order to produce a child. Marital role Men in some countries may find another wife when their first cannot produce a child, hoping that by sleeping with more women they will be able to produce a child of their own. This can be prevalent in some societies, including Cameroon, Nigeria, Mozambique, Egypt, Botswana, and Bangladesh, among many more where polygamy is more common and more socially acceptable. In couples that are unsuccessful in conceiving, divorce rates are roughly 3.5 times higher than those of couples who are fertile; this figure is based on couples with female infertility. [78] In some cultures, including Botswana and Nigeria, a woman may select another woman with whom she allows her husband to sleep, in hopes of conceiving a child. Women who are desperate for children may compromise with their husbands to select such a woman and accept the duties of taking care of the children in order to feel accepted and useful in society. Women may also sleep with other men in hopes of becoming pregnant. This can be done for many reasons, including advice from a traditional healer, or to find out whether another man was "more compatible". In many cases, the husband was not aware of the extra sexual relations and would not be informed if a woman became pregnant by another man. This is not as culturally acceptable, however, and can contribute to the gendered suffering of women, who have fewer options for becoming pregnant on their own than men do. Men and women can also turn to divorce in an attempt to find a new partner with whom to bear a child. Infertility in many cultures is a reason for divorce, and a way for a man or woman to increase his/her chances of producing an heir. When a woman is divorced, she can lose the security that often comes with land, wealth, and a family. Infertility can thus ruin marriages and lead to distrust within the marriage. The increase in sexual partners can potentially result in the spread of disease, including HIV/AIDS, and can contribute to infertility in future generations. Domestic abuse The emotional strain and stress that come with infertility in the household can lead to the mistreatment and domestic abuse of a woman. The devaluation of a wife due to her inability to conceive can lead to domestic abuse and emotional trauma such as victim blaming. Women are often blamed as the cause of a couple's infertility, which can lead to emotional abuse, anxiety, and shame. In addition, blame for not being able to conceive is often put on the female, even if it is the man who is infertile. Women who are not able to conceive may be starved, beaten, and neglected financially by their husbands, as if they had no childbearing use to them. The physical abuse related to infertility may result from this and from the emotional stress that comes with it.
In some countries, the emotional and physical abuses that come with infertility can potentially lead to assault, murder, and suicide. Mental and psychological impact Many infertile women must cope with immense stress and the social stigma attached to their condition, which can lead to considerable mental distress. The long-term stress involved in attempting to conceive a child and the social pressures behind giving birth can lead to emotional distress that may manifest as mental disease. Women with infertility might deal with psychological stressors such as denial, anger, grief, guilt, and depression. There can be considerable social shaming that can lead to intense feelings of sadness and frustration that potentially contribute to depression and suicide. The implications of infertility bear huge consequences for the mental health of an infertile woman because of the social pressures and personal grief behind being unable to bear children. The range of psychological issues pertaining to infertility in women is vast and can include inferiority complex, stress in interpersonal relationships, and possibly major depression and/or anxiety. With the impacts of infertility on social life, cultural significance, and psychological factors, "infertility has been classified as one of the greatest stressors of life." [76] Emotional impact of infertility treatment Many women have reported finding treatment for infertility stressful and a cause of relationship difficulties with their partners. The fear of failure was the most important barrier to treatment. Females, in studied cases, typically experience more adverse effects of infertility and its treatments than males. [75] Psychological support is fundamental to limiting the possibility of dropping out of infertility treatment and to reducing the distress level, which is strongly associated with lower pregnancy rates. In addition, some medications used in treatment (in particular clomifene citrate) have several side effects which may be an important risk factor for the development of depression. See also Advanced maternal age Fertility Infertility Male infertility Oncofertility References [75] Raval, H.; Slade, P.; Buck, P.; Lieberman, B. E. (October 1987). "The impact of infertility on emotions and the marital and sexual relationship". Journal of Reproductive and Infant Psychology. 5 (4): 221–234. doi:10.1080/02646838708403497. ISSN 0264-6838. [76] Khan, Ambreen Rashid (March 2019). "Impact of Infertility on Mental Health of Women" (PDF). The International Journal of Indian Psychology. 7 (1): 804–809. doi:10.25215/0701.089. [77] Sternke, Elizabeth A.; Abrahamson, Kathleen (2015-03-01). "Perceptions of Women with Infertility on Stigma and Disability". Sexuality and Disability. 33 (1): 3–17. doi:10.1007/s11195-014-9348-6. ISSN 1573-6717. [78] Direkvand-Moghadam, Ashraf; Delpisheh, Ali; Khosravi, A. (2013-12-30). "Epidemiology of Female Infertility; A Review of Literature". Biosciences Biotechnology Research Asia. 10 (2): 559–567. doi:10.13005/bbra/1165. ISSN 0973-1245. == External links ==
Hymenolepiasis
Hymenolepiasis is infestation by one of two species of tapeworm: Hymenolepis nana or H. diminuta. Alternative names are dwarf tapeworm infection and rat tapeworm infection. The disease is a type of helminthiasis which is classified as a neglected tropical disease. Symptoms and signs Hymenolepiasis does not always cause symptoms, but when present they usually include abdominal pain, loss of appetite, itching around the anus, irritability, and diarrhea. However, in one study of 25 patients conducted in Peru, successful treatment of the infection made no significant difference to symptoms. Some authorities report that heavily infected cases are more likely to be symptomatic. Symptoms in humans are due to allergic responses or systemic toxaemia caused by waste products of the tapeworm. Light infections are usually symptomless, whereas infection with more than 2000 worms can cause enteritis, abdominal pain, diarrhea, loss of appetite, restlessness, irritability, restless sleep, and anal and nasal pruritus. Rare symptoms include increased appetite, vomiting, nausea, bloody diarrhea, hives, extremity pain, headache, dizziness, and behavioral disturbances. Occasionally, epileptic seizures occur in infected children. Examination of the stool for eggs and parasites confirms the diagnosis. The eggs and proglottids of H. nana are smaller than those of H. diminuta. Proglottids of both are relatively wide and have three testes. Identifying the parasites to the species level is often unnecessary from a medical perspective, as the treatment is the same for both. Complications Abdominal discomfort and, in case of prolonged diarrhea, dehydration are possible complications. In 2015, an unusual complication was noted in a man whose immune system had been compromised by HIV. He developed multiple tumors composed of malignant cell nests that had originated from a tapeworm in his intestines. Causes Hymenolepis worms live in the intestines of rats and are common in warm climates; they are generally found in the feces of rats, which are consumed by their secondary hosts, beetles. The worms mature into a life form referred to as a "cysticercoid" in the insect; in H. nana, the insect is always a beetle. Humans and other animals become infected when they intentionally or unintentionally eat material contaminated by insects. In an infected person, it is possible for the worm's entire lifecycle to be completed in the bowel, so infection can persist for years if left untreated. H. nana infections are much more common than H. diminuta infections in humans because, in addition to being spread by insects, the disease can be spread directly from person to person by eggs in feces. When this happens, H. nana oncosphere larvae encyst in the intestinal wall and develop into cysticercoids and then adults. These infections were previously common in the southeastern USA, and have been described in crowded environments and individuals confined to institutions. However, the disease occurs throughout the world. H. nana infections can grow worse over time because, unlike in most tapeworms, H. nana eggs can hatch and develop without ever leaving the definitive host. H. diminuta The risk of human infection from H. diminuta is very low, since its main host is the rat. Also known as the rat tapeworm, H. diminuta adults live and mate in the bowels of rats. Eggs of H. diminuta are excreted by the rats in droppings, which are frequently consumed by beetles. Once inside the beetle, the eggs mature into a cysticercoid. 
The juvenile tapeworms claw their way out of the beetle gut into the circulatory system by means of their three pairs of hooks. There, they wait for a rat to ingest the host beetle, in which they mature to adult form, lay eggs, and restart the entire cycle. Beetle manipulation H. diminuta has an effective mechanism for interspecies transfection. Beetles prefer to ingest rat droppings infected with tapeworm eggs, because of their odor. It is not known if the odor is produced specifically by the eggs or the droppings. H. diminuta also sterilizes its beetle host, if female. This is so the beetle does not waste energy on its reproductive system, allowing H. diminuta to further exploit the beetle's metabolic resources. H. nana H. nana is a tapeworm belonging to the class Cestoidea, phylum Platyhelminthes. It consists of a linear series of sets of reproductive organs of both sexes; each set is referred to as a genitalium and the area around it is a proglottid. New proglottids are continuously differentiated near the anterior end in a process called strobilation. Each segment moves toward the posterior end as a new one takes its place and, during the process, becomes sexually mature. The proglottid can copulate with itself, with others in the strobila, or with those in other worms. When the segment reaches the end of its strobila, it disintegrates en route, releasing eggs in a process called apolysis. Lifecycle H. nana is the only cestode capable of completing its lifecycle without an intermediate host. It can, however, pass through an intermediate host as well. The most common intermediate hosts for H. nana are arthropods (e.g. flour beetles). When an egg is ingested by the definitive host, it hatches and releases a six-hook larva called the oncosphere (hexacanth), which penetrates the villi of the small intestine and develops into a cysticercoid. Infection Transmission of H. nana occurs by the fecal-oral route. It also occurs by accidental ingestion of an insect containing the cysticercoid. Screening for activity against H. nana H. nana infection in mice is used for drug screening because: Human infection is easily maintained in mice. Its armed scolex is similar to that of other pathogenic tapeworms. It corresponds to other tapeworms in its sensitivity to standard anthelmintics. Method: Mature worms are collected from infected mice. Terminal gravid proglottids are removed, crushed under coverslips, and the eggs are removed. Eggs containing hooklets (mature eggs) are counted. A 0.2 ml dose of stock solution containing 1000 eggs/ml is given to each mouse. Adult worms develop in 15–17 days. The test drug is given orally; mice are necropsied on the third day after treatment. A standard drug is given for comparison. The intestines are examined under a dissecting microscope for worms or scolices. The response is measured by the number of mice cleared. Pathology H. nana lodges itself in the intestines and absorbs nutrients from the intestinal lumen. In human adults, the tapeworm is more of a nuisance than a health problem, but in small children, many H. nana worms can be dangerous. Usually, the larvae of this tapeworm cause the most problems in children; they burrow into the walls of the intestine, and if enough tapeworms are present in the child, severe damage can be inflicted. This is done by absorbing all the nutrients from the food the child eats. Usually, a single tapeworm will not cause health issues. H. nana rarely causes death, and then usually only in extreme circumstances, mostly in young children or in people who have weakened immune systems. 
In some parts of the world, individuals who are heavily infected are a result of internal autoinfection. Prevention Good hygiene, public health and sanitation programs, and elimination of infected rats help to prevent the spread of hymenolepiasis. Preventing fecal contamination of food and water in institutions and crowded areas is of primary importance. General sanitation and rodent and insect control (especially control of fleas and grain insects) are also essential for prevention of H. nana infection. Treatment The two drugs that have been well-described for the treatment of hymenolepiasis are praziquantel and niclosamide. Praziquantel, which is parasiticidal in a single dose for all the stages of the parasite, is the drug of choice because it acts very rapidly against H. nana. Although structurally unrelated to other anthelminthics, it kills both adult worms and larvae. In vitro, the drug produces vacuolization and disruption of the tegument in the neck of the worms, but not in more posterior portions of the strobila. Praziquantel is well absorbed when taken orally, and it undergoes first-pass metabolism and 80% of the dose is excreted as metabolites in urine within 24 hours. Repeated treatment is required for H. nana at an interval of 7–10 days.Praziquantel as a single dose (25 mg/kg) is the current treatment of choice for hymenolepiasis and has an efficacy of 96%. Single-dose albendazole (400 mg) is also very efficacious (>95%).A three-day course of nitazoxanide is 75–93% efficacious. The dose is 1 g daily for adults and children over 12; 400 mg daily for children aged 4 to 11 years; and 200 mg daily for children aged 3 years or younger. Prognosis Cure rates are extremely good with modern treatments, but successful cure results may be of no symptomatic benefit to patients. See also List of parasites (human) References Further reading MedlinePlus Encyclopedia: Hymenolepiasis "Hymenolepiasis". CDC - DPDx. 2013-11-29. Retrieved 2015-11-01. == External links ==
Adie syndrome
Adie syndrome, also known as Holmes–Adie syndrome, is a neurological disorder characterized by a tonically dilated pupil that reacts slowly to light but shows a more definite response to accommodation (i.e., light-near dissociation). It is frequently seen in females with absent knee or ankle jerks and impaired sweating. The syndrome is caused by damage to the postganglionic fibers of the parasympathetic innervation of the eye, usually by a viral or bacterial infection that causes inflammation, and affects the pupil of the eye and the autonomic nervous system. It is named after the British neurologists William John Adie and Gordon Morgan Holmes, who independently described the same disease in 1931. Signs and symptoms Adie syndrome presents with three hallmark symptoms, namely at least one abnormally dilated pupil (mydriasis) which does not constrict in response to light, loss of deep tendon reflexes, and abnormalities of sweating. Other signs may include hyperopia due to accommodative paresis, photophobia and difficulty reading. Some individuals with Adie syndrome may also have cardiovascular abnormalities. Pathophysiology Pupillary symptoms of Holmes–Adie syndrome are thought to be the result of a viral or bacterial infection that causes inflammation and damage to neurons in the ciliary ganglion, located in the posterior orbit, which provides parasympathetic control of pupil constriction. Additionally, patients with Holmes–Adie syndrome can also experience problems with autonomic control of the body. This second set of symptoms is caused by damage to the dorsal root ganglia of the spinal cord. An Adie's pupil is supersensitive to acetylcholine (ACh), so a dose of a muscarinic agonist (e.g. pilocarpine) that would not cause pupillary constriction in a normal patient will cause it in a patient with Adie syndrome. The circuitry for pupillary constriction does not descend below the upper midbrain; hence, impaired pupillary constriction is extremely important to detect, as it can be an early sign of brainstem herniation. Diagnosis Clinical exam may reveal sectoral paresis of the iris sphincter or vermiform iris movements. The tonic pupil may become smaller (miotic) over time, which is referred to as "little old Adie's". Testing with low-dose (1/8%) pilocarpine may constrict the tonic pupil due to cholinergic denervation supersensitivity. A normal pupil will not constrict with the dilute dose of pilocarpine. CT scans and MRI scans may be useful in the diagnostic testing of focal hypoactive reflexes. Treatment The usual treatment of Adie syndrome is to prescribe reading glasses to correct for impairment of the eye(s). Pilocarpine drops may be administered as a treatment as well as a diagnostic measure. Thoracic sympathectomy is the definitive treatment of diaphoresis, if the condition is not treatable by drug therapy. Prognosis Adie syndrome is not life-threatening or disabling. As such, there is no mortality rate relating to the condition; however, loss of deep tendon reflexes is permanent and may progress over time. Epidemiology It most commonly affects younger women (2.6:1 female preponderance) and is unilateral in 80% of cases. Average age of onset is 32 years. See also Ciliary ganglion Ross syndrome References Further reading == External links ==
Hidradenitis suppurativa
Hidradenitis suppurativa (HS), sometimes known as acne inversa or Verneuils disease, is a long-term dermatological condition characterized by the occurrence of inflamed and swollen lumps. These are typically painful and break open, releasing fluid or pus. The areas most commonly affected are the underarms, under the breasts, and the groin. Scar tissue remains after healing. HS may significantly limit many everyday activities, for instance, walking, hugging, moving, and sitting down. Sitting disability may occur in patients with lesions in sacral, gluteal, perineal, femoral, groin or genital regions; and prolonged periods of sitting down itself can also worsen the condition of the skin of these patients.The exact cause is usually unclear, but believed to involve a combination of genetic and environmental factors. About a third of people with the disease have an affected family member. Other risk factors include obesity and smoking. The condition is not caused by an infection, poor hygiene, or the use of deodorant. Instead, it is believed to be caused by hair follicles being obstructed, with the nearby apocrine sweat glands being strongly implicated in this obstruction. The sweat glands themselves may or may not be inflamed. Diagnosis is based on the symptoms.No cure is known. Warm baths may be tried in those with mild disease. Cutting open the lesions to allow them to drain does not result in significant benefit. While antibiotics are commonly used, evidence for their use is poor. Immunosuppressive medication may also be tried. In those with more severe disease, laser therapy or surgery to remove the affected skin may be viable. Rarely, a skin lesion may develop into skin cancer.If mild cases of HS are included, then the estimate of its frequency is from 1–4% of the population. Women are three times more likely to be diagnosed with it than men. Onset is typically in young adulthood and may become less common after 50 years old. It was first described between 1833 and 1839 by French anatomist Alfred Velpeau. Causes The exact cause of hidradenitis suppurativa remains unknown, and there has, in the recent past, been notable disagreement among experts in this regard. The condition, however, likely stems from both genetic and environmental causes. Specifically, an immune-mediated pathology has been proposed, although there have been sources that have already contradicted the probable likelihood of such an idea.Lesions will occur in any body areas with hair follicles, although areas such as the axilla, groin, and perineal region are more commonly involved. This theory includes most of these potential indicators: Post-pubescent individuals Blocked hair follicles or blocked apocrine sweat glands Excessive sweating Androgen dysfunction Genetic disorders that alter cell structure Patients with more advanced cases may find exercise intolerably painful, which may increase their rate of obesity.The historical understanding of the disease suggests dysfunctional apocrine glands or dysfunctional hair follicles, possibly triggered by a blocked gland, which creates inflammation, pain, and a swollen lesion. Triggering factors Several triggering factors should be taken into consideration: Obesity is an exacerbating rather than a triggering factor, through mechanical irritation, occlusion, and skin maceration. Tight clothing, and clothing made of heavy, nonbreathable materials Deodorants, depilation products, shaving of the affected area – their association with HS is still an ongoing debate among researchers. 
Drugs, in particular oral contraceptive pills and lithium. Hot and especially humid climates. Predisposing factors Genetic factors: an autosomal dominant inheritance pattern has been proposed. Endocrine factors: sex hormones, especially an excess of androgens, are thought to be involved, although the apocrine glands are not sensitive to these hormones. Women often have outbreaks before their menstrual period and after pregnancy; HS severity usually decreases during pregnancy and after menopause. Some cases have been found to result from mutations in the NCSTN, PSEN1, or PSENEN genes. These genes produce proteins that are all components of a complex called gamma- (γ-) secretase. This complex cuts apart (cleaves) many different proteins, which is an important step in several chemical signaling pathways. One of these pathways, known as notch signaling, is essential for the normal maturation and division of hair follicle cells and other types of skin cells. Notch signaling is also involved in normal immune system function. Studies suggest that mutations in the NCSTN, PSEN1, or PSENEN gene impair notch signaling in hair follicles. Although little is known about the mechanism, abnormal notch signaling appears to promote the development of nodules and to lead to inflammation in the skin. In addition, the composition of the intestinal microflora and, as a consequence, dietary patterns appear to play a role. Although dysbiosis of the cutaneous microbiome is not apparent in HS, the concurrent existence of inflammatory gut and skin diseases has led to the postulation of a gut-skin axis in which gut microbiota is implicated. Indeed, analysis of bacterial taxa in fecal samples from HS patients supports the possibility of a role for intestinal microbial alterations in this chronic inflammatory skin disease. Diagnosis Stages Hidradenitis suppurativa presents itself in three stages. Due to the large spectrum of clinical severity and the severe impact on quality of life, a reliable method for evaluating HS severity is needed. Hurley's staging system Hurley's staging system was the first classification system proposed, and is still in use for the classification of patients with skin diseases (e.g., psoriasis, HS, acne). Hurley separated patients into three groups based largely on the presence and extent of cicatrization and sinuses. It has been used as a basis for clinical trials in the past and is a useful basis to approach therapy for patients. These three stages are based on Hurley's staging system, which is simple and relies on the subjective extent of the diseased tissue the patient has. Hurley's three stages of hidradenitis suppurativa are: stage I, abscess formation (single or multiple) without sinus tracts or scarring; stage II, recurrent abscesses with sinus tract formation and scarring; and stage III, diffuse or near-diffuse involvement, with multiple interconnected tracts and abscesses across an entire area. Sartorius staging system The Sartorius staging system is more sophisticated than Hurley's. Sartorius et al. suggested that the Hurley system is not sophisticated enough to assess treatment effects in clinical trials during research. This classification allows for better dynamic monitoring of the disease severity in individual patients. 
The elements of this staging system are: Anatomic regions involved (axilla, groin, gluteal, inframammary, or other region, left or right) Number and types of lesions involved (abscesses, nodules, fistulas or sinuses, scars, points for lesions of all regions involved) The distance between lesions, in particular the longest distance between two relevant lesions (i.e., nodules and fistulas in each region, or size if only one lesion is present) The presence of normal skin in between lesions (i.e., whether all lesions are clearly separated by normal skin) Points are accumulated in each of the above categories, and added to give both a regional and total score. In addition, the authors recommend adding a visual analog scale for pain or using the dermatology life quality index (DLQI, or the Skindex) when assessing HS. Treatment Treatment depends upon presentation and severity of the disease. Due to the poorly studied nature of the disease, the effectiveness of the drugs and therapies listed below is unclear. Possible treatments include the following: Lifestyle Warm baths may be tried in those with mild disease. Weight loss and the cessation of smoking are also recommended. Medication Antibiotics: taken by mouth, these are used for their anti-inflammatory properties rather than to treat infection. Most effective is a combination of rifampicin and clindamycin given concurrently for 2–3 months. Popular antibiotics also include tetracycline and minocycline. Topical clindamycin has been shown to have an effect in double-blind, placebo-controlled studies. Corticosteroid injections, also known as intralesional steroids, can be particularly useful for localized disease, if the drug can be prevented from escaping via the sinuses. Antiandrogen therapy: hormonal therapy with antiandrogenic medications such as spironolactone, flutamide, cyproterone acetate, ethinylestradiol, finasteride, dutasteride, and metformin has been found to be effective in clinical studies. However, the quality of available evidence is low and does not presently allow for robust evidence-based recommendations. Intravenous infusion or subcutaneous injection of anti-inflammatory drugs (TNF inhibitors; anti-TNF-alpha) such as infliximab and etanercept. This use of these drugs is not currently Food and Drug Administration (FDA) approved and is somewhat controversial, so it may not be covered by insurance. TNF inhibitors: studies have supported that various TNF inhibitors have a positive effect on HS lesions; adalimumab given at weekly intervals is particularly useful. Adalimumab is the only medication approved by the FDA for the treatment of HS as of 2021. Topical isotretinoin is usually ineffective in people with HS, and is more commonly known as a medication for the treatment of acne vulgaris. Individuals affected by HS who responded to isotretinoin treatment tended to have milder cases of the condition. Surgery When the process becomes chronic, wide surgical excision is the procedure of choice. Wounds in the affected area do not heal by secondary intention, and immediate or delayed application of a split-thickness skin graft is an option. Another option is covering the defect with a perforator flap. With this technique, the (mostly totally excised) defect is covered with tissue from a nearby area. For example, an axilla with a fully excised defect of 15 × 7 cm can be covered with a thoracodorsal artery perforator flap. Laser hair removal The 1064-nm wavelength laser for hair removal may aid in the treatment of HS. 
A randomized controlled study has shown improvement in HS lesions with the use of an Nd:YAG laser. Prognosis In stage III disease, as classified by Hurley's staging system, fistulae left undiscovered, undiagnosed, or untreated can, rarely, lead to the development of squamous cell carcinoma in the anus or other affected areas. Other stage III chronic sequelae may also include anemia, multilocalized infections, amyloidosis, and arthropathy. Stage III complications have been known to lead to sepsis, but clinical data are still uncertain. Potential complications Contractures and reduced mobility of the lower limbs and axillae due to fibrosis and scarring occur. Severe lymphedema may develop in the lower limbs. Local and systemic infections (meningitis, bronchitis, pneumonia, etc.) are seen, which may even progress to sepsis. Interstitial keratitis Anal, rectal, or urethral fistulae Normochromic or hypochromic anemia People with HS may be at increased risk for autoimmune disorders including ankylosing spondylitis, rheumatoid arthritis, and psoriatic arthritis. Squamous cell carcinoma has been found on rare occasions in chronic hidradenitis suppurativa of the anogenital region. The mean time to the onset of this type of lesion is 10 years or more, and the tumors are usually highly aggressive. Tumors of the lung and oral cavity, and liver cancer Hypoproteinemia and amyloidosis, which can lead to kidney failure and death Seronegative and usually asymmetric arthropathy: pauciarticular arthritis, polyarthritis/polyarthralgia syndrome History From 1833 to 1839, in a series of three publications, Velpeau identified and described a disease now known as hidradenitis suppurativa. In 1854, Verneuil described hidradenitis suppurativa as hidrosadénite phlegmoneuse. This is how HS obtained its alternate name "Verneuil's disease". In 1922, Schiefferdecker hypothesized a pathogenic link between "acne inversa" and human sweat glands. In 1956, Pillsbury et al. coined the term follicular occlusion triad for the common association of hidradenitis suppurativa, acne conglobata and dissecting cellulitis of the scalp. Modern clinical research still employs Pillsbury's terminology for descriptions of these conditions. In 1975, Plewig and Kligman, following Pillsbury's research path, modified the "acne triad", replacing it with the "acne tetrad: acne triad, plus pilonidal sinus". Plewig and Kligman's research follows in Pillsbury's footsteps, offering explanations of the symptoms associated with hidradenitis suppurativa. In 1989, Plewig and Steger's research led them to rename hidradenitis suppurativa, calling it "acne inversa" – a term that has not become standard medical terminology, although some individuals still use it. A surgeon from Paris, Velpeau described an unusual inflammatory process with formation of superficial axillary, submammary, and perianal abscesses, in a series of three publications from 1833 to 1839. One of his colleagues, also located in Paris, named Verneuil, coined the term hidrosadénite phlegmoneuse about 15 years later. This name for the disease reflects the former pathogenetic model of acne inversa, which considered inflammation of sweat glands the primary cause of hidradenitis suppurativa. In 1922, Schiefferdecker suspected a pathogenic association between acne inversa and apocrine sweat glands. 
In 1956, Pillsbury postulated follicular occlusion as the cause of acne inversa, which they grouped together with acne conglobata and perifolliculitis capitis abscendens et suffodiens ("dissecting cellulitis of the scalp") as the "acne triad". Plewig and Kligman added another element to their acne triad, pilonidal sinus. Plewig et al. noted that this new "acne tetrad" includes all the elements found in the original "acne triad", in addition to a fourth element, pilonidal sinus. In 1989, Plewig and Steger introduced the term "acne inversa", indicating a follicular source of the disease and replacing older terms such as "Verneuil disease". Other names Hidradenitis suppurativa has been referred to by multiple names in the literature, as well as in various cultures. Some of these are also used to describe different diseases, or specific instances of this disease. Acne conglobata – not really a synonym – this is a similar process, but in classic acne areas of chest and back Acne inversa – a proposed new term which has not gained widespread favor. Apocrine acne – an outdated term based on the disproven concept that apocrine glands are primarily involved, though many do have apocrine gland infection Apocrinitis – another outdated term based on the same thesis Fox-den disease – a term not used in medical literature, based on the deep fox den–like sinuses Hidradenitis supportiva – a misspelling Pyodermia fistulans significa – now considered archaic Verneuils disease – recognizing the surgeon whose name is most often associated with the disorder as a result of his 1854–1865 studies Histology Terminology Although hidradenitis suppurativa is often referred to as acne inversa, it is not a form of acne, and lacks the core defining features of acne such as the presence of closed comedones and increased sebum production. References External links Medline: What is Hidradenitis Suppurativa? Hidradenitis Suppurativa (2004) Prof J. Revuz Archived 28 February 2005 at the Wayback Machine
Urinary incontinence
Urinary incontinence (UI), also known as involuntary urination, is any uncontrolled leakage of urine. It is a common and distressing problem, which may have a large impact on quality of life. It has been identified as an important issue in geriatric health care. The term enuresis is often used to refer to urinary incontinence primarily in children, such as nocturnal enuresis (bed wetting). UI is an example of a stigmatized medical condition, which creates barriers to successful management and makes the problem worse. People may be too embarrassed to seek medical help, and attempt to self-manage the symptom in secrecy from others. Pelvic surgery, pregnancy, childbirth, and menopause are major risk factors. Urinary incontinence is often a result of an underlying medical condition but is under-reported to medical practitioners. There are four main types of incontinence: Urge incontinence due to an overactive bladder Stress incontinence due to "a poorly functioning urethral sphincter muscle (intrinsic sphincter deficiency) or to hypermobility of the bladder neck or urethra" Overflow incontinence due to either poor bladder contraction or blockage of the urethra Mixed incontinence involving features of different other typesTreatments include pelvic floor muscle training, bladder training, surgery, and electrical stimulation. Behavioral therapy generally works better than medication for stress and urge incontinence. The benefit of medications is small and long term safety is unclear. Urinary incontinence is more common in older women. Causes Urinary incontinence can result from both urologic and non-urologic causes. Urologic causes can be classified as either bladder dysfunction or urethral sphincter incompetence and may include detrusor overactivity, poor bladder compliance, urethral hypermobility, or intrinsic sphincter deficiency. Non-urologic causes may include infection, medication or drugs, psychological factors, polyuria, hydrocephalus, stool impaction, and restricted mobility. The causes leading to urinary incontinence are usually specific to each sex, however, some causes are common to both men and women. Women The most common types of urinary incontinence in women are stress urinary incontinence and urge urinary incontinence. Women that have symptoms of both types are said to have "mixed" urinary incontinence. After menopause, estrogen production decreases and, in some women, urethral tissue will demonstrate atrophy, becoming weaker and thinner, possibly playing a role in the development of urinary incontinence.Stress urinary incontinence in women is most commonly caused by loss of support of the urethra, which is usually a consequence of damage to pelvic support structures as a result of pregnancy, childbirth, obesity, age, among others. About 33% of all women experience urinary incontinence after giving birth, and women who deliver vaginally are about twice as likely to have urinary incontinence as women who give birth via a Caesarean section. Stress incontinence is characterized by leaking of small amounts of urine with activities that increase abdominal pressure such as coughing, sneezing, laughing and lifting. This happens when the urethral sphincter cannot close completely due to the damage in the sphincter itself, or the surrounding tissue. Additionally, frequent exercise in high-impact activities can cause athletic incontinence to develop. Urge urinary incontinence, is caused by uninhibited contractions of the detrusor muscle, a condition known as overactive bladder syndrome. 
It is characterized by leaking of large amounts of urine in association with insufficient warning to get to the bathroom in time. Men Urge incontinence is the most common type of incontinence in men. Similar to women, urine leakage happens following a very intense feeling of urination, not allowing enough time to reach the bathroom, a condition called overactive bladder syndrome. In men, the condition is commonly associated with benign prostatic hyperplasia (an enlarged prostate), which causes bladder outlet obstruction, a dysfunction of the detrusor muscle (muscle of the bladder), eventually causing overactive bladder syndrome, and the associated incontinence.Stress urinary incontinence is the other common type of incontinence in men, and it most commonly happens after prostate surgery. Prostatectomy, transurethral resection of the prostate, prostate brachytherapy, and radiotherapy can all damage the urethral sphincter and surrounding tissue, causing it to be incompetent. An incompetent urethral sphincter cannot prevent the urine from leaking out of the urinary bladder during activities that increase the intraabdominal pressure, such as coughing, sneezing, or laughing. Continence usually improves within 6 to 12 months after prostate surgery without any specific interventions, and only 5 to 10% of people report persistent symptoms. Both Age is a risk factor that increases both the severity and prevalence of UI Polyuria (excessive urine production) of which, in turn, the most frequent causes are: uncontrolled diabetes mellitus, primary polydipsia (excessive fluid drinking), central diabetes insipidus and nephrogenic diabetes insipidus. Polyuria generally causes urinary urgency and frequency, but does not necessarily lead to incontinence. Neurogenic disorders like multiple sclerosis, spina bifida, Parkinsons disease, strokes and spinal cord injury can all interfere with nerve function of the bladder. This can lead to neurogenic bladder dysfunction Overactive bladder syndrome. However, the etiology behind this is usually different between men and women, as mentioned above. Other suggested risk factors include smoking, caffeine intake and depression Mechanism Adults The body stores urine — water and wastes removed by the kidneys — in the urinary bladder, a balloon-like organ. The bladder connects to the urethra, the tube through which urine leaves the body.Continence and micturition involve a balance between urethral closure and detrusor muscle activity (the muscle of the bladder). During urination, detrusor muscles in the wall of the bladder contract, forcing urine out of the bladder and into the urethra. At the same time, sphincter muscles surrounding the urethra relax, letting urine pass out of the body. The urethral sphincter is the muscular ring that closes the outlet of the urinary bladder preventing urine to pass outside the body. Urethral pressure normally exceeds bladder pressure, resulting in urine remaining in the bladder, and maintaining continence. The urethra is supported by pelvic floor muscles and tissue, allowing it to close firmly. Any damage to this balance between the detrusor muscle, urethral sphincter, supportive tissue and nerves can lead to some type of incontinence .For example, stress urinary incontinence is usually a result of the incompetent closure of the urethral sphincter. This can be caused by damage to the sphincter itself, the muscles that support it, or nerves that supply it. 
In men, the damage usually happens after prostate surgery or radiation, and in women, its usually caused by childbirth and pregnancy. The pressure inside the abdomen (from coughing and sneezing) is normally transmitted to both urethra and bladder equally, leaving the pressure difference unchanged, resulting in continence. When the sphincter is incompetent, this increase in pressure will push the urine against it, leading to incontinence.Another example is urge incontinence. This incontinence is associated with sudden forceful contractions of the detrusor muscle (bladder muscle), leading to an intense feeling of urination, and incontinence if the person does not reach the bathroom on time. The syndrome is known as overactive bladder syndrome, and its related to dysfunction of the detrusor muscle. Children Urination, or voiding, is a complex activity. The bladder is a balloon-like muscle that lies in the lowest part of the abdomen. The bladder stores urine then releases it through the urethra, the canal that carries urine to the outside of the body. Controlling this activity involves nerves, muscles, the spinal cord and the brain.The bladder is made of two types of muscles: the detrusor, a muscular sac that stores urine and squeezes to empty, and the sphincter, a circular group of muscles at the bottom or neck of the bladder that automatically stays contracted to hold the urine in and automatically relax when the detrusor contracts to let the urine into the urethra. A third group of muscles below the bladder (pelvic floor muscles) can contract to keep urine back.A babys bladder fills to a set point, then automatically contracts and empties. As the child gets older, the nervous system develops. The childs brain begins to get messages from the filling bladder and begins to send messages to the bladder to keep it from automatically emptying until the child decides it is the time and place to void.Failures in this control mechanism result in incontinence. Reasons for this failure range from the simple to the complex. Diagnosis The pattern of voiding and urine leakage is important as it suggests the type of incontinence. Other points include straining and discomfort, use of drugs, recent surgery, and illness.The physical examination looks for signs of medical conditions causing incontinence, such as tumors that block the urinary tract, stool impaction, and poor reflexes or sensations, which may be evidence of a nerve-related cause.Other tests include: Stress test – the patient relaxes, then coughs vigorously as the doctor watches for loss of urine. Urinalysis – urine is tested for evidence of infection, urinary stones, or other contributing causes. Blood tests – blood is taken, sent to a laboratory, and examined for substances related to causes of incontinence. Ultrasound – sound waves are used to visualize the kidneys and urinary bladder, assess the capacity of the bladder before voiding, and the remaining amount of urine after voiding. This helps know if theres a problem in emptying. Cystoscopy – a thin tube with a tiny camera is inserted in the urethra and used to see the inside of the urethra and bladder. Urodynamics – various techniques measure pressure in the bladder and the flow of urine.People are often asked to keep a diary for a day or more, up to a week, to record the pattern of voiding, noting times and the amounts of urine produced. Research projects that assess the efficacy of anti-incontinence therapies often quantify the extent of urinary incontinence. 
The methods include the 1-h pad test, measuring leakage volume; using a voiding diary, counting the number of incontinence episodes (leakage episodes) per day; and assessing of the strength of pelvic floor muscles, measuring the maximum vaginal squeeze pressure. Main types There are 4 main types of urinary incontinence: Stress incontinence, also known as effort incontinence, is essentially due to incomplete closure of the urinary sphincter, due to problems in the sphincter itself or insufficient strength of the pelvic floor muscles supporting it. This type of incontinence is when urine leaks during activities that increase intra-abdominal pressure, such as coughing, sneezing or bearing down. Urge incontinence is an involuntary loss of urine occurring while suddenly feeling the need or urge to urinate, usually secondary to overactive bladder syndrome. Overflow incontinence is the incontinence that happens suddenly without feeling an urge to urinate and without necessarily doing any physical activities. It is also known as under-active bladder syndrome. This usually happens with chronic obstruction of the bladder outlet or with diseases damaging the nerves supplying the urinary bladder. The urine stretches the bladder without the person feeling the pressure, and eventually, it overwhelms the ability of the urethral sphincter to hold it back. Mixed incontinence contains symptoms of multiple other types of incontinence. It is not uncommon in the elderly female population and can sometimes be complicated by urinary retention. Other types Functional incontinence occurs when a person recognizes the need to urinate but cannot make it to the bathroom. The loss of urine may be large. There are several causes of functional incontinence including confusion, dementia, poor eyesight, mobility or dexterity, unwillingness to the toilet because of depression or anxiety or inebriation due to alcohol. Functional incontinence can also occur in certain circumstances where no biological or medical problem is present. For example, a person may recognize the need to urinate but may be in a situation where there is no toilet nearby or access to a toilet is restricted. Structural incontinence: Rarely, structural problems can cause incontinence, usually diagnosed in childhood (for example, an ectopic ureter). Fistulas caused by obstetric and gynecologic trauma or injury are commonly known as obstetric fistulas and can lead to incontinence. These types of vaginal fistulas include, most commonly, vesicovaginal fistula and, more rarely, ureterovaginal fistula. These may be difficult to diagnose. The use of standard techniques along with a vaginogram or radiologically viewing the vaginal vault with instillation of contrast media. Nocturnal enuresis is episodic UI while asleep. It is normal in young children. Transient incontinence is temporary incontinence most often seen in pregnant women when it subsequently resolves after the birth of the child. Giggle incontinence is an involuntary response to laughter. It usually affects children. Double incontinence. There is also a related condition for defecation known as fecal incontinence. Due to involvement of the same muscle group (levator ani) in bladder and bowel continence, patients with urinary incontinence are more likely to have fecal incontinence in addition. This is sometimes termed "double incontinence". Post-void dribbling is the phenomenon where urine remaining in the urethra after voiding the bladder slowly leaks out after urination. 
Coital incontinence (CI) is urinary leakage that occurs during either penetration or orgasm and can occur with a sexual partner or with masturbation. It has been reported to occur in 10% to 24% of sexually active women with pelvic floor disorders. Climacturia is urinary incontinence at the moment of orgasm. It can be a result of radical prostatectomy. Screening Yearly screening is recommended for women by the Womens Preventive Services Initiative. Screening questions should inquire about what symptoms they have experienced, how severe the symptoms are, and if the symptoms affect their daily lives. As of 2018, studies have not shown a change in outcomes with urinary incontinence screenings in women. Management Treatment options range from conservative treatment, behavioral therapy, bladder retraining, pelvic floor therapy, collecting devices (for men), fixer-occluder devices for incontinence (in men), medications and surgery. A 2018 systematic review update showed that both nonpharmacological and pharmacological treatments are effective for treating UI in non-pregnant women. All treatments, except hormones and periurethral bulking agents, are more effective than no treatment in improving or curing UI symptoms or achieving patient satisfaction. The success of treatment depends on the correct diagnoses. Behavioral therapy Behavioral therapy involves the use of both suppressive techniques (distraction, relaxation) and learning to avoid foods that may worsen urinary incontinence. This may involve avoiding or limiting consumption of caffeine and alcohol. Behavioral therapies, including bladder training, biofeedback, and pelvic floor muscle training, are most effective for improving urinary incontinence in women, with a low risk of adverse events. Behavioral therapy is not curative for urinary incontinence, but it can improve a persons quality of life. Behavioral therapy has benefits as both a monotherapy and as an adjunct to medications for symptom reduction.Avoiding heavy lifting and preventing constipation may help with uncontrollable urine leakage. Stopping smoking is also recommended as it is associated with improvements in urinary incontinence in men and women. Weight loss is recommended in those who are obese. Physical therapy and exercise Physical therapy can be effective for women in reducing urinary incontinence.Pelvic floor physical therapists work with patients to identify and treat underlying pelvic muscle dysfunction that can cease urinary incontinence. They may recommend exercises to strengthen the muscles, electrostimulation, or biofeedback treatments. Exercising the muscles of the pelvis such as with Kegel exercises are a first line treatment for women with stress incontinence. Efforts to increase the time between urination, known as bladder training, is recommended in those with urge incontinence. Both these may be used in those with mixed incontinence.Small vaginal cones of increasing weight may be used to help with exercise. They seem to be better than no active treatment in women with stress urinary incontinence, and have similar effects to training of pelvic floor muscles or electrostimulation.Biofeedback uses measuring devices to help the patient become aware of his or her bodys functioning. By using electronic devices or diaries to track when the bladder and urethral muscles contract, the patient can gain control over these muscles. 
Biofeedback can be used with pelvic muscle exercises and electrical stimulation to relieve stress and urge incontinence.Time voiding while urinating and bladder training are techniques that use biofeedback. In time voiding, the patient fills in a chart of voiding and leaking. From the patterns that appear in the chart, the patient can plan to empty his or her bladder before he or she would otherwise leak. Biofeedback and muscle conditioning, known as bladder training, can alter the bladders schedule for storing and emptying urine. These techniques are effective for urge and overflow incontinenceA 2013 randomized controlled trial found no benefit of adding biofeedback to pelvic floor muscle exercise in stress urinary incontinence, but observing improvements in both groups. In another randomized controlled trial the addition of biofeedback to the training of pelvic floor muscles for the treatment of stress incontinence, improved pelvic floor muscle function, reduced urinary symptoms, and improved the quality of life.Preoperative pelvic floor muscle training (PFMT) in men undergoing radical prostatectomy was not effective in reducing urinary incontinence.Alternative exercises have been studied for stress urinary incontinence in women. Evidence was insufficient to support the use of Paula method, abdominal muscle training, Pilates, Tai Chi, breathing exercises, postural training, and generalized fitness. Devices Individuals who continue to experience urinary incontinence need to find a management solution that matches their individual situation. The use of mechanical devices has not been well studied in women, as of 2014. Collecting systems (for men) – consists of a sheath worn over the penis funneling the urine into a urine bag worn on the leg. These products come in a variety of materials and sizes for individual fit. Studies show that urisheaths and urine bags are preferred over absorbent products – in particular when it comes to ‘limitations to daily activities’. Solutions exist for all levels of incontinence. Advantages with collecting systems are that they are discreet, the skin stays dry all the time, and they are convenient to use both day and night. Disadvantages are that it is necessary to get measured to ensure proper fit, and in some countries, a prescription is needed. Absorbent products (include shields, incontinence pads, undergarments, protective underwear, briefs, diapers, adult diapers and underpants) are the best-known product types to manage incontinence. They are widely available in pharmacies and supermarkets. The advantages of using these are that they barely need any fitting or introduction by a healthcare specialist. The disadvantages with absorbent products are that they can be bulky, leak, have odors and can cause skin breakdown due to the constant dampness. Intermittent catheters are single-use catheters that are inserted into the bladder to empty it, and once the bladder is empty they are removed and discarded. Intermittent catheters are primarily used for urinary retention (inability to empty the bladder), but for some people they can be used to reduce or avoid incontinence. These are prescription-only medical devices. Indwelling catheters (also known as foleys) are often used in hospital settings, or if the user is not able to handle any of the above solutions himself/herself (e.g. severe neurologic injury or neurodegenerative disease). These are also prescription-only medical devices. 
The indwelling catheter is typically connected to a urine bag that can be worn on the leg or hung on the side of the bed. Indwelling catheters need to be monitored and changed on a regular basis by a healthcare professional. The advantage of indwelling catheters is that because the urine is funneled away from the body, the skin remains dry. However, the disadvantage is that it is very common to incur urinary tract infections when using indwelling catheters. Bladder spasms and other problems can also occur with long-term use of indwelling catheters. A penis clamp (or penis compression device) is applied to compress the urethra, compensating for the malfunctioning of the natural urinary sphincter and preventing leakage from the bladder. This management solution is only suitable for light or moderate incontinence. Vaginal pessaries for women are devices inserted into the vagina. The pessary provides support to the urethra, which passes directly in front of it, allowing it to close more firmly. Medications A number of medications exist to treat urinary incontinence, including fesoterodine, tolterodine and oxybutynin. These medications work by relaxing smooth muscle in the bladder. While some of these medications appear to have a small benefit, the risk of side effects is a concern. Medications are effective for about one in ten people, and all medications have similar efficacy. Medications are not recommended for those with stress incontinence and are only recommended in those with urge incontinence who do not improve with bladder training. Injectable bulking agents may be used to enhance urethral support; however, they are of unclear benefit. Surgery Women and men who have persistent incontinence despite optimal conservative therapy may be candidates for surgery. Surgery may be used to help stress or overflow incontinence. Common surgical techniques for stress incontinence include slings, tension-free vaginal tape, bladder suspension, and artificial urinary sphincters, among others. The use of transvaginal mesh implants and bladder slings is controversial due to the risk of debilitating painful side effects such as vaginal erosion. In 2012 transvaginal mesh implants were classified as high-risk devices by the US Food and Drug Administration. Urodynamic testing seems to confirm that surgical restoration of vault prolapse can cure motor urge incontinence. Traditional suburethral sling operations are probably slightly better than open abdominal retropubic colposuspension and are probably slightly less effective than mid-urethral sling operations in reducing urinary incontinence in women, but it is still uncertain if any of the different types of traditional suburethral sling operations are better than others. Similarly, there is insufficient evidence to be certain about the effectiveness or safety of single-incision sling operations for urinary incontinence in women. Traditional suburethral slings may have a higher risk of surgical complications than minimally invasive slings, but the risk of complications compared with other types of operation is still uncertain. Laparoscopic colposuspension (keyhole surgery through the abdomen) with sutures is as effective as open colposuspension for curing incontinence in women up to 18 months after surgery, but it is unclear whether there is a lower risk of complications during or after surgery. There is probably a higher risk of complications with traditional suburethral slings than with open abdominal retropubic suspension.
The artificial urinary sphincter is an implantable device used to treat stress incontinence, mostly in men. The device is made of two or three parts: the pump, cuff, and balloon reservoir, connected to each other by specialized tubes. The cuff wraps around the urethra and closes it. When the person wants to urinate, he presses the pump (implanted in the scrotum) to deflate the cuff and allow the urine to pass. The cuff regains pressure within a few minutes, restoring continence. The European Association of Urology considers the artificial urinary sphincter the gold standard in surgical management of stress urinary incontinence in men after prostatectomy. Epidemiology Globally, up to 35% of the population over the age of 60 years is estimated to be incontinent. In 2014, urinary leakage affected between 30% and 40% of people over 65 years of age living in their own homes or apartments in the U.S. Twenty-four percent of older adults in the U.S. have moderate or severe urinary incontinence that should be treated medically. People with dementia are three times more likely to have urinary incontinence compared to people of similar ages. Bladder control problems have been found to be associated with a higher incidence of many other health problems such as obesity and diabetes. Difficulty with bladder control results in higher rates of depression and limited activity levels. Incontinence is expensive both to individuals in the form of bladder control products and to the health care system and nursing home industry. Injury related to incontinence is a leading cause of admission to assisted living and nursing care facilities. In 1997 more than 50% of nursing facility admissions were related to incontinence. Women Bladder symptoms affect women of all ages. However, bladder problems are most prevalent among older women. Women over the age of 60 years are twice as likely as men to experience incontinence; one in three women over the age of 60 years are estimated to have bladder control problems. One reason why women are more affected is the weakening of pelvic floor muscles by pregnancy. Men Men tend to experience incontinence less often than women, and the structure of the male urinary tract accounts for this difference. Stress incontinence is common after prostate cancer treatments. While urinary incontinence affects older men more often than younger men, the onset of incontinence can happen at any age. Estimates around 2007 suggested that 17 percent of men over age 60, an estimated 600,000 men in the US, experienced urinary incontinence, with this percentage increasing with age. Children Incontinence happens less often after age 5: About 10 percent of 5-year-olds, 5 percent of 10-year-olds, and 1 percent of 18-year-olds experience episodes of incontinence. It is twice as common in girls as in boys. History The management of urinary incontinence with pads is mentioned in the earliest medical book known, the Ebers Papyrus (1500 BC). Incontinence has historically been a taboo subject in Western culture. However, this situation changed somewhat when Kimberly-Clark aggressively marketed adult diapers in the 1980s with actor June Allyson as spokeswoman. Allyson was initially reluctant to participate, but her mother, who had incontinence, convinced her that it was her duty in light of her successful career. The product proved a success. Law In the case Hiltibran et al v. Levy et al, the United States District Court for the Western District of Missouri issued an order in 2011.
The order requires Missouri to provide Medicaid-funded incontinence briefs to adults who would be institutionalized without them. Research The effectiveness of different therapeutic approaches to treating urinary incontinence is not well studied for some medical conditions. For example, for people who experience urinary incontinence due to stroke, treatment approaches such as physical therapy, cognitive therapy, complementary medicine, and specialized interventions with experienced medical professionals are sometimes suggested; however, it is not clear how effective these are at improving incontinence, and there is no strong medical evidence to guide clinical practice. References External links Urinary incontinence at Curlie Patient-centered information from the European Urological Association Independent continence product advisor
Hypovolemia
Hypovolemia, also known as volume depletion or volume contraction, is a state of abnormally low extracellular fluid in the body. This may be due to either a loss of both salt and water or a decrease in blood volume. Hypovolemia refers to the loss of extracellular fluid and should not be confused with dehydration.Hypovolemia is caused by a variety of events, but these can be simplified into two categories: those that are associated with kidney function and those that are not. The signs and symptoms of hypovolemia worsen as the amount of fluid lost increases. Immediately or shortly after mild fluid loss (from blood donation, diarrhea, vomiting, bleeding from trauma, etc.), one may experience headache, fatigue, weakness, dizziness, or thirst. Untreated hypovolemia or excessive and rapid losses of volume may lead to hypovolemic shock. Signs and symptoms of hypovolemic shock include increased heart rate, low blood pressure, pale or cold skin, and altered mental status. When these signs are seen, immediate action should be taken to restore the lost volume. Signs and symptoms Signs and symptoms of hypovolemia progress with increased loss of fluid volume.Early symptoms of hypovolemia include headache, fatigue, weakness, thirst, and dizziness. The more severe signs and symptoms are often associated with hypovolemic shock. These include oliguria, cyanosis, abdominal and chest pain, hypotension, tachycardia, cold hands and feet, and progressively altering mental status. Causes The causes of hypovolemia can be characterized into two categories: Kidney Loss of body sodium and consequent intravascular water (due to impaired reabsorption of salt and water in the tubules of the kidneys) Osmotic diuresis: the increase in urine production due to an excess of osmotic (namely glucose and urea) load in the tubules of the kidneys Overuse of pharmacologic diuretics Impaired response to hormones controlling salt and water balance (see mineralocorticoids) Impaired kidney function due to tubular injury or other diseases Other Loss of bodily fluids due to:Gastrointestinal losses; e.g. vomiting and diarrhea Skin losses; e.g. excessive sweating and burns Respiratory losses; e.g. hyperventilation (breathing fast) Build up of fluid in empty spaces (third spaces) of the body due to:Acute pancreatitis Intestinal obstruction Increase in vascular permeability Hypoalbuminemia Loss of blood (external or internal bleeding or blood donation) Pathophysiology The signs and symptoms of hypovolemia are primarily due to the consequences of decreased circulating volume and a subsequent reduction in the amount of blood reaching the tissues of the body. In order to properly perform their functions, tissues require the oxygen transported in the blood. A decrease in circulating volume can lead to a decrease in bloodflow to the brain, resulting in headache and dizziness.Baroreceptors in the body (primarily those located in the carotid sinuses and aortic arch) sense the reduction of circulating fluid and send signals to the brain to increase sympathetic response (see also: baroreflex). This sympathetic response is to release epinephrine and norepinephrine, which results in peripheral vasoconstriction (reducing size of blood vessels) in order to conserve the circulating fluids for organs vital to survival (i.e. brain and heart). Peripheral vasoconstriction accounts for the cold extremities (hands and feet), increased heart rate, increased cardiac output (and associated chest pain). 
Eventually, there will be less perfusion to the kidneys, resulting in decreased urine output. Diagnosis Hypovolemia can be recognized by a fast heart rate, low blood pressure, and the absence of perfusion as assessed by skin signs (skin turning pale) and/or capillary refill on forehead, lips and nail beds. The patient may feel dizzy, faint, nauseated, or very thirsty. These signs are also characteristic of most types of shock. In children, compensation can result in an artificially high blood pressure despite hypovolemia (a decrease in blood volume). Children typically are able to compensate (maintain blood pressure despite hypovolemia) for a longer period than adults, but deteriorate rapidly and severely once they are unable to compensate (decompensate). Consequently, any possibility of internal bleeding in children should be treated aggressively. Signs of external bleeding should be assessed, noting that individuals can bleed internally without external blood loss or otherwise apparent signs. Possible mechanisms of injury that may have caused internal bleeding, such as ruptured or bruised internal organs, should be considered. If trained to do so, and if the situation permits, a secondary survey should be conducted and the chest and abdomen checked for pain, deformity, guarding, discoloration or swelling. Bleeding into the abdominal cavity can cause the classical bruising patterns of Grey Turner's sign (bruising along the sides) or Cullen's sign (around the navel). Investigation In a hospital, physicians respond to a case of hypovolemic shock by conducting these investigations: Blood tests: U+Es/Chem7, full blood count, glucose, blood type and screen Central venous catheter Arterial line Urine output measurements (via urinary catheter) Blood pressure SpO2 oxygen saturation monitoring Stages Untreated hypovolemia can lead to shock (see also: hypovolemic shock). Most sources state that there are 4 stages of hypovolemia and subsequent shock; however, a number of other systems exist with as many as 6 stages. The 4 stages are sometimes known as the "Tennis" staging of hypovolemic shock, as the stages of blood loss (under 15% of volume, 15–30% of volume, 30–40% of volume and above 40% of volume) mimic the scores in a game of tennis: 15, 15–30, 30–40 and 40. It is basically the same as used in classifying bleeding by blood loss. The signs and symptoms of hypovolemic shock become more severe with each successive stage. Treatment Field care The most important step in treatment of hypovolemic shock is to identify and control the source of bleeding. Medical personnel should immediately supply emergency oxygen to increase efficiency of the patient's remaining blood supply. This intervention can be life-saving. Also, the respiratory pump is especially important during hypovolemia as spontaneous breathing may help reduce the effect of this loss of blood pressure on stroke volume by increasing venous return. The use of intravenous fluids (IVs) may help compensate for lost fluid volume, but IV fluids cannot carry oxygen the way blood does—however, researchers are developing blood substitutes that can. Infusing colloid or crystalloid IV fluids also dilutes clotting factors in the blood, increasing the risk of bleeding. Current best practice allows permissive hypotension in patients with hypovolemic shock, both to avoid overly diluting clotting factors and to avoid artificially raising blood pressure to a point where it "blows off" clots that have formed.
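The percentage bands behind this four-stage classification can be restated as a simple lookup. The following Python sketch is purely illustrative rather than a clinical tool; the function name and interface are assumptions, and it only encodes the blood-loss thresholds described above (under 15%, 15–30%, 30–40%, above 40%), which can then be read against the stage-dependent treatment notes in the next section.

```python
def hypovolemic_shock_stage(blood_loss_fraction: float) -> int:
    """Map an estimated fractional loss of blood volume to the four-stage
    ("Tennis") classification described above.

    Illustrative sketch only, not a clinical decision tool. Boundary values
    are assigned to the higher stage, an arbitrary choice since the stated
    bands overlap at their edges.
    """
    if not 0.0 <= blood_loss_fraction <= 1.0:
        raise ValueError("blood loss must be a fraction between 0 and 1")
    percent = blood_loss_fraction * 100
    if percent < 15:
        return 1   # under 15% of volume
    elif percent < 30:
        return 2   # 15-30% of volume
    elif percent < 40:
        return 3   # 30-40% of volume
    else:
        return 4   # above 40% of volume

# Example: an estimated 25% loss falls in stage 2 (15-30%), where the
# following section notes that fluid replacement is beneficial.
print(hypovolemic_shock_stage(0.25))  # -> 2
```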
Hospital treatment Fluid replacement is beneficial in hypovolemia of stage 2, and is necessary in stages 3 and 4. See also the discussion of shock and the importance of treating reversible shock while it can still be countered. The following interventions are carried out: IV access Oxygen as required Fresh frozen plasma or blood transfusion Surgical repair at sites of bleeding. Vasopressors (such as dopamine and noradrenaline) should generally be avoided, as they may result in further tissue ischemia and don't correct the primary problem. Fluids are the preferred choice of therapy. History In cases where loss of blood volume is clearly attributable to bleeding (as opposed to, e.g., dehydration), most medical practitioners prefer the term exsanguination for its greater specificity and descriptiveness, with the effect that the latter term is now more common in the relevant context. See also Hypervolemia Non-pneumatic anti-shock garment Polycythemia, an increase of the hematocrit level, with the "relative polycythemia" being a decrease in the volume of plasma Volume status == References ==
Slipping
Slipping is a technique used in boxing that is similar to bobbing. It is considered one of the four basic defensive strategies, along with blocking, holding, and clinching. It is performed by moving the head to either side so that the opponent's punches "slip" by the boxer. Slipping punches allows the fighter to recover and counter more quickly than the opponent can reset into a proper fighting stance. In boxing, timing is a key factor in the outcome, and slipping at the right moment is essential to protecting the boxer and saving energy. If done incorrectly, slipping can be dangerous, as it leaves the boxer off balance and exposed to a counter-punch, creating an opening for the opponent. Muhammad Ali, widely considered one of the greatest fighters of all time, is often cited by fighters and analysts as much for his slipping ability as for his power and speed. How to slip punches There are multiple ways to slip punches in boxing, but the most basic are slipping the outside jab and the inside jab. When slipping an outside jab, the body weight starts centered; as the opponent throws the jab, the boxer rotates clockwise and leans slightly to the right, shifting weight onto the rear leg, while pivoting both feet in the same direction. This places the boxer on the outside of the opponent's jab, giving the ability to counter-punch over it. For the inside jab, as the opponent throws the jab, the boxer rotates counter-clockwise and leans slightly to the left, putting more weight on the lead leg. It is possible to lean without rotating, but rotating helps the movement of the guard. The rear hand is raised, ready for the opponent to throw a left hook. Common mistakes Common mistakes when trying to slip a punch include slipping too early, slipping too wide, slipping inside the cross, only moving the head, and dropping the guard. How to master slipping The best method for mastering slipping is practice, preferably with a worthy opponent, ideally someone taller and with a longer reach. Another method is a slip bag that can be hung up and moved back and forth; this improves movement, timing, and eye coordination while performing a slip. Repetition and patience are key to mastering slipping. References External links https://www.mightyfighter.com/how-to-slip-punches/ https://lawofthefist.com/complete-guide-to-slipping-punches-in-boxing/
Maxillary hypoplasia
Maxillary hypoplasia, or maxillary deficiency, is an underdevelopment of the bones of the upper jaw. It is associated with Crouzon syndrome, Angelman syndrome, and fetal alcohol syndrome. It can also be associated with cleft lip and cleft palate. Some people develop it as a result of poor dental extractions. Signs and symptoms The underdevelopment of the bones in the upper jaw gives the middle of the face a sunken look. This same underdevelopment can make it difficult to eat and can lead to complications such as nasopharyngeal airway restriction. This restriction causes forward head posture, which can then lead to back pain, neck pain, and numbness in the hands and arms. The nasopharyngeal airway restriction can also lead to sleep apnea and snoring. Sleep apnea can lead to heart problems, endocrine problems, increased weight, and cognition problems, among other issues. Cause Although the exact genetic link for isolated maxillary hypoplasia has not been identified, the structure of the facial bones as a whole relies on genetic inheritance, and therefore there is likely an inheritance pattern. Maxillary hypoplasia can be present as part of genetic syndromes such as Angelman syndrome. Fetal alcohol syndrome is associated with maxillary hypoplasia. Injury to facial bones during childhood can lead to atypical growth. Exposure to phenytoin in the first trimester of pregnancy has also been associated with the development of maxillary hypoplasia. Pathophysiology Maxillary hypoplasia is an abnormal development of the bones of the upper face, usually a secondary effect of a different developmental abnormality. When associated with cleft lip and palate, the abnormal development can be due to a deficiency in growth caused by the cleft lip or palate. The underdevelopment can also be caused by scarring from surgical repair of a cleft lip or palate. Diagnosis The condition is diagnosed mainly on visual inspection. The cheekbones and nose appear flat with thin lips, and the lower jaw appears to be protruding even though it is normal in size. A computed tomography (CT) scan can be performed to compare the size of the maxilla and mandible. Treatment Corrective surgery is the most common treatment for this disorder. It involves the repositioning of the upper jaw to align with the lower jaw, to provide symmetry. It is best performed during childhood, if possible, to allow the jaw to recover and develop. The surgery may be performed in consultation with an orthodontist who works on repositioning the teeth in the mouth. Severe cases require surgical correction after completing craniofacial growth, around age 17-21. Milder forms without obstruction can be corrected for cosmetic reasons using veneers, snap-in smiles, and overlay dentures. Prognosis When associated with nasopharyngeal occlusion, the person is more likely to spend their days in forward head posture, which may lead to back pain, neck pain and numbness in the arms and hands. It can also lead to sleep apnea and snoring. People can generally live a relatively normal life with maxillary hypoplasia, with normal life expectancy. Recovery The recovery time after the surgery depends on the extent of the surgery itself. Patients are usually advised to eat soft foods for days, or sometimes weeks, to allow their jaw time to heal. They also require regular checkups with the doctor to monitor bone displacement, signs of infection, or other issues.
Epidemiology Maxillary hypoplasia is the most common secondary deformity that results from cleft lip and cleft palate. Because of the subjective nature of the diagnosis, the reported incidence of maxillary hypoplasia in people with cleft lip and palate varies between 15% and 50%. It is estimated that 25-50% of these patients require surgical intervention. Research directions Research on the topic of maxillary hypoplasia is currently focused on the best way to treat and manage the disorder. A retrospective study published in January 2020 evaluated the accuracy of virtual surgical planning-assisted management of maxillary hypoplasia. The study found that virtual surgical planning was an acceptable alternative to conventional planning and was demonstrated to be highly accurate. References == External links ==
Non-celiac gluten sensitivity
Non-celiac gluten sensitivity (NCGS) or gluten sensitivity is "a clinical entity induced by the ingestion of gluten leading to intestinal and/or extraintestinal symptoms that improve once the gluten-containing foodstuff is removed from the diet, and celiac disease and wheat allergy have been excluded".NCGS is included in the spectrum of gluten-related disorders. The definition and diagnostic criteria of non-celiac gluten sensitivity were debated and established by three consensus conferences. However, as of 2019, there remained much debate in the scientific community as to whether or not NCGS was a distinct clinical disorder.The pathogenesis of NCGS is not well understood, but the activation of the innate immune system, the direct cytotoxic effects of gluten and probably other wheat components, are implicated. There is evidence that not only gliadin (the main cytotoxic antigen of gluten), but also other proteins named ATIs which are present in gluten-containing cereals (wheat, rye, barley, and their derivatives) may have a role in the development of symptoms. ATIs are potent activators of the innate immune system. FODMAPs, especially fructans, are present in small amounts in gluten-containing grains and have been identified as a possible cause of some gastrointestinal symptoms in NCGS patients. As of 2019, reviews have concluded that although FODMAPs may play a role in NCGS, they explain only certain gastrointestinal symptoms, such as bloating, but not the extra-digestive symptoms that people with NCGS may develop, such as neurological disorders, fibromyalgia, psychological disturbances, and dermatitis.For these reasons, NCGS is a controversial clinical condition and some authors still question it. It has been suggested that "non-celiac wheat sensitivity" is a more appropriate term, without forgetting that other gluten-containing cereals are implicated in the development of symptoms.NCGS is the most common syndrome of gluten-related disorders with prevalence rates between 0.5%–13% in the general population. As no biomarker for diagnosing this condition is available, its diagnosis is made by exclusion of other gluten-related disorders such as celiac disease and wheat allergy. Many people have not been diagnosed following strict criteria, and there is a "fad component" to the recent rise in popularity of the gluten-free diet, leading to debate surrounding the evidence for this condition and its relationship to celiac disease and irritable bowel syndrome. People with NCGS are often unrecognized by specialists and lack adequate medical care and treatment. They often have a long history of health complaints and unsuccessful consultations with physicians, and thus many resort to a gluten-free diet and a self-diagnosis of gluten sensitivity. Signs and symptoms Reported symptoms of NCGS are similar to those of celiac disease, with most patients reporting both gastrointestinal and non-gastrointestinal symptoms. In the "classical" presentation of NCGS, gastrointestinal symptoms are similar to those of irritable bowel syndrome, and are also not distinguishable from those of wheat allergy, but there is a different interval between exposure to wheat and onset of symptoms. Wheat allergy has a fast onset (from minutes to hours) after the consumption of food containing wheat and can be anaphylaxic. 
Gastrointestinal Gastrointestinal symptoms may include any of the following: abdominal pain, bloating, bowel habit abnormalities (either diarrhea or constipation), nausea, aerophagia, flatulence, gastroesophageal reflux disease, and aphthous stomatitis. Extraintestinal NCGS can cause a wide range of extraintestinal symptoms, which can be the only manifestation of NCGS in the absence of gastrointestinal symptoms. These include any of the following: headache, migraine, "foggy mind", fatigue, fibromyalgia, joint and muscle pain, leg or arm numbness, tingling of the extremities, dermatitis (eczema or skin rash), atopic disorders such as asthma, rhinitis, other allergies, depression, anxiety, iron-deficiency anemia, folate deficiency, or autoimmune diseases. NCGS is also linked to a wide spectrum of neurological and psychiatric disorders, including ataxia, schizophrenia, epilepsy, peripheral neuropathy, encephalopathy, vascular dementia, eating disorders, autism, attention deficit hyperactivity disorder (ADHD), hallucinations (so-called "gluten psychosis"), and various movement disorders (restless legs syndrome, chorea, parkinsonism, Tourette syndrome, palatal tremor, myoclonus, dystonia, opsoclonus myoclonus syndrome, paroxysms, dyskinesia, myorhythmia, myokymia). Over 20% of people with NCGS have IgE-mediated allergy to one or more inhalants, foods, or metals, among which the most common are mites, graminaceae, parietaria, cat or dog hair, shellfish, and nickel. Approximately 35% of patients suffer other food intolerances, mainly lactose intolerance. Causes The pathogenesis of NCGS is not yet well understood, but the activation of the innate immune system, the direct cytotoxic effects of gluten, and probably the cytotoxicity of other wheat molecules are implicated. Besides gluten, other components in wheat, rye, barley, and their derivatives, including amylase/trypsin inhibitors (ATIs) and FODMAPs, may cause symptoms. Gluten It was hypothesized that gluten, as occurs in celiac disease, is the cause of NCGS. In addition to its ability to elicit abnormal responses of the immune system, in vitro studies on cell cultures showed that gluten is cytotoxic and causes direct intestinal damage. Gluten and gliadin promote cell apoptosis (a form of programmed cell death) and reduce the synthesis of nucleic acids (DNA and RNA) and proteins, leading to a reduction in the viability of cells. Gluten alters cellular morphology and motility, cytoskeleton organization, oxidative balance and intercellular contact (tight junction proteins). Other proteins Some people may have a reaction to other proteins (α-amylase/trypsin inhibitors [ATIs]) present in gluten-containing cereals that are able to inhibit amylase and trypsin. They have been identified as a possible activator of the innate immune system in celiac disease and NCGS. ATIs are part of the plant's natural defence against insects and may cause toll-like receptor 4 (TLR4)-mediated intestinal inflammation in humans. These TLR4-stimulating activities of ATIs are limited to gluten-containing cereals (wheat, rye, barley, and derivatives) and may induce innate immunity in people with celiac disease or NCGS. ATIs resist proteolytic digestion. ATIs are about 2%–4% of the total protein in modern wheat and are present in commercial gluten. A 2017 study in mice demonstrated that ATIs exacerbate preexisting inflammation and may also worsen it at extraintestinal sites.
This may explain why there is an increase in inflammation in people with preexisting diseases upon ingestion of ATI-containing grains. Modern wheat cultivation, by breeding for high ATI content, may play a role in the onset and course of disorders such as celiac disease and gluten sensitivity. However, it has been questioned whether there is sufficient empirical evidence to support this claim, because, as of 2018, there were no studies directly comparing modern wheat with ancient cultivars of low ATI content (such as einkorn wheat) in people with NCGS. Wheat germ agglutinin is also considered to be a possible trigger of NCGS-like symptoms. FODMAPs FODMAPs (fermentable oligosaccharides, disaccharides, monosaccharides and polyols) that are present in gluten-containing grains (mainly fructans) have been identified as a possible cause of gastrointestinal symptoms in people with NCGS, in place of, or in addition to, gluten. The amount of fructans in gluten-containing cereals is relatively small and their role has been controversial. In rye they account for 3.6%–6.6% of dry matter and in wheat for 0.7%–2.9%, while barley contains only trace amounts. They are only minor sources of FODMAPs when eaten in the usual standard amounts in the daily diet. Wheat and rye may comprise a major source of fructans when consumed in large amounts. They may cause mild wheat intolerance at most, limited to certain gastrointestinal symptoms, such as bloating, but do not explain the extradigestive symptoms of NCGS. A 2018 review concluded that although fructan intolerance may play a role in NCGS, it only explains some gastrointestinal symptoms, but not the extradigestive symptoms that people with NCGS may develop, such as neurological disorders, fibromyalgia, psychological disturbances, and dermatitis. FODMAPs cause digestive symptoms when the person is hypersensitive to luminal distension. A 2019 review concluded that wheat fructans can cause certain IBS-like symptoms, such as bloating, but are unlikely to cause immune activation or extra-digestive symptoms. Many people with NCGS report resolution of their symptoms after removing gluten-containing cereals while continuing to eat fruits and vegetables with high FODMAP content. Diagnosis Absence of reliable biomarkers and the fact that some people do not have digestive symptoms make the recognition and diagnosis of non-celiac gluten sensitivity (NCGS) difficult. Diagnosis is generally performed only by exclusion criteria. NCGS diagnostic recommendations have been established by several consensus conferences. Exclusion of celiac disease and wheat allergy is important because these two conditions also appear in people who experience symptoms similar to those of NCGS, which improve with gluten withdrawal and worsen after gluten consumption. The onset of NCGS symptoms may be delayed hours to a few days after gluten ingestion, whereas in celiac disease it can take days to weeks. Wheat allergy has a fast onset (from minutes to hours) after the consumption of food containing wheat and can lead to anaphylaxis. The presence of related extraintestinal manifestations has been suggested to be a feature of NCGS.
When symptoms are limited to gastrointestinal effects, there may be an overlap with wheat allergy, irritable bowel syndrome (IBS), and (less likely) intolerance to FODMAPs. Proposed criteria for a diagnosis of NCGS suggest that an improvement of more than 30% in gastrointestinal symptoms and extra-intestinal manifestations on a gluten-free diet (GFD), assessed with a rating scale, is needed to make a clinical diagnosis of NCGS. To exclude a placebo effect, a double-blind placebo-controlled gluten challenge is a useful tool, although it is expensive and complicated for routine clinical use, and so is usually performed in research studies. These suggestions were incorporated into the Salerno expert consensus on diagnostic criteria for NCGS. These recommend assessment of the response to a 6-week trial of a gluten-free diet using a defined rating scale (Step 1), followed by a double-blind, placebo-controlled challenge with gluten or placebo for a week of each (Step 2). A variation of greater than 30% in the main symptoms when challenged by gluten or placebo is needed for a positive result. Further research on possible biomarkers was also identified. Differential diagnosis Examinations evaluating celiac disease and wheat allergy should be performed before patients remove gluten from their diet. It is critical to make a clear distinction between celiac disease and NCGS. Celiac disease The main goal in diagnosing NCGS is to exclude celiac disease. NCGS and celiac disease cannot be separated in diagnosis because many gastrointestinal and non-gastrointestinal symptoms are similar in both diseases, and there are people with celiac disease who have negative serology (absence of specific celiac disease antibodies in serum) or who lack villus atrophy. There is no test capable of eliminating a diagnosis of celiac disease, but such a diagnosis is unlikely without confirming HLA-DQ2 and/or HLA-DQ8 haplotypes. The prevalence of undiagnosed celiac disease has increased fourfold during the past half-century, with most cases remaining unrecognized, undiagnosed and untreated, leaving celiac patients with the risk of long-term complications. Some people with NCGS may indeed have celiac disease. A 2015 systematic review found that 20% of people with NCGS presenting with HLA-DQ2 and/or HLA-DQ8 haplotypes, negative serology, and normal histology or duodenal lymphocytosis had celiac disease. The presence of autoimmune symptoms in people with NCGS suggests the possibility of undiagnosed celiac disease. Autoimmune diseases typically associated with celiac disease are diabetes mellitus type 1, thyroiditis, gluten ataxia, psoriasis, vitiligo, autoimmune hepatitis, dermatitis herpetiformis, primary sclerosing cholangitis, and others. To evaluate the possible presence of celiac disease, specific serology and duodenal biopsies are required while the person is still on a diet that includes gluten. Serological markers Serological CD markers (IgA tissue transglutaminase [tTGA], IgA endomysial [EmA] and IgG deamidated gliadin peptide [DGP] antibodies) are always negative in those with NCGS; in addition to specific IgA autoantibody levels, it is necessary to determine total IgA levels. IgG tTGA antibodies should be checked in selective IgA deficiency, which can be associated with celiac disease and occurs in as many as one in 40 celiac patients. Nevertheless, the absence of serological markers does not definitively exclude celiac disease.
In those with celiac disease before diagnosis (on a gluten-containing diet), celiac disease serological markers are not always present. As the age of diagnosis increases, these antibody titers decrease, and may be low or even negative in older children and adults. The absence of celiac disease-specific antibodies is more common in patients without villous atrophy who only have duodenal lymphocytosis (Marsh 1 lesions) and who respond to a gluten-free diet with histological and symptomatic improvement. Duodenal biopsies According to the diagnostic criteria established by the consensus conferences (2011 and 2013), it is necessary to perform duodenal biopsies to exclude celiac disease in symptomatic people with negative specific celiac disease antibodies. Because of the patchiness of the celiac disease lesions, four or more biopsies are taken from the second and third parts of the duodenum, and at least one from the duodenal bulb. Even in the same biopsy fragments, different degrees of pathology may exist. Duodenal biopsies in people with NCGS are almost always normal – an essential parameter for diagnosis of NCGS, although it is generally accepted that a subgroup of people with NCGS may have an increased number of duodenal intraepithelial lymphocytes (IELs) (≥25/100 enterocytes), which represent Marsh I lesions. Nevertheless, Marsh I is considered compatible with celiac disease, and the most frequent cause of these findings, especially in people positive for HLA DQ2 and/or DQ8 haplotypes, is celiac disease, with a prevalence of 16-43%. In people with duodenal lymphocytosis – following guidelines from the European Society of Pediatric Gastroenterology, Hepatology and Nutrition (ESPGHAN) – a high count of celiac disease cells (or CD/CD3 ratio) in immunohistochemical assessment of biopsies, or the presence of IgA anti-TG2 and/or anti-endomysial intestinal deposits, might be specific markers for celiac disease. Catassi and Fasano proposed in 2010 that in patients without celiac disease antibodies, either lymphocytic infiltration associated with IgA subepithelial deposits or a histological response to a gluten-free diet could support a diagnosis of celiac disease. Wheat allergy The clinical presentation may be sufficient in most cases to distinguish a wheat allergy from other entities. It is excluded when there are normal levels of serum IgE antibodies to gluten proteins and wheat fractions, and no skin reaction to prick tests for wheat allergy. Nevertheless, these tests are not always completely reliable. If an allergic reaction cannot be clearly identified, the diagnosis should be confirmed by food provocation tests, ideally performed in a double-blinded and placebo-controlled manner. Delayed allergic reactions may occur with these types of tests, which have to be negative over time, but there are no international consensus statements on diagnosing delayed wheat/food-related symptoms. Usually, reactions that appear between two hours and five days after the oral challenge are considered delayed. Mucosal challenge followed by confocal endomicroscopy is a complementary diagnostic technique, but this technology is not yet generally available and remains experimental. Other tests Evaluating the presence of antigliadin antibodies (AGA) can be a useful complementary diagnostic test. Up to 50% of NCGS patients may have elevated AGA IgG antibodies, but rarely AGA IgA antibodies (only 7% of the cases).
In these patients, unlike in those with celiac disease, the IgG AGA became undetectable within six months of following a gluten-free diet. People already on a gluten-free diet Many people remove gluten from their diet after a long history of health complaints and unsuccessful consultations with numerous physicians, who simply consider them to be suffering from irritable bowel syndrome, and some may eliminate gluten before seeking medical attention. This fact can diminish the CD serological markers titers and may attenuate the inflammatory changes found in the duodenal biopsies. In these cases, patients should be tested for the presence of HLA-DQ2/DQ8 genetic markers because a negative HLA-DQ2 and HLA-DQ8 result has a high negative predictive value for celiac disease. If these markers are positive, it is advisable to undertake a gluten challenge under medical supervision, followed by serology and duodenal biopsies. However, gluten challenge protocols have significant limitations, because a symptomatic relapse generally precedes the onset of a serological and histological relapse, and therefore becomes unacceptable for many patients. Gluten challenge is also discouraged before the age of five and during pubertal growth.It remains unclear what daily intake of gluten is adequate and how long the gluten challenge should last. Some protocols recommend eating a maximum of 10 g of gluten per day for six weeks. Nevertheless, recent studies have shown that a two-week challenge of 3 g of gluten per day may induce histological and serological abnormalities in most adults with proven celiac disease. This new proposed protocol has shown higher tolerability and compliance. It has been calculated that its application in secondary-care gastrointestinal practice would identify celiac disease in 7% of patients referred for suspected NCGS, while the remaining 93% would be confirmed as NCGS; this is not yet universally adopted.For people on a gluten-free diet who are unable to perform an oral gluten challenge, an alternative to identify possible celiac disease is an in vitro gliadin challenge of small bowel biopsies, but this test is only available at selected specialized tertiary-care centers. Treatment After exclusion of celiac disease and wheat allergy, the subsequent step for diagnosis and treatment of NCGS is to start a strict gluten-free diet (GFD) to assess if symptoms improve or resolve completely. This may occur within days to weeks of starting a GFD, but improvement may also be due to a non-specific, placebo response. The recovery of the nervous system is slow and sometimes incomplete.Recommendations may resemble those for celiac disease, for the diet to be strict and maintained, with no transgression. The degree of gluten cross contamination tolerated by people with NCGS is not clear but there is some evidence that they can present with symptoms even after consumption of small amounts. Sporadic accidental contaminations with gluten can reactivate movement disorders. A part of people with gluten-related neuropathy or ataxia appears not to be able to tolerate even the traces of gluten allowed in most foods labeled as "gluten-free".Whereas celiac disease requires adherence to a strict lifelong gluten-free diet, it is not yet known whether NCGS is a permanent or a transient condition. The results of a 2017 study suggest that NCGS may be a chronic disorder, as is the case with celiac disease. 
A trial of gluten reintroduction to observe any reaction after one to two years of strict gluten-free diet might be performed. A strict gluten-free diet is effective in most of the neurological disorders associated with NCGS, ameliorating or even resolving the symptoms. It should be started as soon as possible to improve the prognosis. The death of neurons in the cerebellum in ataxia is the result of gluten exposure and is irreversible. Early treatment with a strict gluten-free diet can improve ataxia symptoms and prevent its progression. When dementia has progressed to an advanced degree, the diet has no beneficial effect. Cortical myoclonus appears to be resistant to treatment with both a gluten-free diet and immunosuppression. Persistent symptoms Approximately one third of presumed NCGS patients continue to have symptoms despite gluten withdrawal. Apart from a possible diagnostic error, there are multiple possible explanations. One reason is poor compliance with gluten withdrawal, whether voluntary or involuntary. There may be ingestion of gluten, in the form of cross contamination or food containing hidden sources. In some cases, the amelioration of gastrointestinal symptoms with a gluten-free diet is only partial, and these patients could significantly improve with the addition of a low-FODMAP diet. A subgroup may not improve when eating commercially available gluten-free products, as these can be rich in preservatives and additives such as sulfites, glutamates, nitrates and benzoates, which can also have a role in triggering functional gastrointestinal symptoms. Furthermore, people with NCGS may often present with IgE-mediated allergies to one or more foods. It has been estimated that around 35% suffer other food intolerances, mainly lactose intolerance. History The subject of "food intolerance", including gluten sensitivity and elimination diets, was discussed in 1976. Patients with symptoms including abdominal pain and diarrhea, which improved on gluten withdrawal, and who did not have celiac disease were initially described in 1976 and 1978, with the first series in 1980. Debate regarding the existence of a specific condition has continued since then, but the three consensus conferences held since 2010 produced consistent definitions of NCGS and its diagnostic criteria.
The gluten-free diet has been advocated and followed by many celebrities, such as Miley Cyrus and Gwyneth Paltrow, to lose weight, and by some elite athletes to improve performance. Estimates suggest that in 2014, 30% of people in the US and Australia were consuming gluten-free foods, with estimates that by 2016 approximately 100 million Americans would consume gluten-free products. Data from a 2015 Nielsen survey of 30,000 adults in 60 countries around the world showed that 21% of people prefer to buy gluten-free foods, with interest highest among younger generations. Another school of thought suggests that many people may be avoiding gluten unnecessarily. Debate around NCGS as a genuine clinical condition can be heightened because patients are often self-diagnosed, or a diagnosis is made by alternative health practitioners. Many people who follow a gluten-free diet have not previously had celiac disease excluded or, when they are fully evaluated, alternative diagnoses such as fructose intolerance or small intestinal bacterial overgrowth can be found, or a better response to a low-FODMAP diet may be obtained. Research There are many open questions on gluten sensitivity; one review emphasized that "it is still to be clarified whether this disorder is permanent or transient and whether it is linked to autoimmunity". It has not yet been established whether innate or adaptive immune responses are involved in NCGS, nor whether the condition relates specifically to gluten or rather relates to other components of grains. Studies indicate that AGA IgG is high in slightly more than half of NCGS patients and that, unlike for celiac disease patients, the IgG AGA decreases strongly over six months of a gluten-free diet; AGA IgA is usually low or absent in NCGS patients. The need for developing biomarkers for NCGS is frequently emphasized; for example, one review indicated: "There is a desperate need for reliable biomarkers ... that include clinical, biochemical and histopathological findings which support the diagnosis of NCGS." Research has also attempted to discern, by double-blind placebo-controlled trials, between a "fad component" to the recent popularity of the gluten-free diet and an actual sensitivity to gluten or other components of wheat. In a 2013 double-blind, placebo-controlled challenge (DBPC) by Biesiekierski et al. in a few people with IBS, the authors found no difference between the gluten and placebo groups, and the concept of NCGS as a syndrome was questioned. Nevertheless, this study had design errors and an incorrect selection of participants, and probably the reintroduction of both gluten and whey protein had a similar nocebo effect in all people, which could have masked the true effect of gluten/wheat reintroduction. In a 2015 double-blind placebo cross-over trial, small amounts of purified wheat gluten triggered gastrointestinal symptoms (such as abdominal bloating and pain) and extra-intestinal manifestations (such as foggy mind, depression and aphthous stomatitis) in self-reported NCGS. Nevertheless, it remains elusive whether these findings specifically implicate gluten or proteins present in gluten-containing cereals. In a 2018 double-blind, crossover research study on 59 persons on a gluten-free diet with challenges of gluten, fructans or placebo, intestinal symptoms (specifically bloating) were borderline significantly higher after challenge with fructans, in comparison with gluten proteins (P = 0.049).
Although the differences between the three interventions were very small, the authors concluded that fructans (the specific type of FODMAP found in wheat) are more likely than gluten to be the cause of NCGS gastrointestinal symptoms. In addition, the fructans used in the study were extracted from chicory root, so it remains to be seen whether wheat fructans produce the same effect. See also Gluten-related disorders == References ==
Omphalocele
Omphalocele (or omphalocoele), also called exomphalos, is a rare abdominal wall defect. Beginning at the 6th week of development, rapid elongation of the gut and increased liver size reduce intra-abdominal space, which pushes intestinal loops out of the abdominal cavity. Around the 10th week, the intestine returns to the abdominal cavity, and the process is completed by the 12th week. Persistence of intestine or the presence of other abdominal viscera (e.g. stomach, liver) in the umbilical cord results in an omphalocele. Omphalocele occurs in 1 in 4,000 births and is associated with a high rate of mortality (25%) and severe malformations, such as cardiac anomalies (50%), neural tube defect (40%), exstrophy of the bladder and Beckwith–Wiedemann syndrome. Approximately 15% of live-born infants with omphalocele have chromosomal abnormalities. About 30% of infants with an omphalocele have other congenital abnormalities. Signs and symptoms The sac, which is formed from an outpouching of the peritoneum, protrudes in the midline, through the umbilicus (navel). It is normal for the intestines to protrude from the abdomen, into the umbilical cord, until about the tenth week of pregnancy, after which they return to inside the fetal abdomen. The omphalocele can be mild, with only a small loop of intestines present outside the abdomen, or severe, containing most of the abdominal organs. In severe cases, surgical treatment is made more difficult because the infant's abdomen is abnormally small, having had no need to expand to accommodate the developing organs. Larger omphaloceles are associated with a higher risk of cardiac defects. Complications Complications may occur prenatally, during birth, during management or treatment, or after surgery. Both prenatally and during birth, the exomphalos can rupture. During birth, there may be trauma to the liver in giant omphaloceles. During management, the exomphalos can act as a metabolic drain affecting nitrogen balance, which can lead to failure to thrive, as well as hypothermia. Use of a non-absorbent patch during surgery can lead to wound sepsis post-surgery. Herniation from the patch is also a possibility. Intestinal dysfunction for a few weeks after the surgery is common, so parenteral feeding is continued post-surgery; however, prolonged use may lead to hepatomegaly and cholestasis. If intestinal dysfunction persists, it can lead to intestinal necrosis. Intestinal atresia can occur, in which the mucosa and submucosa of the intestine form a web that obstructs the lumen, leading to malabsorption. Obstruction of the bowel can occur, which results in short bowel syndrome. For the first few years of life there is a high incidence of gastroesophageal reflux, which can be complicated by oesophagitis. Post-surgery, the umbilicus (navel) is deficient or abnormally placed, which many patients dislike. Umbilical reconstruction can be difficult due to scar tissue and lack of extra skin for surgical use, though this can be overcome by using tissue expanders below the skin and umbilicoplasty. Ultimately, prognosis depends on the size of the defect and whether associated abnormalities are present or complications develop. Mortalities and morbidities still occur, with the mortality rate for large omphaloceles with associated abnormalities being higher. Most surviving omphalocele infants have no long-term problems and grow up to be normal individuals. Causes Omphalocele is caused by malrotation of the bowels while returning to the abdomen during development.
Some cases of omphalocele are believed to be due to an underlying genetic disorder, such as Edwards syndrome (trisomy 18) or Patau syndrome (trisomy 13). Beckwith–Wiedemann syndrome is also associated with omphaloceles. Pathophysiology Exomphalos is caused by a failure of the ventral body wall to form and close the naturally occurring umbilical hernia that occurs during embryonic folding which is a process of embryogenesis. The normal process of embryogenesis is that at 2 weeks gestation the human embryo is a flat disc that consists of three layers, the outer ectoderm and inner endoderm separated by a middle layer called the mesoderm. The ectoderm gives rise to skin and the CNS, the mesoderm gives rise to muscle and the endoderm gives rise to organs. The focus areas for exomphalos are that the ectoderm will form the umbilical ring, the mesoderm will form the abdominal muscles and the endoderm will form the gut. After the disc becomes tri-layered, it undergoes growth and folding to transform it from disc to cylinder shaped. The layer of ectoderm and mesoderm in the dorsal axis grow ventrally to meet at the midline. Simultaneously, the cephalic (head) and caudal (tail) ends of these layers of the disc fold ventrally to meet the lateral folds in the center. The meeting of both axis at the center form the umbilical ring. Meanwhile, the endoderm migrates to the center of this cylinder.By the fourth week of gestation the umbilical ring is formed. During the 6th week the midgut rapidly grows from the endoderm which causes a herniation of the gut through the umbilical ring. The gut rotates as it re-enters the abdominal cavity which allows for the small intestine and colon to migrate to their correct anatomical position by the end of the 10th week of development. This process fails to occur normally in cases of exomphalos, resulting in abdominal contents protruding from the umbilical ring.Gut contents fail to return to the abdomen due to a fault in myogenesis (muscle formation and migration during embryogenesis). During embryogenesis the mesoderm that forms muscle divides into several somites that migrate dorso-ventrally towards the midline. The somites develop three parts that are sclerotome which will form bone, dermatome which will form skin of the back and myotome which will form muscle. The somites that remain close to the neural tube at the back of the body have epaxial myotome, whilst the somites that migrate to the midline have hypaxial myotome. The hypaxial myotome forms the abdominal muscles. The myotome cells will give rise to myoblasts (embryonic progenitor cells) which will align to form myotubules and then muscle fibers. Consequently, the myotome will become three muscle sheets that form the layers of abdominal wall muscles. The muscle of concern for exomphalos is the rectus abdominis. In the disease the muscle undergoes normal differentiation but fails to expand ventro-medially and narrow the umbilical ring which causes the natural umbilical hernia that occurs at 6 weeks of gestation to remain external to the body.The location of the folding defect in the embryo determines the ultimate position of the exomphalos. A cephalic folding defect results in an epigastric exomphalos that is positioned high up on the abdomen which can be seen in the chromosomal defect pentalogy of Cantrell. Lateral folding defects result in a typical exomphalos that is positioned in the middle of the abdomen. A caudal folding defect results in a hypogastric exomphalos that is positioned on the lower abdomen. 
Genetics The genetic causes of exomphalos are controversial and subject to ongoing research. Exomphalos is strongly associated with chromosomal defects, and thus these are being explored to pinpoint the genetic cause of the disease. Studies in mice have indicated that mutations in the fibroblast growth factor receptors 1 and 2 (Fgfr1, Fgfr2) cause exomphalos. Fibroblast growth factor (FGF) encourages the migration of myotubules during myogenesis. When FGF runs out, myoblasts stop migrating, cease division and differentiate into myotubules that form muscle fibers. Mutations in homeobox genes such as Alx4, which direct the formation of body structures during early embryonic development, cause exomphalos in mice. Mutations in the insulin-like growth factor 2 gene (IGF2) and its associated receptor gene IGF2R cause high levels of IGF-2 protein in humans, which leads to exomphalos in the associated disease Beckwith–Wiedemann syndrome (BWS). IGF2R is responsible for degradation of excess IGF-2 protein. BWS is caused by a mutation in chromosome 11 at the locus where the IGF2 gene resides. Observation of the inheritance patterns of the associated anomalies through pedigrees shows that exomphalos can be the result of autosomal dominant, autosomal recessive and X-linked inheritance. Environmental factors It is not well established whether actions of the mother could predispose to or cause the disease. Alcohol use during the first trimester, heavy smoking, use of certain medications such as selective serotonin-reuptake inhibitors and methimazole (an anti-thyroid drug) during pregnancy, maternal febrile illness, IVF, parental consanguinity and obesity elevate the risk of a woman giving birth to a baby with exomphalos. Preventive methods that could be utilised by mothers include ingestion of a preconception multivitamin and supplementation with folic acid. Termination of pregnancy may be considered if a large exomphalos with associated congenital abnormalities is confirmed during prenatal diagnosis. Diagnosis Related conditions Gastroschisis is a similar birth defect, but in gastroschisis the umbilical cord is not involved and the intestinal protrusion is usually to the right of the midline. Parts of organs may be free in the amniotic fluid and not enclosed in a membranous (peritoneal) sac. Gastroschisis is less frequently associated with other defects than omphalocele. Omphalocele occurs more frequently with increased maternal age. Other related syndromes are Shprintzen–Goldberg, pentalogy of Cantrell, Beckwith–Wiedemann and OEIS complex (omphalocele, exstrophy of the cloaca, imperforate anus, spinal defects). After surgery, a child with omphalocele will have some degree of intestinal malrotation. Due to intestinal malrotation, 4.4% of children with omphalocele will experience a midgut volvulus in the days, months, or years after surgery. Parents of children with omphalocele should seek immediate medical attention if their child displays signs and symptoms of an intestinal obstruction at any point in their childhood to avoid the possibility of bowel necrosis or death. Some experts differentiate exomphalos and omphalocele as two related conditions, one worse than the other; in this sense, exomphalos involves a stronger covering of the hernia (with fascia and skin), whereas omphalocele involves a weaker covering of only a thin membrane. Others consider the terms synonymous names for any degree of herniation and covering. Screening An omphalocele is often detected through AFP screening or a detailed fetal ultrasound. 
Genetic counseling and genetic testing such as amniocentesis are usually offered during the pregnancy. Management No treatment is required prenatally unless there is a rupture of the exomphalos within the mother. An intact exomphalos can be delivered safely vaginally, and C-sections are also acceptable if obstetrical reasons require them. There appears to be no advantage to delivery by C-section unless it is for a giant exomphalos that contains most of the liver. In this case, vaginal delivery may result in dystocia (inability of the baby to exit the pelvis during birth) and liver damage. Immediately after birth, a nasogastric tube is required to decompress the intestines, and endotracheal intubation is needed to support respiration. The exomphalos sac is kept warm and covered with a moist saline gauze and a transparent plastic bowel bag to prevent fluid loss. The neonate also requires fluid, vitamin K and antibiotic administration intravenously. After management strategies are applied, a baby with an intact sac is medically stable and does not require urgent surgery. This time is used to assess the newborn to rule out associated anomalies prior to surgical closure of the defect. Studies show there is no significant difference in survival between immediate and delayed closure. Surgery can be performed directly for small omphaloceles, which will require a short stay in the nursery department, or in a staged manner for large omphaloceles, which will require a stay of several weeks. Staged closure requires a temporary artificial holding sac (a silo) to be placed over the abdominal organs and sutured to the abdominal wall. This can be made of non-adhesive dressing. The silo is gradually reduced in size at least once daily until all of the viscera have been returned to the abdominal cavity. This is repeated for several days to a week until surgical closure of the fascia/skin can be done. Closure may require a patch that can be rigid or non-rigid and made of natural biomaterials such as bovine pericardium or of artificial materials. The skin is then closed over the patch and is re-vascularised by the body's liver blood vessels post-surgery. The staged surgery is required, as rushed reduction of the exomphalos compromises venous return and ventilation, as it raises intra-abdominal pressure. In some cases, stretching of the abdominal wall to accommodate intestinal contents may be required. Non-operative therapy uses escharotic ointments. This is used for infants with large omphaloceles that have been born prematurely with respiratory insufficiency and associated chromosomal defects, as they would not be able to tolerate surgery. The ointment causes the sac to granulate and epithelialize, which leaves a residual large ventral hernia that can be repaired later with surgery when the baby is more stable. After surgery, for larger omphaloceles, mechanical ventilation and parenteral nutrition are needed to manage the baby. Society and culture Awareness Day International Omphalocele Awareness Day is celebrated annually in the US on January 31, as part of Birth Defect Awareness Month. Several U.S. states have passed resolutions to officially recognize the date.
Spasm of accommodation
A spasm of accommodation (also known as a ciliary spasm, an accommodation, or accommodative spasm) is a condition in which the ciliary muscle of the eye remains in a constant state of contraction. Normal accommodation allows the eye to "accommodate" for near-vision. However, in a state of perpetual contraction, the ciliary muscle cannot relax when viewing distant objects. This causes vision to blur when attempting to view objects from a distance. This may cause pseudomyopia or latent hyperopia. Although antimuscarinic drops (homatropine 5%) can be applied topically to relax the muscle, this leaves the individual without any accommodation and, depending on refractive error, unable to see well at near distances. Also, excessive pupil dilation may occur as an unwanted side effect. This dilation may pose a problem since a larger pupil is less efficient at focusing light (see pupil, aperture, and optical aberration for more). Patients who have accommodative spasm may benefit from being given glasses or contacts that account for the problem or from using vision therapy techniques to regain control of the accommodative system. Possible clinical findings include: normal amplitude of accommodation, normal near point of convergence, reduced negative relative accommodation, and difficulty clearing plus on facility testing. Treatments Cycloplegic Eye Drops (Dilation) Spasm of accommodation is frequently resistant to treatment. However, some patients do find relief through the use of daily eye dilation with cycloplegic drops. One side effect of cycloplegic drops is that they often have BAK as a preservative ingredient, which, with daily use, can erode the tear shield: At each administration of an eye drop containing benzalkonium chloride, its detergent effect disrupts the lipid layer of the tear film. This cannot be regenerated and can no longer protect the aqueous layer of the tear film, which evaporates easily. In these circumstances, the cornea is exposed and eye dryness occurs. In addition, benzalkonium chloride has a cellular toxicity on caliciform cells, entailing a reduction in the amount of mucin, an additional reason for disrupting the tear film. In fact, none of the cycloplegic drops used to treat spasm of accommodation in the United States are available without BAK. This unfortunately makes treatment much more difficult, as the side effects of dry eyes and corneal damage can occur. France, Australia, Canada, and the United Kingdom do have limited availability of BAK-free eye drops available in unidose, and they must be imported to the United States with a physician's letter to the FDA enclosed with the imported prescription. Due to the high potential of tear shield damage with long-term use and the associated dry eye condition caused by cycloplegic eye drops with BAK (preservative), many physicians do not recommend cycloplegic eye drops. In difficult cases, "cycloplegic agents are highly favored to break spasm quickly and may be more economical compared to other conventional therapies". Cyclopentolate, atropine, tropicamide, and homatropine are the typical cycloplegic eye drops used once daily to treat spasm of accommodation by relaxing the ciliary muscle. One side effect is blurred vision, since these induce dilation. Vision Training Vision therapy administered by a trained optometrist has shown a success rate of over 70%. 
Surgery Multifocal intraocular lens implantation, involving clear lens extraction followed by implantation of a multifocal intraocular lens, is a new possible treatment, but it may not be appropriate for patients who have had resistant spasm of accommodation for a long period of time. Research Experimental Nitroglycerin and Nitric Oxide Animal studies have found that nitroglycerin, a vasodilator used to treat angina, relaxes the ciliary muscle and may hold hope for those suffering from spasm of accommodation. Nitroglycerin is currently being investigated as a treatment for glaucoma, and has been shown to decrease intraocular pressure and relax the ciliary muscle. According to the Investigative Ophthalmology & Visual Science journal, "In a nonhuman primate study, topical administration of nitroglycerin at a dose of 0.1% significantly decreased IOP in normotensive animals after 90 minutes". Further, according to Wiederholt, Sturm, and Lepple-Wienhues, "The data indicate (indicates [sic]) that an increase of intracellular cGMP by application of cGMP and organic nitrate or non-nitrate vasodilators induces relaxation of the bovine trabecular meshwork and ciliary muscle". Experimental Perilla frutescens Since spasm of accommodation is a result of contraction of the ciliary muscle, the goal would be to relax the ciliary muscle. New studies conducted on rats using Perilla frutescens aqueous extract have shown it to relax the ciliary muscle. Since there are no known drugs to treat this eye condition, Perilla frutescens in an aqueous extract form may result in the relaxation of the ciliary muscle in humans as well. Perilla frutescens is currently used in traditional medicine in Korea, Japan, and China, and a clinical study "showed that PFA (perilla frutescens extract) attenuates eye fatigue by improving visual accommodation". Prognosis For routine cases of spasm of accommodation, the American Optometric Association says the prognosis is fair and that, on average, a patient will need 1–2 visits for evaluation and 10 follow-up visits. Additionally, the AOA recommends the following management plan for spasm of accommodation: "Begin with plus lenses and VT; if VT fails, use cycloplegic agent temporarily; educate patient". For more chronic and acute cases that do not respond to vision training and cycloplegic drops, the eye muscles should weaken with advancing age, providing intermittent or permanent relief from this condition. See also Miosis Cycloplegia Pseudomyopia
Nasal polyp
Nasal polyps (NP) are noncancerous growths within the nose or sinuses. Symptoms include trouble breathing through the nose, loss of smell, decreased taste, post nasal drip, and a runny nose. The growths are sac-like, movable, and nontender, though face pain may occasionally occur. They typically occur in both nostrils in those who are affected. Complications may include sinusitis and broadening of the nose. The exact cause is unclear. They may be related to chronic inflammation of the lining of the sinuses. They occur more commonly among people who have allergies, cystic fibrosis, aspirin sensitivity, or certain infections. The polyp itself represents an overgrowth of the mucous membranes. Diagnosis may be accomplished by looking up the nose. A CT scan may be used to determine the number of polyps and help plan surgery. Treatment is typically with steroids, often in the form of a nasal spray. If this is not effective, surgery may be considered. The condition often recurs following surgery; thus, continued use of a steroid nasal spray is often recommended. Antihistamines may help with symptoms but do not change the underlying disease. Antibiotics are not required for treatment unless an infection occurs. About 4% of people currently have nasal polyps while up to 40% of people develop them at some point in their life. They most often occur after the age of 20 and are more frequent in males than females. Nasal polyps have been described since the time of the Ancient Egyptians. Signs and symptoms Symptoms of polyps include nasal congestion, sinusitis, loss of smell, thick nasal discharge, facial pressure, nasal speech, and mouth breathing. Recurrent sinusitis can result from polyps. Long-term, nasal polyps can cause destruction of the nasal bones and widening of the nose. As polyps grow larger, they eventually prolapse into the nasal cavity, resulting in symptoms. The most prominent symptom of nasal polyps is blockage of the nasal passage. People with nasal polyps due to aspirin intolerance often have symptoms known as Samter's triad, which consists of asthma worse with aspirin, a skin rash caused by aspirin, and chronic nasal polyps. Causes The exact cause of nasal polyps is unclear. They are, however, commonly associated with conditions that cause long term inflammation of the sinuses. This includes chronic rhinosinusitis, asthma, aspirin sensitivity, and cystic fibrosis. Various additional diseases associated with polyp formation include: Chronic rhinosinusitis is a common medical condition characterized by symptoms of sinus inflammation lasting at least 12 weeks. The cause is unknown and the role of microorganisms remains unclear. It can be classified as either with or without nasal polyposis. Cystic fibrosis (CF) is the most common cause of nasal polyps in children. Therefore, any child under 12 to 20 years old with nasal polyps should be tested for CF. Half of people with CF will experience extensive polyps leading to nasal obstruction and requiring aggressive management. Pathophysiology The true cause of nasal polyps is unknown, but they are thought to be due to recurrent infection or inflammation. Polyps arise from the lining of the sinuses. Nasal mucosa, particularly in the region of the middle meatus, becomes swollen due to collection of extracellular fluid. This extracellular fluid collection causes polyp formation and protrusion into the nasal cavity or sinuses. 
Polyps, which are sessile in the beginning, become pedunculated due to gravity. In people with nasal polyps due to aspirin or NSAID sensitivity, the underlying mechanism is due to disorders in the metabolism of arachidonic acid. Exposure to cyclooxygenase inhibitors such as aspirin and NSAIDs leads to shunting of products through the lipoxygenase pathway, leading to an increased production of products that cause inflammation. In the airway, these inflammatory products lead to symptoms of asthma such as wheezing as well as nasal polyp formation. Diagnosis Nasal polyps can be seen on physical examination inside of the nose and are often detected during the evaluation of symptoms. On examination, a polyp will appear as a visible mass in the nostril. Some polyps may be seen with anterior rhinoscopy (looking in the nose with a nasal speculum and a light), but frequently, they are farther back in the nose and must be seen by nasal endoscopy. Nasal endoscopy involves passing a small, rigid camera with a light source into the nose. An image is projected onto a screen in the office so the doctor can examine the nasal passages and sinuses in greater detail. The procedure is not generally painful, but the person can be given a spray decongestant and local anesthetic to minimize discomfort. Attempts have been made to develop scoring systems to determine the severity of nasal polyps. Proposed staging systems take into account the extent of polyps seen on endoscopic exam and the number of sinuses affected on CT imaging. This staging system is only partially validated, but in the future, may be useful for communicating the severity of disease, assessing treatment response, and planning treatment. Types There are two primary types of nasal polyps: ethmoidal and antrochoanal. Ethmoidal polyps arise from the ethmoid sinuses and extend through the middle meatus into the nasal cavity. Antrochoanal polyps usually arise in the maxillary sinus and extend into the nasopharynx and represent only 4–6% of all nasal polyps. However, antrochoanal polyps are more common in children, comprising one-third of all polyps in this population. Ethmoidal polyps are usually smaller and multiple while antrochoanal polyps are usually single and larger. CT scan A CT scan can show the full extent of the polyp, which may not be fully appreciated with physical examination alone. Imaging is also required for planning surgical treatment. On a CT scan, a nasal polyp generally has an attenuation of 10–18 Hounsfield units, which is similar to that of mucus. Nasal polyps may have calcification. Histology On histologic examination, nasal polyps consist of hyperplastic edematous (excess fluid) connective tissue with some seromucous glands and cells representing inflammation (mostly neutrophils and eosinophils). Polyps have virtually no neurons. Therefore, the tissue that makes up the polyp does not have any tissue sensation and the polyp itself will not be painful. In early stages, the surface of the nasal polyp is covered by normal respiratory epithelium, but later it undergoes metaplastic change to squamous-type epithelium with the constant irritation and inflammation. The submucosa shows large intercellular spaces filled with serous fluid. Differential diagnosis Other disorders can mimic the appearance of nasal polyps and should be considered if a mass is seen on exam. Examples include encephalocele, glioma, inverted papilloma, and cancer. 
Early biopsy is recommended for unilateral nasal polyps to rule out more serious conditions such as cancer, inverted papilloma, or fungal sinusitis. Treatment The first line of treatment for nasal polyps is topical steroids. Steroids decrease the inflammation of the sinus mucosa to decrease the size of the polyps and improve symptoms. Topical preparations are preferred in the form of a nasal spray but are often ineffective for people with many polyps. Steroids by mouth often provide drastic symptom relief, but should not be taken for long periods of time due to their side effects. Because steroids only shrink the size and swelling of the polyp, people often have recurrence of symptoms once the steroids are stopped. Decongestants do not shrink the polyps, but can decrease swelling and provide some relief. Antibiotics are only recommended if the person has a co-occurring bacterial infection. In people with nasal polyps caused by aspirin or NSAIDs, avoidance of these medications will help with symptoms. Aspirin desensitization has also been shown to be beneficial. Surgery Endoscopic sinus surgery, advocated and popularized by Professor Stammberger, is often very effective for most people, providing rapid symptom relief. Endoscopic sinus surgery is minimally invasive and is done entirely through the nostril with the help of a camera. Surgery should be considered for those with complete nasal obstruction, uncontrolled runny nose, nasal deformity caused by polyps, or continued symptoms despite medical management. Surgery serves to remove the polyps as well as the surrounding inflamed mucosa, open obstructed nasal passages, and clear the sinuses. This not only removes the obstruction caused by the polyps themselves, but allows medications such as saline irrigations and topical steroids to become more effective. It has been suggested that one of the main objectives in sinus surgery for polyps is to allow delivery of the steroids into those areas of the sinuses where polyps develop, namely, the ethmoid sinuses. Specially designed long nozzles have been developed for postoperative use to deliver steroids into those areas after sinus surgery for polyps. Surgery lasts approximately 45 minutes to 1 hour and can be done under general or local anesthesia. Most people tolerate the surgery without much pain, though this can vary from person to person. The person should expect some discomfort, congestion, and drainage from the nose in the first few days after surgery, but this should be mild. Complications from endoscopic sinus surgery are rare, but can include bleeding and damage to other structures in the area, including the eye or brain. Many physicians recommend a course of oral steroids prior to surgery to reduce mucosal inflammation, decrease bleeding during surgery, and help with visualization of the polyps. Nasal steroid sprays should be used preventatively after surgery to delay or prevent recurrence. People often have recurrence of polyps even following surgery. Therefore, continued follow up with a combination of medical and surgical management is preferred for the treatment of nasal polyps. Epidemiology Nasal polyps resulting from chronic rhinosinusitis affect approximately 4.3% of the population. Nasal polyps occur more frequently in men than women and are more common as people get older, increasing drastically after the age of 40. Of people with chronic rhinosinusitis, 10% to 54% also have allergies. An estimated 40% to 80% of people with sensitivity to aspirin will develop nasal polyposis. 
In people with cystic fibrosis, nasal polyps are noted in 37% to 48%.
Peripheral T-cell lymphoma
Peripheral T-cell lymphoma refers to a group of T-cell lymphomas that develop away from the thymus or bone marrow. Examples include: Cutaneous T-cell lymphomas Angioimmunoblastic T-cell lymphoma Extranodal natural killer/T-cell lymphoma, nasal type Enteropathy-type T-cell lymphoma Subcutaneous panniculitis-like T-cell lymphoma Anaplastic large cell lymphoma Peripheral T-cell lymphoma, not otherwise specified. In ICD-10, cutaneous T-cell lymphomas are classified separately.
Confusional arousals
Confusional arousals are classified as "partial awakenings in which the state of consciousness remains impaired for several minutes without any accompanying major behavioural disorders or severe autonomic responses". Complete or partial amnesia of the episodes may be present. Signs and symptoms Confusional arousals are accompanied by mental confusion and disorientation, relative lack of response to environmental stimuli, and difficulty in awakening the subject. Vocalisation accompanied by coherent speech is common. Patients may appear upset, and some of them become aggressive or agitated. As with children, attempting to awaken or console an adult patient may increase agitation. Confusional arousals can occur during or following an arousal from deep sleep (see slow-wave sleep) and upon an attempt to awaken the subject from sleep in the morning. In children, confusional arousals can often be reproduced artificially by awakening the child during deep sleep. However, this does not have any clinical significance without deeper investigation. Children experiencing an episode of confusional arousal typically sit up in bed, whimper, cry, moan, and may utter words like "no" or "go away". They remain distressed and inconsolable despite all parental efforts. Paradoxically, parental efforts can instead increase the agitation of the child. The onset of symptoms is usually between 2 and 3 hours after sleep onset (at the time of transition from slow-wave sleep to a lighter sleep stage) and those events can last from 10 to 30 minutes. Patients generally wake up without any recollection of the event. Confusional arousals in adults must be distinguished from those in children. Neurological symptomatology Confusional arousals are associated with behavioural awakening with persistent slow-wave electroencephalographic activity (see slow-wave sleep) during non-rapid eye movement (NREM) sleep. This suggests that the sensorimotor network is activated while non-sensorimotor areas are still "asleep". The altered state of consciousness may be explained by hypersynchronous delta activity (see delta wave) in a network involving the frontoparietal cortices (suggesting these regions are "asleep"), and higher frequency activities in sensorimotor, orbitofrontal, and temporal lateral cortices (suggesting an "awakening"). Sleep-related violence and abnormal sexual behaviours Confusional arousals have often been linked to sleep-related violence (self-injury or injury to the bed partner). The latter highlights important medical and legal issues when such behaviours are suspected and purported to have caused a criminal offense. The first documented case of homicide as a result of confusional arousal, reported in medieval times, was that of the Silesian woodcutter Bernard Schedmaizig. Sleep-related abnormal sexual behaviours (also called sexsomnia or sleep sex) are mainly classified as confusional arousals and more rarely associated with sleepwalking (also known as somnambulism). Although sleep-related violence may occur during an episode of confusional arousal, it remains extremely rare, and there is no specific predisposition to aggression during these episodes. Distinction between sleepwalking and night terrors Violent behaviours in confusional arousals differ slightly from those in sleepwalking or night terrors. Above all, during an episode of confusional arousal the patient never leaves the bed, unlike in sleepwalking. 
A bed partner or parent who tries to calm or restrain the patient by grabbing him or her may trigger a violent reaction, as with sleepwalkers. In the case of a confusional arousal triggered by an attempt to awaken the patient, violent behaviours may occur almost spontaneously. Unlike confusional arousals and sleepwalking, patients experiencing night terrors seem to react to some type of frightening image. Therefore, the violent reaction may occur if another individual is encountered or is in proximity. Classification International Classification of Sleep Disorders (ICSD) According to the 2nd edition of the International Classification of Sleep Disorders (ICSD-2), confusional arousals are classified among the NREM parasomnias, embedded in the non-epileptic paroxysmal motor events during sleep, which include (1) parasomnias, (2) sleep-related movement disorders and (3) isolated symptoms, apparently normal variants and unresolved issues. NREM parasomnias (or disorders of arousal) also include sleep terrors (see night terror) and sleepwalking. Confusional arousals are characterised by more or less complex movements without leaving the bed, with whimpering, sitting up in bed and some articulation, without walking or terror. In comparison with other arousal parasomnias, the age of onset of sleepwalking is generally between 5 and 10 years, whereas confusional arousals and sleep terrors may occur 3 years earlier. Sleep terrors are mainly characterised by screaming, agitation, flushed face and sweating, and share only inconsolability with confusional arousals. The current 3rd edition of the International Classification of Sleep Disorders (ICSD-3) added sleep-related eating disorders to the disorders of arousal from NREM sleep. Diagnostic and Statistical Manual of Mental Disorders (DSM) Confusional arousals are at present not considered a disorder in the current 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). This absence may be explained by the fact that confusional arousals have been understudied by the scientific community. Diagnosis The evaluation "should include a comprehensive medical history, a physical, neurological, and developmental examination, and a detailed description of the nocturnal events, sleep-wake schedules, and daytime behaviour". However, the episodes have a long duration and a low rate of same-night recurrence. Even if amnesia usually follows episodes of confusional arousal, it is not a distinct trait related to severity. A video-polysomnography (see polysomnography) might be required if the history is atypical. In cases of suspicion, parents are encouraged to use an infrared camera to record the behaviour of their child during sleep. Associating video recordings of nocturnal episodes with historical features is an important tool for both understanding the disorder and correctly distinguishing it from other episodes of parasomnia. Confusional arousals, as well as arousal parasomnias in general, must be distinguished from epileptic seizures on the basis of clinical and electroencephalographic features (see electroencephalography). Management Children mostly outgrow the condition by late adolescence if not sooner. Management includes mainly non-pharmacological treatments and daily behavioural guidelines, but may include safety measures and/or medications if the patient is in danger from his or her behaviour: Ensure regular and adequate sleep routines in order to prevent the sleep-wake cycle from being disrupted. 
Use of safety measures for the patient and family by clearing the bedroom of obstacles, securing the windows, or installing locks or alarms. Medications are necessary if the patient is in danger from his or her behaviour. In this case, imipramine or low-dose clonazepam is beneficial. Epidemiology The current prevalence of confusional arousals varies according to the year and the sample population and is approximately 4% (4.2% in 1999 in a UK sample population; 6.1% (15–24 years old), 3.3% (25–34 y.o.) and 2% (35+ y.o.) in 2000 in a UK, Germany and Italy sample population; and 6.9% in 2010 in a Norway sample population, with a lifetime prevalence of 18.5%). The current prevalence of confusional arousals in children (3–13 y.o.) is higher, at around 17.3%. Confusional arousals without a known cause or associated condition are uncommon (about 1% of cases). The contribution of genetics and family history is strong, and episodes of confusional arousals can occur in several members of the same family. Risk factors Some independent risk factors associated with confusional arousals have been identified. According to studies, they are shift work, hypnagogic hallucinations (also known as hypnagogia), excessive daytime sleepiness, insomnia and hypersomnia disorder, circadian rhythm sleep disorder, restless legs syndrome, obstructive sleep apnea syndrome (OSAS), bipolar disorder, daily smoking, and age of 15–24 years. These risk factors of confusional arousals are somewhat related to mental disorders and medical conditions and affect mostly younger subjects regardless of gender. Precipitating factors include sleep deprivation, use of hypnotics or tranquilisers before bedtime, and sudden awakening from sleep (e.g., telephone ringing, alarm clock). In the ICSD-2, alcohol intake was considered a precipitating factor for confusional arousals. In the ICSD-3, the relationship between alcohol use and disorders of arousal has been excluded. Moreover, alcohol blackout has been added as a differential diagnosis. These changes have important implications for forensic cases.
General paresis of the insane
General paresis, also known as general paralysis of the insane (GPI), paralytic dementia, or syphilitic paresis, is a severe neuropsychiatric disorder, classified as an organic mental disorder, caused by late-stage syphilis and the chronic meningoencephalitis and cerebral atrophy that are associated with this late stage of the disease when left untreated. GPI differs from mere paresis, as mere paresis can result from multiple other causes and usually does not affect cognitive function. Degenerative changes caused by GPI are associated primarily with the frontal and temporal lobar cortex. The disease affects approximately 7% of individuals infected with syphilis, and is far more common in third world countries, where fewer options for timely treatment are available. It is more common among men. GPI was originally considered to be a type of madness due to a dissolute character, when first identified in the early 19th century. The condition's connection with syphilis was discovered in the late 1880s. Progressively, with the discovery of organic arsenicals such as Salvarsan and Neosalvarsan (1910s), the development of pyrotherapy (1920s), and the widespread availability and use of penicillin in the treatment of syphilis (1940s), the condition was rendered avoidable and curable. Prior to this, GPI was inevitably fatal, and it accounted for as much as 25% of the primary diagnoses for residents in public psychiatric hospitals. Signs and symptoms Symptoms of the disease first appear from 10 to 30 years after infection. Incipient GPI is usually manifested by neurasthenic difficulties, such as fatigue, headaches, insomnia, dizziness, etc. As the disease progresses, mental deterioration and personality changes occur. Typical symptoms include loss of social inhibitions, asocial behavior, gradual impairment of judgment, concentration and short-term memory, euphoria, mania, depression, or apathy. Subtle shivering, minor defects in speech and Argyll Robertson pupil may become noticeable. Delusions, common as the illness progresses, tend to be poorly systematized and absurd. They can be grandiose, melancholic, or paranoid. These delusions include ideas of great wealth, immortality, thousands of lovers, unfathomable power, apocalypse, nihilism, self-guilt, self-blame, or bizarre hypochondriacal complaints. Later, the patient experiences dysarthria, intention tremors, hyperreflexia, myoclonic jerks, confusion, seizures and severe muscular deterioration. Eventually, the paretic dies bedridden, cachectic and completely disoriented, frequently in a state of status epilepticus. Diagnosis The diagnosis could be differentiated from other known psychoses and dementias by a characteristic abnormality in eye pupil reflexes (Argyll Robertson pupil), and, eventually, the development of muscular reflex abnormalities, seizures, memory impairment (dementia) and other signs of relatively pervasive neurocerebral deterioration. Definitive diagnosis is based on the analysis of cerebrospinal fluid and tests for syphilis. Prognosis Although there were recorded cases of remission of the symptoms, especially if they had not passed beyond the stage of psychosis, these individuals almost invariably experienced relapse within a few months to a few years. Otherwise, the patient was seldom able to return home because of the complexity, severity and unmanageability of the evolving symptom picture. 
Eventually, the patient would become completely incapacitated, bedridden, and would die, the process taking about three to five years on average. History While retrospective studies have found earlier instances of what may have been the same disorder, the first clearly identified examples of paresis among the insane were described in Paris after the Napoleonic Wars. General paresis of the insane was first described as a distinct disease in 1822 by Antoine Laurent Jesse Bayle. General paresis most often struck people (men far more frequently than women) between 20 and 40 years of age. By 1877, for example, the superintendent of an asylum for men in New York reported that in his institution this disorder accounted for more than 12% of admissions and more than 2% of deaths. Originally, the cause was believed to be an inherent weakness of character or constitution. While Friedrich von Esmarch and the psychiatrist Peter Willers Jessen (junior) had asserted as early as 1857 that syphilis caused general paresis (progressive Paralyse), progress toward the general acceptance of this idea by the medical community was only accomplished later by the eminent 19th-century syphilographer Jean Alfred Fournier (1832–1914). In 1913 all doubt about the syphilitic nature of paresis was finally eliminated when Hideyo Noguchi and J. W. Moore demonstrated the syphilitic spirochaetes in the brains of paretics. In 1917 Julius Wagner-Jauregg discovered that malaria therapy (in this case, the medical induction of a fever), involving infecting paretic patients with malaria, could halt the progression of general paresis. He won a Nobel Prize for this discovery in 1927. After World War II the use of penicillin to treat syphilis made general paresis a rarity: even patients manifesting early symptoms of actual general paresis were capable of full recovery with a course of penicillin. The disorder is now virtually unknown outside developing countries, and even there the epidemiology is substantially reduced. Some notable cases of general paresis: General Ranald S. Mackenzie was retired from the US Army in 1884 for "general paresis of the insane", 5 years before his death in 1889. Theo Van Gogh, brother of painter Vincent van Gogh, died six months after Vincent in 1891 from "dementia paralytica", or what is now called syphilitic paresis. The Chicago gangster Al Capone died of syphilitic paresis, having contracted syphilis in a brothel in 1919 and not having been properly treated for it in time to prevent the later onset of paresis. See also Karolina Olsson Neurosyphilis Tabes dorsalis Tuskegee experiment
Complex regional pain syndrome
Complex regional pain syndrome (CRPS) is any of several painful conditions that are characterized by a continuing (spontaneous and/or evoked) regional pain that is seemingly disproportionate in time or degree to the usual course of any known trauma or other lesion. Usually starting in a limb, it manifests as extreme pain, swelling, limited range of motion, and changes to the skin and bones. It may initially affect one limb and then spread throughout the body; 35% of affected people report symptoms throughout their whole bodies. Two types exist: reflex sympathetic dystrophy (RSD) and causalgia. Having both types is possible. Classification The classification system in use by the International Association for the Study of Pain (IASP) divides CRPS into two types. It is recognised that people may exhibit both types of CRPS. Signs and symptoms Clinical features of CRPS have been found to be inflammation resulting from the release of certain proinflammatory chemical signals from the nerves, sensitized nerve receptors that send pain signals to the brain, dysfunction of the local blood vessels' ability to constrict and dilate appropriately, and maladaptive neuroplasticity. The signs and symptoms of CRPS usually manifest near the injury site. The most common symptoms are extreme pain, including burning, stabbing, grinding, and throbbing. The pain is out of proportion to the severity of the initial injury. Moving or touching the limb is often intolerable. With a diagnosis of either CRPS type I or II, patients may develop burning pain and allodynia. Both syndromes are also characterized by autonomic dysfunction, which presents with temperature changes (usually localized, but can be global), cyanosis, and/or edema. The patient may also experience localized swelling; extreme sensitivity to nonpainful stimuli such as wind, water, noise, and vibrations; extreme sensitivity to touch (by themselves, other people, and even light clothing or bedding/blankets); abnormally increased sweating (or absent sweating); changes in skin temperature (alternating between overly warm and cold); changes in skin colouring (from white and mottled to bright red or reddish-violet); changes in skin texture (waxy, shiny, thin, tight skin); softening and thinning of bones; joint tenderness or stiffness; changes in nails and hair (delayed or increased growth, brittle nails/hair that easily break); muscle spasms; muscle loss (atrophy); tremors; dystonia; allodynia; hyperalgesia; and decreased/restricted ability and painful movement of the affected body part. Drop attacks (falls), almost fainting, and fainting spells are infrequently reported, as are visual problems. The symptoms of CRPS vary in severity and duration. Since CRPS is a systemic problem, potentially any organ can be affected. Symptoms may change over time, and they can vary from person to person. The more dynamic symptoms, especially vascular aspects (edema, temperature) and location of pain, can change numerous times a day. Previously, CRPS was considered to have three stages; however, more recent studies suggest people affected by CRPS do not progress through sequential stages, and the staging system is no longer in wide use. Growing evidence instead points towards distinct sub-types of CRPS. Cause Complex regional pain syndrome is uncommon, and its cause is not clearly understood. CRPS typically develops after an injury, surgery, heart attack, or stroke. 
Investigators estimate that 2–5% of those with peripheral nerve injury, and 13–70% of those with hemiplegia (paralysis of one side of the body), will develop CRPS. In addition, some studies have indicated that cigarette smoking was strikingly present in patients and is statistically linked to RSD. This may be involved in its pathology by enhancing sympathetic activity, vasoconstriction, or by some other unknown neurotransmitter-related mechanism. This hypothesis was based on a retrospective analysis of 53 patients with RSD, which showed that 68% of patients and only 37% of controls were smokers. The results are preliminary and are limited by their retrospective nature. 7% of people who have CRPS in one limb later develop it in another limb. Pathophysiology Inflammation and alteration of pain perception in the central nervous system are proposed to play important roles. The persistent pain and the perception of nonpainful stimuli as painful are thought to be caused by inflammatory molecules (IL-1, IL-2, TNF-alpha) and neuropeptides (substance P) released from peripheral nerves. This release may be caused by inappropriate cross-talk between sensory and motor fibers at the affected site. CRPS is not a psychological illness, yet pain can cause psychological problems, such as anxiety and depression. Often, impaired social and occupational function occur. Complex regional pain syndrome is a multifactorial disorder with clinical features of neurogenic inflammation (swelling in the central nervous system), nociceptive sensitisation (which causes extreme sensitivity or allodynia), vasomotor dysfunction (blood flow problems which cause swelling and discolouration) and maladaptive neuroplasticity (where the brain changes and adapts with constant pain signals); CRPS is the result of an "aberrant [inappropriate] response to tissue injury". The "underlying neuronal matrix" of CRPS is seen to involve cognitive and motor as well as nociceptive processing; pinprick stimulation of a CRPS-affected limb was painful (mechanical hyperalgesia) and showed a "significantly increased activation" of not just the S1 cortex (contralateral), S2 (bilateral) areas, and insula (bilateral) but also the associative-somatosensory cortices (contralateral), frontal cortices, and parts of the anterior cingulate cortex. In contrast to previous thoughts reflected in the name RSD, it appears that there is reduced sympathetic nervous system outflow, at least in the affected region (although there may be sympatho-afferent coupling). Wind-up (the increased sensation of pain with time) and central nervous system (CNS) sensitization are key neurologic processes that appear to be involved in the induction and maintenance of CRPS. Compelling evidence shows that the N-methyl-D-aspartate (NMDA) receptor has significant involvement in the CNS sensitization process. It is also hypothesized that elevated CNS glutamate levels promote wind-up and CNS sensitization. In addition, there exists experimental evidence demonstrating the presence of NMDA receptors in peripheral nerves. Because immunological functions can modulate CNS physiology, a variety of immune processes have also been hypothesized to contribute to the initial development and maintenance of peripheral and central sensitization. 
Furthermore, trauma-related cytokine release, exaggerated neurogenic inflammation, sympathetic afferent coupling, adrenoreceptor pathology, glial cell activation, cortical reorganisation, and oxidative damage (e.g., by free radicals) are all factors which have been implicated in the pathophysiology of CRPS. In addition, autoantibodies are present in a large number of CRPS patients, and IgG has been recognized as one of the causes of hypersensitivity that stimulates A and C nociceptors, contributing to the inflammation. The mechanisms leading to reduced bone mineral density (up to overt osteoporosis) are still unknown. Potential explanations include an imbalance of the activities of the sympathetic and parasympathetic autonomic nervous systems and mild secondary hyperparathyroidism. However, the trigger of secondary hyperparathyroidism has not yet been identified. In summary, the pathophysiology of complex regional pain syndrome has not yet been defined; CRPS, with its variable manifestations, could be the result of multiple pathophysiological processes. Diagnosis Diagnosis is primarily based on clinical findings. The original diagnostic criteria for CRPS adopted by the International Association for the Study of Pain (IASP) in 1994 have now been superseded in both clinical practice and research by the "Budapest Criteria", which were created in 2003 and have been found to be more sensitive and specific. They have since been adopted by the IASP. The criteria require there to be pain as well as a history and clinical evidence of sensory, vasomotor, sudomotor, and motor or trophic changes. It is also stated that it is a diagnosis of exclusion. To make a clinical diagnosis, all four of the following criteria must be met: Continuing pain, which is disproportionate to any inciting event Must report at least one symptom in three of the four following categories. Sensory: Reports of hyperesthesia Vasomotor: Reports of temperature asymmetry and/or skin color changes and/or skin color asymmetry Sudomotor/Edema: Reports of edema and/or sweating changes and/or sweating asymmetry Motor/Trophic: Reports of decreased range of motion and/or motor dysfunction (weakness, tremor, dystonia) and/or trophic changes (hair, nail, skin) Must display at least one sign at the time of evaluation in two or more of the following categories Sensory: Evidence of hyperalgesia (to pinprick) and/or allodynia (to light touch and/or temperature sensation and/or deep somatic pressure and/or joint movement) Vasomotor: Evidence of temperature asymmetry (>1 °C) and/or skin color changes and/or asymmetry Sudomotor/Edema: Evidence of edema and/or sweating changes and/or sweating asymmetry Motor/Trophic: Evidence of decreased range of motion and/or motor dysfunction (weakness, tremor, dystonia) and/or trophic changes (hair, nail, skin) There is no other diagnosis that better explains the signs and symptoms Diagnostic adjuncts No specific test is available for CRPS, which is diagnosed primarily through observation of the symptoms. However, thermography, sweat testing, X-rays, electrodiagnostics, and sympathetic blocks can be used to build up a picture of the disorder. Diagnosis is complicated by the fact that some patients improve without treatment. A delay in diagnosis and/or treatment for this syndrome can result in severe physical and psychological problems. Early recognition and prompt treatment provide the greatest opportunity for recovery. 
Thermography Presently, established empirical evidence suggests against thermography's efficacy as a reliable tool for diagnosing CRPS. Although CRPS may, in some cases, lead to measurably altered blood flow throughout an affected region, many other factors can also contribute to an altered thermographic reading, including the patient's smoking habits, use of certain skin lotions, recent physical activity, and prior history of trauma to the region. Also, not all patients diagnosed with CRPS demonstrate such "vasomotor instability", particularly those in the later stages of the disease. Thus, thermography alone cannot be used as conclusive evidence for or against a diagnosis of CRPS and must be interpreted in light of the patient's larger medical history and prior diagnostic studies. In order to minimise the confounding influence of external factors, patients undergoing infrared thermographic testing must conform to special restrictions regarding the use of certain vasoconstrictors (namely, nicotine and caffeine), skin lotions, physical therapy, and other diagnostic procedures in the days prior to testing. Patients may also be required to discontinue certain pain medications and sympathetic blockers. After a patient arrives at a thermographic laboratory, he or she is allowed to reach thermal equilibrium in a 16–20 °C, draft-free, steady-state room, wearing a loose-fitting cotton hospital gown, for approximately twenty minutes. A technician then takes infrared images of both the patient's affected and unaffected limbs, as well as reference images of other parts of the patient's body, including his or her face, upper back, and lower back. After capturing a set of baseline images, some labs further require the patient to undergo cold-water autonomic-functional-stress-testing to evaluate the function of their autonomic nervous system's peripheral vasoconstrictor reflex. This is performed by placing a patient's unaffected limb in a cold water bath (approximately 20 °C) for five minutes while collecting images. In a normal, intact, functioning autonomic nervous system, the patient's affected extremity will become colder. Conversely, warming of an affected extremity may indicate a disruption of the body's normal thermoregulatory vasoconstrictor function, which may sometimes indicate underlying CRPS. Radiography Scintigraphy, plain radiographs, and magnetic resonance imaging may all be useful diagnostically. Patchy osteoporosis (post-traumatic osteoporosis), which may be due to disuse of the affected extremity, can be detected through X-ray imagery as early as two weeks after the onset of CRPS. A bone scan of the affected limb may detect these changes even sooner and can almost confirm the disease. Bone densitometry can also be used to detect changes in bone mineral density. It can also be used to monitor the results of treatment, since bone densitometry parameters improve with treatment. Ultrasound Ultrasound-based osteodensitometry (ultrasonometry) may be a potential future radiation-free technique to identify reduced bone mineral density in CRPS. Additionally, this method promises to quantify the bone architecture in the periphery of affected limbs. This method is still under experimental development. Electrodiagnostic testing Electromyography (EMG) and nerve conduction studies (NCS) are important ancillary tests in CRPS because they are among the most reliable methods of detecting nerve injury. 
They can be used as one of the primary methods to distinguish between CRPS types I and II, which differ based on evidence of actual nerve damage. EMG and NCS are also among the best tests for ruling in or out alternative diagnoses. CRPS is a "diagnosis of exclusion", which requires that no other diagnosis can explain the patient's symptoms. This is very important to emphasise because patients otherwise can be given a wrong diagnosis of CRPS when they actually have a treatable condition that better accounts for their symptoms. An example is severe carpal tunnel syndrome (CTS), which can often present in a very similar way to CRPS. Unlike CRPS, CTS can often be corrected with surgery to alleviate the pain and avoid permanent nerve damage and malformation. Both EMG and NCS involve some measure of discomfort. EMG involves the use of a tiny needle inserted into specific muscles to test the associated muscle and nerve function. Both EMG and NCS involve very mild shocks that in normal patients are comparable to a rubber band snapping on the skin. Although these tests can be very useful in CRPS, thorough informed consent must be obtained prior to the procedure, particularly in patients experiencing severe allodynia. In spite of the utility of the test, these patients may wish to decline the procedure to avoid discomfort. Classification Type I, formerly known as reflex sympathetic dystrophy (RSD), Sudeck's atrophy, or algoneurodystrophy, does not exhibit demonstrable nerve lesions. As the vast majority of patients diagnosed with CRPS have this type, it is most commonly referred to in medical literature as type I. Type II, formerly known as causalgia, has evidence of obvious nerve damage. Despite evidence of nerve injury, the cause or the mechanisms of CRPS type II are as unknown as the mechanisms of type I. Patients are frequently classified into two groups based upon temperature: "warm" or "hot" CRPS in one group and "cold" CRPS in the other group. The majority of patients (about 70%) have the "hot" type, which is said to be an acute form of CRPS. Cold CRPS is said to be indicative of a more chronic CRPS and is associated with poorer McGill Pain Questionnaire scores, increased central nervous system involvement, and a higher prevalence of dystonia. Prognosis is not favourable for cold CRPS patients; longitudinal studies suggest these patients have "poorer clinical pain outcomes and show persistent signs of central sensitisation correlating with disease progression". Prevention Vitamin C may be useful in prevention of the syndrome following fracture of the forearm or foot and ankle. Treatment Treatment of CRPS often involves a number of modalities. Therapy Physical and occupational therapy have low-quality evidence to support their use. Physical therapy interventions may include transcutaneous electrical nerve stimulation, progressive weight bearing, graded tactile desensitization, massage, and contrast bath therapy. In a retrospective cohort (unblinded, non-randomised and with intention-to-treat) of fifty patients diagnosed with CRPS, the subjective pain and body perception scores of patients decreased after engagement with a two-week multidisciplinary rehabilitation programme. The authors call for randomised controlled trials to probe the true value of multidisciplinary programs for CRPS patients. Mirror box therapy Mirror box therapy uses a mirror box, or a stand-alone mirror, to create a reflection of the normal limb such that the patient thinks they are looking at the affected limb. 
Movement of this reflected normal limb is then performed so that it looks to the patient as though they are performing movement with the affected limb. Mirror box therapy appears to be beneficial at least in early CRPS. However, the long-term benefits of mirror therapy remain unproven. Graded motor imagery Graded motor imagery appears to be useful for people with CRPS-1. Graded motor imagery is a sequential process that consists of (a) laterality reconstruction, (b) motor imagery, and (c) mirror therapy. Transcutaneous Electrical Nerve Stimulation (TENS) Transcutaneous electrical nerve stimulation (TENS) is a therapy that uses low-voltage electrical signals to provide pain relief through electrodes that are placed on the surface of the skin. Evidence supports its use in treating pain and edema associated with CRPS, although it does not seem to increase functional ability in CRPS patients. Medications Tentative evidence supports the use of bisphosphonates, calcitonin, and ketamine. Nerve blocks with guanethidine appear to be harmful. Evidence for sympathetic nerve blocks generally is insufficient to support their use. Intramuscular botulinum injections may benefit people with symptoms localized to one extremity. Ketamine Ketamine, a dissociative anesthetic, appears promising as a treatment for CRPS. It may be used in low doses if other treatments have not worked. No benefit on either function or depression, however, has been seen. Bisphosphonate treatment As of 2013, high-quality evidence supports the use of bisphosphonates (either orally or via IV infusion) in the treatment of CRPS. Bisphosphonates inhibit osteoclasts, which are cells involved in the resorption of bone. Bone remodeling, via osteoclast activity in resorption of bone, is thought to sometimes be hyperactive in CRPS. It is hypothesized that bone resorption causes acidification of the intercellular milieu, and this activates nerves involved in nociception that densely innervate bone, causing pain; therefore, inhibiting bone resorption and remodeling is thought to help with regard to pain in CRPS. CRPS involving high levels of bone resorption, as seen on bone scan, is more likely to respond to bisphosphonate therapy. Opioids Opioids such as oxycodone, morphine, hydrocodone, and fentanyl have a controversial place in treatment of CRPS. These drugs must be prescribed and monitored under close supervision of a physician, as they lead to physical dependence and can lead to addiction. Thus far, no long-term studies of oral opioid use in treating neuropathic pain, including CRPS, have been performed. The consensus among experts is that opioids should not be a first-line therapy and should only be considered after all other modalities (non-opioid medications, physical therapy, and procedures) have been trialed. Surgery Spinal cord stimulators Spinal cord stimulation appears to be an effective therapy in the management of patients with CRPS type I (level A evidence) and type II (level D evidence). While it improves pain and quality of life, evidence is unclear regarding effects on mental health and general functioning. Dorsal root ganglion stimulation is a type of neurostimulation that is effective in the management of focal neuropathic pain. The FDA approved its use in February 2016. The ACCURATE Study demonstrated superiority of dorsal root ganglion stimulation over spinal (dorsal column) stimulation in the management of CRPS and causalgia. 
Sympathectomy Surgical, chemical, or radiofrequency sympathectomy — interruption of the affected portion of the sympathetic nervous system — can be used as a last resort in patients with impending tissue loss, edema, recurrent infection, or ischemic necrosis. However, little evidence supports these permanent interventions to alter the pain symptoms of the affected patients, and in addition to the normal risks of surgery, such as bleeding and infection, sympathectomy has several specific risks, such as adverse changes in how nerves function. Amputation No randomized study in the medical literature has examined the outcome of amputation in patients who have failed the above-mentioned therapies and who continue to have intractable pain. Nonetheless, on average, about half of the patients will have resolution of their pain, while half will develop phantom limb pain and/or pain at the amputation site. As in any other chronic pain syndrome, the brain likely becomes chronically stimulated with pain, and late amputation may not work as well as might be expected. In a survey of 15 patients with CRPS type 1, 11 responded that their lives were better after amputation. Since this is the ultimate treatment of a painful extremity, it should be left as a last resort. Prognosis The prognosis in CRPS is improved with early and aggressive treatment, which reduces the risk of chronic, debilitating pain. If treatment is delayed, however, the disorder can quickly spread to the entire limb, and changes in bone, nerve, and muscle may become irreversible. The prognosis is worse with the chronic "cold" form of CRPS and with CRPS affecting the upper extremities. Disuse of the limb after an injury and psychological distress related to an injury are also associated with a poorer prognosis in CRPS. Some cases of CRPS may resolve spontaneously (with 74% of patients undergoing complete resolution of symptoms, often spontaneously, in a population-based study in Minnesota), but others may develop chronic pain for many years. Even if CRPS goes into remission, the likelihood of it resurfacing is significant. Taking precautions and seeking immediate treatment upon any injury is important. Epidemiology CRPS can occur at any age, with the average age at diagnosis being 42. It affects both men and women; however, CRPS is three times more frequent in females than males. CRPS affects both adults and children, and the number of reported CRPS cases among adolescents and young adults has been increasing, with a recent observational study finding an incidence of 1.16/100,000 among children in Scotland. History The condition currently known as CRPS was originally described during the American Civil War by Silas Weir Mitchell, who is sometimes also credited with inventing the name "causalgia". However, this term was actually coined by Mitchell's friend Robley Dunglison from the Greek words for heat and for pain. Contrary to what is commonly accepted, these causalgias, although marked by prominent vasomotor and sudomotor symptoms, stemmed from minor neurological lesions. In the 1940s, the term reflex sympathetic dystrophy came into use to describe this condition, based on the theory that sympathetic hyperactivity was involved in the pathophysiology. In 1959, Noordenbos observed in causalgia patients that "the damage of the nerve is always partial."
Misuse of the terms, as well as doubts about the underlying pathophysiology, led to calls for better nomenclature. In 1993, a special consensus workshop held in Orlando, Florida, provided the umbrella term "complex regional pain syndrome", with causalgia and RSD as subtypes. Research The National Institute of Neurological Disorders and Stroke (NINDS), a part of the National Institutes of Health, supports and conducts research on the brain and central nervous system, including research relevant to RSDS, through grants to major medical institutions across the country. NINDS-supported scientists are working to develop effective treatments for neurological conditions and, ultimately, to find ways of preventing them. Investigators are studying new approaches to treat CRPS and intervene more aggressively after traumatic injury to lower the patient's chances of developing the disorder. In addition, NINDS-supported scientists are studying how signals of the sympathetic nervous system cause pain in CRPS patients. Using a technique called microneurography, these investigators are able to record and measure neural activity in single nerve fibers of affected patients. By testing various hypotheses, these researchers hope to discover the unique mechanism that causes the spontaneous pain of CRPS, and that discovery may lead to new ways of blocking pain. Other studies to overcome chronic pain syndromes are discussed in the pamphlet "Chronic Pain: Hope Through Research", published by the NINDS. Research into treating the condition with mirror visual feedback is being undertaken at the Royal National Hospital for Rheumatic Disease in Bath. Patients are taught how to desensitize in the most effective way, then progress to using mirrors to rewrite the faulty signals in the brain that appear responsible for this condition. However, while CRPS can go into remission, the chance of it recurring is significant. The Netherlands has the most comprehensive program of research into CRPS, as part of a multimillion-euro initiative called TREND. German and Australian research teams are also pursuing better understanding and treatments for CRPS. In other animal species CRPS has also been described in animals, such as cattle. Notable cases Nia Frazier, Dance Moms star Paula Abdul, singer, actor, TV personality Jill Kinmont Boothe, US ski slalom champion Gemma Collis-McCann, British Paralympic fencer Shin Dong-wook, South Korean actor and model Howard Hughes, American business tycoon, aviator, inventor, filmmaker, and philanthropist Rachel Morris, British Paralympic cyclist Cynthia Toussaint, author and media personality Danielle Brown, British Paralympic archer Radene Marie Cook, former Los Angeles radio broadcaster, artist, and advocate Marieke Vervoort, Belgian Paralympic athlete Bruno Soriano, Spanish footballer References External links Complex regional pain syndrome at Curlie Reflex sympathetic dystrophy at Curlie
Horners syndrome
Horners syndrome, also known as oculosympathetic paresis, is a combination of symptoms that arises when a group of nerves known as the sympathetic trunk is damaged. The signs and symptoms occur on the same side (ipsilateral) as it is a lesion of the sympathetic trunk. It is characterized by miosis (a constricted pupil), partial ptosis (a weak, droopy eyelid), apparent anhidrosis (decreased sweating), with apparent enophthalmos (inset eyeball).The nerves of the sympathetic trunk arise from the spinal cord in the chest, and from there ascend to the neck and face. The nerves are part of the sympathetic nervous system, a division of the autonomic (or involuntary) nervous system. Once the syndrome has been recognized, medical imaging and response to particular eye drops may be required to identify the location of the problem and the underlying cause. Signs and symptoms Signs that are found in people with Horners syndrome on the affected side of the face include the following: ptosis (drooping of the upper eyelid) anhidrosis (decreased sweating) miosis (constriction of the pupil) Enophthalmos (sinking of the eyeball into the face) inability to completely close or open the eyelid facial flushing headaches loss of ciliospinal reflex bloodshot conjunctiva, depending on the site of lesion. unilateral straight hair (in congenital Horners syndrome); the hair on the affected side may be straight in some cases. heterochromia iridum (in congenital Horners syndrome)Interruption of sympathetic pathways leads to several implications. It inactivates the dilator muscle and thereby produces miosis. It inactivates the superior tarsal muscle which produces ptosis. It reduces sweat secretion in the face. Patients may have apparent enophthalmos (affected eye looks to be slightly sunken in) but this is not always the case. The ptosis from inactivation of the superior tarsal muscle causes the eye to appear sunken in, but when actually measured, enophthalmos is not present. The phenomenon of enophthalmos is seen in Horners syndrome in cats, rats, and dogs.Sometimes there is flushing on the affected side of the face due to dilation of blood vessels under the skin. The pupils light reflex is maintained as this is controlled via the parasympathetic nervous system.In children, Horners syndrome sometimes leads to heterochromia, a difference in eye color between the two eyes. This happens because a lack of sympathetic stimulation in childhood interferes with melanin pigmentation of the melanocytes in the superficial stroma of the iris.In veterinary medicine, signs can include partial closure of the third eyelid, or nictitating membrane. Causes Horners syndrome is usually acquired as a result of disease, but may also be congenital (inborn, associated with heterochromatic iris) or iatrogenic (caused by medical treatment). In rare cases, Horners syndrome may be the result of repeated, minor head trauma, such as being hit with a soccer ball. 
Although most causes are relatively benign, Horners syndrome may reflect serious disease in the neck or chest (such as a Pancoast tumor (tumor in the apex of the lung) or thyrocervical venous dilatation).Causes can be divided according to the presence and location of anhidrosis: Central (anhidrosis of face, arm and trunk) Syringomyelia Multiple sclerosis Encephalitis Brain tumors Lateral medullary syndrome Preganglionic (anhidrosis of face) Cervical rib traction on stellate ganglion Thyroid carcinoma Thyroidectomy Goiter Bronchogenic carcinoma of the superior fissure (Pancoast tumor) on apex of lung Klumpke paralysis Trauma - base of neck, usually blunt trauma, sometimes surgery. As a complication of tube thoracostomy Thoracic aortic aneurysm Postganglionic (no anhidrosis) Cluster headache - combination termed Hortons headache An episode of Horners syndrome may occur during a migraine attack and be relieved afterwards Carotid artery dissection/carotid artery aneurysm/trauma Cavernous sinus thrombosis Middle ear infection Sympathectomy Nerve blocks, such as cervical plexus block, stellate ganglion or interscalene block Pathophysiology Horner syndrome is due to a deficiency of sympathetic activity. The site of lesion to the sympathetic outflow is on the ipsilateral side of the symptoms. The following are examples of conditions that cause the clinical appearance of Horners syndrome: First-order neuron disorder: Central lesions that involve the hypothalamospinal tract (e.g. transection of the cervical spinal cord). Second-order neuron disorder: Preganglionic lesions (e.g. compression of the sympathetic chain by a lung tumor) that releases acetylcholine. Third-order neuron disorder: Postganglionic lesions at the level of the internal carotid artery (e.g. a tumor in the cavernous sinus or a carotid artery dissection) that releases norepinephrine. Partial Horners syndrome: In case of a third-neuron disorder, anhidrosis is limited to the middle part of the forehead or can be absent, resulting in a partial Horners syndrome.If patients have impaired sweating above the waist affecting only one side of the body, and they do not have clinically apparent Horners syndrome, then their lesions are just below the stellate ganglion in the sympathetic chain. Diagnosis Three tests are useful in confirming the presence and severity of Horner syndrome: Cocaine drop test: Cocaine eyedrops block the reuptake of post-ganglionic norepinephrine resulting in the dilation of a normal pupil from retention of norepinephrine in the synapse. However, in Horners syndrome the lack of norepinephrine in the synaptic cleft causes mydriatic failure. A more recently introduced approach that is more dependable and obviates the difficulties in obtaining cocaine is to apply the alpha-agonist apraclonidine to both eyes and observe the increased mydriatic effect (due to hypersensitivity) on the affected side of Horner syndrome (the opposite effect to what the cocaine test would produce in the presence of Horners). Paredrine test: This test helps to localize the cause of the miosis. If the third order neuron (the last of three neurons in the pathway which ultimately discharges norepinephrine into the synaptic cleft) is intact, then the amphetamine causes neurotransmitter vesicle release, thus releasing norepinephrine into the synaptic cleft and resulting in robust mydriasis of the affected pupil. If the lesion itself is of the third order neuron, then the amphetamine will have no effect and the pupil remains constricted. 
There is no pharmacological test to differentiate between a first- and second-order neuron lesion. Dilation lag test It is important to distinguish the ptosis caused by Horner's syndrome from the ptosis caused by a lesion to the oculomotor nerve. In the former, the ptosis occurs with a constricted pupil (due to a loss of sympathetic innervation to the eye), whereas in the latter, the ptosis occurs with a dilated pupil (due to a loss of innervation to the sphincter pupillae). In a clinical setting, these two ptoses are fairly easy to distinguish. In addition to the blown pupil in a CNIII (oculomotor nerve) lesion, this ptosis is much more severe, occasionally occluding the whole eye. The ptosis of Horner's syndrome can be quite mild or barely noticeable (partial ptosis). When anisocoria occurs and the examiner is unsure whether the abnormal pupil is the constricted or the dilated one, the presence of a one-sided ptosis means the abnormally sized pupil can be presumed to be on the side of the ptosis. History The syndrome is named after Johann Friedrich Horner, the Swiss ophthalmologist who first described it in 1869. Several others had previously described cases, but "Horner's syndrome" is the most prevalent term. In France and Italy, Claude Bernard is also eponymized with the condition (Claude Bernard–Horner syndrome, abbreviated CBH). In France, François Pourfour du Petit is also credited with describing this syndrome. Children The most common causes in young children are birth trauma and a type of cancer called neuroblastoma. The cause of about a third of cases in children is unknown. See also Anisocoria Harlequin syndrome References External links
Cutaneous amoebiasis
Cutaneous amoebiasis refers to a form of amoebiasis that presents primarily in the skin. It can be caused by Acanthamoeba or Entamoeba histolytica. When associated with Acanthamoeba, it is also known as "cutaneous acanthamoebiasis". Balamuthia mandrillaris can also cause cutaneous amoebiasis, and infection can prove fatal if the amoeba enters the bloodstream. Diagnosis It is characterized by ulcers. Diagnosis of amebiasis cutis calls for a high degree of clinical suspicion, which needs to be backed by demonstration of trophozoites from lesions. Unless an early diagnosis can be made, such patients can develop significant morbidity. See also Skin lesion References External links
Ehrlichiosis ewingii infection
Ehrlichiosis ewingii infection is an infectious disease caused by an intracellular bacterium, Ehrlichia ewingii. The infection is transmitted to humans by the tick Amblyomma americanum. This tick can also transmit Ehrlichia chaffeensis, the bacterium that causes human monocytic ehrlichiosis (HME). Symptoms and signs Patients can present with fever, headache, myalgias, and malaise. Laboratory tests may reveal thrombocytopenia, leukopenia, and evidence of liver damage. Mechanism Humans contract the disease after a bite by an infected tick of the species Amblyomma americanum. Those with an underlying immunodeficiency (such as HIV) appear to be at greater risk of contracting the disease. Compared to HME, ewingii ehrlichiosis has a decreased incidence of complications. Like Anaplasma phagocytophilum, the causative agent of human granulocytic ehrlichiosis, Ehrlichia ewingii infects neutrophils. Infection with E. ewingii may delay neutrophil apoptosis. Diagnosis In endemic areas, a high index of suspicion is warranted, especially with a known exposure to ticks. The diagnosis can be confirmed by using PCR. A peripheral blood smear can also be examined for intracytoplasmic inclusions called morulae. Treatment The treatment of choice is doxycycline. See also Human monocytic ehrlichiosis Human granulocytic ehrlichiosis Ehrlichiosis (canine) References External links
Dust storm
A dust storm, also called a sandstorm, is a meteorological phenomenon common in arid and semi-arid regions. Dust storms arise when a gust front or other strong wind blows loose sand and dirt from a dry surface. Fine particles are transported by saltation and suspension, a process that moves soil from one place and deposits it in another. The arid regions of North Africa, the Arabian peninsula, Central Asia and China are the main terrestrial sources of airborne dust. It has been argued that poor management of Earth's drylands, such as neglecting the fallow system, is increasing the size and frequency of dust storms from desert margins and changing both the local and global climate, as well as impacting local economies. The term sandstorm is used most often in the context of desert dust storms, especially in the Sahara Desert, or places where sand is a more prevalent soil type than dirt or rock, when, in addition to fine particles obscuring visibility, a considerable amount of larger sand particles are blown closer to the surface. The term dust storm is more likely to be used when finer particles are blown long distances, especially when the dust storm affects urban areas. Causes As the force of wind passing over loosely held particles increases, particles of sand first start to vibrate, then to move across the surface in a process called saltation. As they repeatedly strike the ground, they loosen and break off smaller particles of dust, which then begin to travel in suspension. At wind speeds above that which causes the smallest particles to suspend, there will be a population of dust grains moving by a range of mechanisms: suspension, saltation and creep. A study from 2008 finds that the initial saltation of sand particles induces a static electric field by friction. Saltating sand acquires a negative charge relative to the ground, which in turn loosens more sand particles, which then begin saltating. This process has been found to double the number of particles predicted by previous theories. Particles become loosely held mainly due to prolonged drought or arid conditions and high wind speeds. Gust fronts may be produced by the outflow of rain-cooled air from an intense thunderstorm. Or, the wind gusts may be produced by a dry cold front: that is, a cold front that is moving into a dry air mass and is producing no precipitation—the type of dust storm which was common during the Dust Bowl years in the U.S. Following the passage of a dry cold front, convective instability resulting from cooler air riding over heated ground can maintain the dust storm initiated at the front. In desert areas, dust and sand storms are most commonly caused either by thunderstorm outflows or by strong pressure gradients which cause an increase in wind velocity over a wide area. The vertical extent of the dust or sand that is raised is largely determined by the stability of the atmosphere above the ground as well as by the weight of the particulates. In some cases, dust and sand may be confined to a relatively shallow layer by a low-lying temperature inversion. In other instances, dust (but not sand) may be lifted as high as 6,000 m (20,000 ft). Drought and wind contribute to the emergence of dust storms, as do poor farming and grazing practices by exposing the dust and sand to the wind. One poor farming practice which contributes to dust storms is dryland farming.
Particularly poor dryland farming techniques are intensive tillage or not having established crops or cover crops when storms strike at particularly vulnerable times prior to revegetation. In a semi-arid climate, these practices increase susceptibility to dust storms. However, soil conservation practices may be implemented to control wind erosion. Physical and environmental effects A sandstorm can transport and carry large volumes of sand unexpectedly. Dust storms can carry large amounts of dust, with the leading edge being composed of a wall of thick dust as much as 1.6 km (5,200 ft) high. Dust and sand storms which come off the Sahara Desert are locally known as a simoom or simoon (sîmūm, sîmūn). The haboob (həbūb) is a sandstorm prevalent in the region of Sudan around Khartoum, with occurrences being most common in the summer. The Sahara desert is a key source of dust storms, particularly the Bodélé Depression and an area covering the confluence of Mauritania, Mali, and Algeria. Sahara dust is frequently emitted into the Mediterranean atmosphere and transported by the winds sometimes as far north as central Europe and Great Britain. Saharan dust storms have increased approximately 10-fold during the half-century since the 1950s, causing topsoil loss in Niger, Chad, northern Nigeria, and Burkina Faso. In Mauritania there were just two dust storms a year in the early 1960s; since 2007 there have been about 80 a year, according to English geographer Andrew Goudie, a professor at the University of Oxford. Levels of Saharan dust coming off the east coast of Africa in June 2007 were five times those observed in June 2006, and were the highest observed since at least 1999, which may have cooled Atlantic waters enough to slightly reduce hurricane activity in late 2007. Dust storms have also been shown to increase the spread of disease across the globe. Virus spores in the ground are blown into the atmosphere by the storms with the minute particles and interact with urban air pollution. Short-term effects of exposure to desert dust include an immediate increase in symptoms and worsening of lung function in individuals with asthma, as well as increased mortality and morbidity from long-transported dust from both Saharan and Asian dust storms, suggesting that long-transported dust storm particles adversely affect the circulatory system. Dust pneumonia is the result of large amounts of dust being inhaled. Prolonged and unprotected exposure of the respiratory system in a dust storm can also cause silicosis, which, if left untreated, will lead to asphyxiation; silicosis is an incurable condition that may also lead to lung cancer. There is also the danger of keratoconjunctivitis sicca ("dry eyes") which, in severe cases without immediate and proper treatment, can lead to blindness. Economic impact Dust storms cause soil loss from the drylands, and worse, they preferentially remove organic matter and the nutrient-rich lightest particles, thereby reducing agricultural productivity. Also, the abrasive effect of the storm damages young crop plants. Dust storms also reduce visibility, affecting aircraft and road transportation. Dust can also have beneficial effects where it deposits: Central and South American rainforests get most of their mineral nutrients from the Sahara; iron-poor ocean regions get iron; and dust in Hawaii increases plantain growth.
In northern China as well as the mid-western U.S., ancient dust storm deposits known as loess are highly fertile soils, but they are also a significant source of contemporary dust storms when soil-securing vegetation is disturbed. On Mars Dust storms are not limited to Earth and have been known to form on other planets such as Mars. These dust storms can extend over larger areas than those on Earth, sometimes encircling the planet, with wind speeds as high as 25 m/s (60 mph). However, given Mars's much lower atmospheric pressure (roughly 1% that of Earth's), the intensity of Martian storms could never reach the kind of hurricane-force winds that are experienced on Earth. Martian dust storms are formed when solar heating warms the Martian atmosphere and causes the air to move, lifting dust off the ground. The chance for storms is increased when there are great temperature variations like those seen at the equator during the Martian summer. See also Coccidioidomycosis Dry line Dust storm warning Haboob Iberulites List of dust storms Mineral dust Saharan Air Layer Shamal (wind) Sirocco References External links 12-hour U.S. map of surface dust concentrations Dust in the Wind Photos of the April 14, 1935 and September 2, 1934 dust storms in the Texas Panhandle hosted by the Portal to Texas History University of Arizona Dust Model Page Photos of a sandstorm in Riyadh in 2009 from the BBC Newsbeat website Dust storm in Phoenix, Arizona via YouTube
Gnathostomiasis
Gnathostomiasis (also known as larva migrans profundus) is the human infection caused by the nematode (roundworm) Gnathostoma spinigerum and/or Gnathostoma hispidum, which infects vertebrates. Symptoms and signs A few days after ingestion, epigastric pain, fever, vomiting, and loss of appetite, resulting from migration of larvae through the intestinal wall to the abdominal cavity, will appear in the patient. Migration in the subcutaneous tissues (under the skin) causes intermittent, migratory, painful, pruritic swellings (cutaneous larva migrans). Patches of edema appear after the above symptoms clear and are usually found on the abdomen. These lesions vary in size and can be accompanied by pruritus (itching), rash, and stabbing pain. Swellings may last for 1 to 4 weeks in a given area and then reappear in a different location. Migration to other tissues (visceral larva migrans) can result in cough, hematuria (blood in urine), ocular (eye) involvement, meningitis, encephalitis and eosinophilia. Eosinophilic myeloencephalitis may also result from invasion of the central nervous system by the larvae. Causes Human gnathostomiasis is infection by the migrating third-stage larvae of any of five species of Gnathostoma, which is a type of worm (more specifically a type of nematode). The most common cause in Asia is G. spinigerum, and the most common cause in the Americas is G. binucleatum. G. hispidum and G. doloresi occur in East and Southeast Asia; the former has also been found in Eastern Europe. G. nipponicum occurs only in Japan and China. There is one unconfirmed report of G. malaysiae causing disease in humans. Transmission Gnathostomiasis is transmitted by the ingestion of raw or insufficiently cooked hosts such as freshwater fish, poultry, or frogs. In Thailand and Vietnam, the most common cause appears to be consumption of undercooked Asian swamp eels (Monopterus albus, also called Fluta alba), which transmit G. spinigerum. Monopterus albus is an invasive species in North America, but no Gnathostoma infections in humans have yet been conclusively identified in the US. Hosts Intermediate host The primary intermediate hosts are the minute crustaceans of the genus Cyclops. These crustaceans are then ingested by a second intermediate host, such as frogs. Definitive host The definitive hosts for gnathostomiasis include cats, dogs, tigers, leopards, lions, mink, opossums, raccoons, poultry, frogs, freshwater fish, snakes or birds. Incubation period The incubation period for gnathostomiasis is 3–4 weeks, when the larvae begin to migrate through the subcutaneous tissue of the body. Morphology The adult parasite is reddish-brown in color and has a globular cephalic dome that is separated from the rest of the body by constriction. The posterior portion of the nematode is smooth while the anterior half is covered with fine leaf-like spines. The head is round and contains 4 to 8 transverse rows of hooklets that are protected by a pair of fleshy lips. The males are shorter than the females, 11–25 mm (0.43–0.98 in) compared to 25–54 mm (0.98–2.13 in) respectively. Eggs are oval and have a mucoid plug at one end. Life cycle Life cycle in definitive hosts Adult worms are found in a tumor located in the gastric wall of the definitive hosts and release eggs into the host's digestive tract. The eggs are then released with feces and in about a week hatch in water to develop into first-stage larvae. Larvae are then ingested by minute copepods of the genus Cyclops.
Once inside the copepod, the larvae penetrate the gastric wall of their intermediate host and begin to develop into second-stage and even early third-stage larvae. The copepods are then ingested by a second intermediate host such as fish, frogs, or snakes. Within this second intermediate or definitive host, the larvae repeat a similar pattern of penetrating the gastric wall, but then continue to migrate to muscular tissue and develop into advanced third-stage larvae. These larvae then encyst within the musculature of the new host. If the cyst-containing flesh of these hosts is ingested by a definitive host, such as a dog or cat, the larvae escape the cysts and penetrate the gastric wall. These released larvae travel to the connective tissue and muscle as observed before, and after 4 weeks they return to the gastric wall as adults. Here they form a tumor and continue to mature into adults for the next 6–8 months. Worms mate and females begin to excrete fertilized eggs with feces 8–12 months after ingestion of cysts. They are passed out in the feces and eaten by another fish. Life cycle in humans Infection of humans by gnathostomiasis is accidental because humans are not one of the definitive hosts of the parasite and do not allow the parasite to complete its life cycle. Infection in humans follows ingestion of raw or insufficiently cooked infected intermediate hosts. The ingested third-stage larva migrates from the gastric wall, and its migration results in the symptoms associated with infection by gnathostomiasis. The third-stage larvae don't return to the gastric wall, preventing them from maturing into adult worms and leaving the life cycle incomplete. Instead, the larvae continue to migrate unpredictably, unable to develop into adults, so eggs are seldom found in diagnostic tests. This also means the number of worms present in humans is a reflection of the number of third-stage larvae ingested. Diagnosis Diagnosis of gnathostomiasis is possible (with microscopy) after removal of the worm. The primary form of diagnosis of gnathostomiasis is the identification of larvae in the tissue. Serological testing such as enzyme-linked immunosorbent assay (ELISA) or Western blot is also reliable but may not be easily accessible in endemic areas. CT scanning or MRI can be used to help identify a soft-tissue worm, and when looking at CNS disease it can be used to reveal the presence of the worm. The presence of haemorrhagic tracks on gradient-echo T2-weighted MRI is characteristic and possibly diagnostic. Prevention The best strategy for preventing accidental infection of humans is to educate those living in endemic areas to consume only fully cooked meat. The inability of the parasite to complete its life cycle within humans means that transmission can easily be contained by adequate preparation of meat from intermediate hosts. This is especially useful because of the difficulty and lack of feasibility inherent in eliminating all intermediate hosts of gnathostomiasis. So instead, individuals should avoid eating raw and undercooked meat in endemic areas, but this may be difficult in these areas. The dish ceviche is native to Peru and a favorite of Mexico. It consists of onion, cubed fish, lime or lemon juice and Andean spices including salt and chili. The ingredients are mixed together and they are allowed to marinate several hours before being served at room temperature.
In endemic areas of Southeast Asia, traditional dishes associated with these areas also include raw, uncooked fish, such as koipla in Thailand, goi ca song in Vietnam, and sashimi and sushi in Japan. Acknowledging these cultural traditions, individuals in these cultures can be educated on methods of adapting their food preparation activities in order to remove the larvae without greatly altering these traditional dishes. For instance, meat should be marinated in vinegar for six hours or in soy sauce for 12 hours in order to successfully kill the larvae. In areas with reliable electricity, meat can be frozen at -20 degrees Celsius for 3–5 days to achieve the same result of killing the larvae present. Treatment Surgical removal or treatment with albendazole or ivermectin is recommended. The most prescribed treatment for gnathostomiasis is surgical removal of the larvae, but this is only effective when the worms are located in an accessible location. In addition to surgical excision, albendazole and ivermectin have been noted for their ability to eliminate the parasite. Albendazole is recommended to be administered at 400 mg daily for 21 days as an adjunct to surgical excision, while ivermectin is better tolerated as a single dose. Ivermectin, given at 200 µg/kg p.o. as a single dose, can also serve as a replacement for those who cannot tolerate albendazole. However, ivermectin has been shown to be less effective than albendazole. Epidemiology Endemic areas include Asia, Mexico, India and parts of South Africa. Originally believed to be confined to Asia, gnathostomiasis was discovered in Mexico in the 1970s and found in Australia in 2011. Even though it is endemic in areas of Southeast Asia and Latin America, it is an uncommon disease. However, researchers have recently noticed an increase in incidence. This disease is most common in both Thailand and Japan, but in Thailand it is responsible for most of the observed parasitic CNS infections. It has long been recognised in China, but reports have only recently appeared in the English literature. History The first case of Gnathostoma infection was identified by Sir Richard Owen when inspecting the stomach of a young tiger that had died at London Zoo from a ruptured aorta. However, it was not until 1889 that the first human case was described by Levinson, when he found the Gnathostoma larva in an infested Thai woman. The lifecycle of G. spinigerum was described by Svasti Daengsvang and Chalerm Prommas from Thailand in 1933 and 1936. This delay in identification of the parasite in humans is due to the fact that humans are not a definitive host for this parasite, making infection rare. Gnathostomiasis infection is rare because the parasite must be ingested when it has reached its third larval stage, providing only a short time frame in which the parasite is capable of infecting humans. It is uncommon for the larvae to penetrate the skin of individuals exposed to contaminated food or water without ingestion. See also List of parasites (human) List of migrating cutaneous conditions References External links Gnathostomiasis at CDC Gnathostomiasis at eMedicine
Restless legs syndrome
Restless legs syndrome (RLS), also known as Willis-Ekbom disease (WED), is generally a long-term disorder that causes a strong urge to move ones legs. There is often an unpleasant feeling in the legs that improves somewhat by moving them. This is often described as aching, tingling, or crawling in nature. Occasionally, arms may also be affected. The feelings generally happen when at rest and therefore can make it hard to sleep. Due to the disturbance in sleep, people with RLS may have daytime sleepiness, low energy, irritability and a depressed mood. Additionally, many have limb twitching during sleep. RLS is not the same as habitual foot tapping or leg rocking.Risk factors for RLS include low iron levels, kidney failure, Parkinsons disease, diabetes mellitus, rheumatoid arthritis, pregnancy and celiac disease. A number of medications may also trigger the disorder including antidepressants, antipsychotics, antihistamines, and calcium channel blockers. There are two main types. One is early onset RLS which starts before age 45, runs in families and worsens over time. The other is late onset RLS which begins after age 45, starts suddenly, and does not worsen. Diagnosis is generally based on a persons symptoms after ruling out other potential causes.Restless legs syndrome may resolve if the underlying problem is addressed. Otherwise treatment includes lifestyle changes and medication. Lifestyle changes that may help include stopping alcohol and tobacco use, and sleep hygiene. Medications used include a dopamine agonist such as pramipexole. RLS affects an estimated 2.5–15% of the American population. Females are more commonly affected than males, and it becomes increasingly common with age. Signs and symptoms RLS sensations range from pain or an aching in the muscles, to "an itch you cant scratch", a "buzzing sensation", an unpleasant "tickle that wont stop", a "crawling" feeling, or limbs jerking while awake. The sensations typically begin or intensify during quiet wakefulness, such as when relaxing, reading, studying, or trying to sleep.It is a "spectrum" disease with some people experiencing only a minor annoyance and others having major disruption of sleep and impairments in quality of life.The sensations—and the need to move—may return immediately after ceasing movement or at a later time. RLS may start at any age, including childhood, and is a progressive disease for some, while the symptoms may remit in others. In a survey among members of the Restless Legs Syndrome Foundation, it was found that up to 45% of patients had their first symptoms before the age of 20 years. "An urge to move, usually due to uncomfortable sensations that occur primarily in the legs, but occasionally in the arms or elsewhere."The sensations are unusual and unlike other common sensations. Those with RLS have a hard time describing them, using words or phrases such as uncomfortable, painful, antsy, electrical, creeping, itching, pins and needles, pulling, crawling, buzzing, and numbness. It is sometimes described similar to a limb falling asleep or an exaggerated sense of positional awareness of the affected area. The sensation and the urge can occur in any body part; the most cited location is legs, followed by arms. Some people have little or no sensation, yet still, have a strong urge to move."Motor restlessness, expressed as activity, which relieves the urge to move."Movement usually brings immediate relief, although temporary and partial. 
Walking is most common; however, stretching, yoga, biking, or other physical activity may relieve the symptoms. Continuous, fast up-and-down movements of the leg, and/or rapidly moving the legs toward then away from each other, may keep sensations at bay without having to walk. Specific movements may be unique to each person."Worsening of symptoms by relaxation."Sitting or lying down (reading, plane ride, watching TV) can trigger the sensations and urge to move. Severity depends on the severity of the persons RLS, the degree of restfulness, duration of the inactivity, etc."Variability over the course of the day-night cycle, with symptoms worse in the evening and early in the night."Some experience RLS only at bedtime, while others experience it throughout the day and night. Most people experience the worst symptoms in the evening and the least in the morning."restless legs feel similar to the urge to yawn, situated in the legs or arms."These symptoms of RLS can make sleeping difficult for many patients and a 2005 National Sleep Foundation poll shows the presence of significant daytime difficulties resulting from this condition. These problems range from being late for work to missing work or events because of drowsiness. Patients with RLS who responded reported driving while drowsy more than patients without RLS. These daytime difficulties can translate into safety, social and economic issues for the patient and for society.RLS may contribute to higher rates of depression and anxiety disorders in RLS patients. Primary and secondary RLS is categorized as either primary or secondary. Primary RLS is considered idiopathic or with no known cause. Primary RLS usually begins slowly, before approximately 40–45 years of age and may disappear for months or even years. It is often progressive and gets worse with age. RLS in children is often misdiagnosed as growing pains. Secondary RLS often has a sudden onset after age 40, and may be daily from the beginning. It is most associated with specific medical conditions or the use of certain drugs (see below). Causes While the cause is generally unknown, it is believed to be caused by changes in the nerve transmitter dopamine resulting in an abnormal use of iron by the brain. RLS is often due to iron deficiency (low total body iron status). Other associated conditions may include end-stage kidney disease and hemodialysis, folate deficiency, magnesium deficiency, sleep apnea, diabetes, peripheral neuropathy, Parkinsons disease, and certain autoimmune diseases, such as multiple sclerosis. RLS can worsen in pregnancy, possibly due to elevated estrogen levels. Use of alcohol, nicotine products, and caffeine may be associated with RLS. A 2014 study from the American Academy of Neurology also found that reduced leg oxygen levels were strongly associated with restless legs Syndrome symptom severity in untreated patients. ADHD An association has been observed between attention deficit hyperactivity disorder (ADHD) and RLS or periodic limb movement disorder. Both conditions appear to have links to dysfunctions related to the neurotransmitter dopamine, and common medications for both conditions among other systems, affect dopamine levels in the brain. A 2005 study suggested that up to 44% of people with ADHD had comorbid (i.e. coexisting) RLS, and up to 26% of people with RLS had confirmed ADHD or symptoms of the condition. 
Medications Certain medications may cause or worsen RLS, or cause it secondarily, including: certain antiemetics (antidopaminergic ones) certain antihistamines (especially the sedating, first generation H1 antihistamines often in over-the-counter cold medications) many antidepressants (both older TCAs and newer SSRIs) antipsychotics and certain anticonvulsants a rebound effect of sedative-hypnotic drugs such as a benzodiazepine withdrawal syndrome from discontinuing benzodiazepine tranquilizers or sleeping pills alcohol withdrawal can also cause restless legs syndrome and other movement disorders such as akathisia and parkinsonism usually associated with antipsychotics opioid withdrawal is associated with causing and worsening RLSBoth primary and secondary RLS can be worsened by surgery of any kind; however, back surgery or injury can be associated with causing RLS.The cause vs. effect of certain conditions and behaviors observed in some patients (ex. excess weight, lack of exercise, depression or other mental illnesses) is not well established. Loss of sleep due to RLS could cause the conditions, or medication used to treat a condition could cause RLS. Genetics More than 60% of cases of RLS are familial and are inherited in an autosomal dominant fashion with variable penetrance.Research and brain autopsies have implicated both dopaminergic system and iron insufficiency in the substantia nigra. Iron is well understood to be an essential cofactor for the formation of L-dopa, the precursor of dopamine. Six genetic loci found by linkage are known and listed below. Other than the first one, all of the linkage loci were discovered using an autosomal dominant model of inheritance. The first genetic locus was discovered in one large French Canadian family and maps to chromosome 12q. This locus was discovered using an autosomal recessive inheritance model. Evidence for this locus was also found using a transmission disequilibrium test (TDT) in 12 Bavarian families. The second RLS locus maps to chromosome 14q and was discovered in one Italian family. Evidence for this locus was found in one French Canadian family. Also, an association study in a large sample 159 trios of European descent showed some evidence for this locus. This locus maps to chromosome 9p and was discovered in two unrelated American families. Evidence for this locus was also found by the TDT in a large Bavarian family, in which significant linkage to this locus was found. This locus maps to chromosome 20p and was discovered in a large French Canadian family with RLS. This locus maps to chromosome 2p and was found in three related families from population isolated in South Tyrol. The sixth locus is located on chromosome 16p12.1 and was discovered by Levchenko et al. in 2008.Three genes, MEIS1, BTBD9 and MAP2K5, were found to be associated to RLS. Their role in RLS pathogenesis is still unclear. More recently, a fourth gene, PTPRD was found to be associated with RLS.There is also some evidence that periodic limb movements in sleep (PLMS) are associated with BTBD9 on chromosome 6p21.2, MEIS1, MAP2K5/SKOR1, and PTPRD. The presence of a positive family history suggests that there may be a genetic involvement in the etiology of RLS. Mechanism Although it is only partly understood, pathophysiology of restless legs syndrome may involve dopamine and iron system anomalies. 
There is also a commonly acknowledged circadian rhythm explanatory mechanism associated with it, shown clinically by biomarkers of circadian rhythm, such as body temperature. The interactions between impaired neuronal iron uptake and the functions of the neuromelanin-containing and dopamine-producing cells have roles in RLS development, indicating that iron deficiency might affect brain dopaminergic transmission in different ways. Medial thalamic nuclei may also have a role in RLS as part of the limbic system modulated by the dopaminergic system, which may affect pain perception. Improvement of RLS symptoms occurs in people receiving low-dose dopamine agonists. Diagnosis There are no specific tests for RLS, but non-specific laboratory tests are used to rule out other causes such as vitamin deficiencies. Five symptoms are used to confirm the diagnosis: A strong urge to move the limbs, usually associated with unpleasant or uncomfortable sensations. It starts or worsens during inactivity or rest. It improves or disappears (at least temporarily) with activity. It worsens in the evening or night. These symptoms are not caused by any medical or behavioral condition. The following symptoms are not essential, unlike the ones above, but occur commonly in RLS patients: genetic component or family history with RLS good response to dopaminergic therapy periodic leg movements during day or sleep most strongly affected are people who are middle-aged or older other sleep disturbances are experienced decreased iron stores can be a risk factor and should be assessed. According to the International Classification of Sleep Disorders (ICSD-3), the main symptoms have to be associated with a sleep disturbance or impairment in order to support RLS diagnosis. As stated by this classification, RLS symptoms should begin or worsen when being inactive, be relieved when moving, should happen exclusively or mostly in the evening and at night, not be triggered by other medical or behavioral conditions, and should impair one's quality of life. Generally, both legs are affected, but in some cases there is an asymmetry. Differential diagnosis The most common conditions that should be differentiated from RLS include leg cramps, positional discomfort, local leg injury, arthritis, leg edema, venous stasis, peripheral neuropathy, radiculopathy, habitual foot tapping/leg rocking, anxiety, myalgia, and drug-induced akathisia. Peripheral artery disease and arthritis can also cause leg pain, but this usually gets worse with movement. Less common differential diagnostic conditions include myelopathy, myopathy, vascular or neurogenic claudication, hypotensive akathisia, orthostatic tremor, and painful legs and moving toes. Treatment If RLS is not linked to an underlying cause, its frequency may be reduced by lifestyle modifications such as improving sleep hygiene, regular exercise, and stopping smoking. Medications used may include dopamine agonists or gabapentin in those with daily restless legs syndrome, and opioids for treatment of resistant cases. Treatment of RLS should not be considered until possible medical causes are ruled out. Secondary RLS may be cured if precipitating medical conditions (such as anemia) are managed effectively. Physical measures Stretching the leg muscles can bring temporary relief. Walking and moving the legs, as the name "restless legs" implies, brings temporary relief. In fact, those with RLS often have an almost uncontrollable need to walk and therefore relieve the symptoms while they are moving.
Unfortunately, the symptoms usually return immediately after the moving and walking cease. A vibratory counter-stimulation device has been found to help some people with primary RLS to improve their sleep. Iron There is some evidence that intravenous iron supplementation moderately improves restlessness for people with RLS. Medications For those whose RLS disrupts or prevents sleep or regular daily activities, medication may be useful. Evidence supports the use of dopamine agonists, including pramipexole, ropinirole, rotigotine, and cabergoline. They reduce symptoms, improve sleep quality and quality of life. Levodopa is also effective. However, pergolide and cabergoline are less recommended due to their association with increased risk of valvular heart disease. Ropinirole has a faster onset with shorter duration. Rotigotine is commonly used as a transdermal patch which continuously provides stable plasma drug concentrations, resulting in its particular therapeutic effect on patients with symptoms throughout the day. One 2008 review found pramipexole to be better than ropinirole. There are, however, issues with the use of dopamine agonists, including augmentation. This is a medical condition where the drug itself causes symptoms to increase in severity and/or occur earlier in the day. Dopamine agonists may also cause rebound, when symptoms increase as the drug wears off. In many cases, the longer dopamine agonists have been used, the higher the risk of augmentation and rebound, as well as the severity of the symptoms. Also, a recent study indicated that dopamine agonists used in restless legs syndrome can lead to an increase in compulsive gambling. Gabapentin or pregabalin is a non-dopaminergic treatment for moderate to severe primary RLS. Opioids are only indicated in severe cases that do not respond to other measures due to their very high abuse liability and high rate of side effects, which may include constipation, fatigue and headache. One possible treatment for RLS is dopamine agonists; unfortunately, patients can develop dopamine dysregulation syndrome, meaning that they can experience an addictive pattern of dopamine replacement therapy. Additionally, they can exhibit some behavioral disturbances such as impulse control disorders like pathologic gambling, compulsive purchasing and compulsive eating. There are some indications that stopping the dopamine agonist treatment has an impact on the resolution or at least improvement of the impulse control disorder, even though some people can be particularly exposed to dopamine agonist withdrawal syndrome. Benzodiazepines, such as diazepam or clonazepam, are not generally recommended, and their effectiveness is unknown. They are, however, sometimes still used as second-line, add-on agents. Quinine is not recommended due to its risk of serious side effects involving the blood.
RLS symptoms can worsen over time when dopamine-related drugs are used for therapy, an effect called "augmentation" which may represent symptoms occurring throughout the day and affect movements of all limbs. There is no cure for RLS. Epidemiology RLS affects an estimated 2.5–15% of the American population. A minority (around 2.7% of the population) experience daily or severe symptoms. RLS is twice as common in women as in men, and Caucasians are more prone to RLS than people of African descent. RLS occurs in 3% of individuals from the Mediterranean or Middle Eastern regions, and in 1–5% of those from East Asia, indicating that different genetic or environmental factors, including diet, may play a role in the prevalence of this syndrome. RLS diagnosed at an older age runs a more severe course. RLS is even more common in individuals with iron deficiency, pregnancy, or end-stage kidney disease. The National Sleep Foundations 1998 Sleep in America poll showed that up to 25 percent of pregnant women developed RLS during the third trimester. Poor general health is also linked.There are several risk factors for RLS, including old age, family history, and uremia. The prevalence of RLS tends to increase with age, as well as its severity and longer duration of symptoms. People with uremia receiving renal dialysis have a prevalence from 20% to 57%, while those having kidney transplant improve compared to those treated with dialysis.RLS can occur at all ages, although it typically begins in the third or fourth decade. Genome‐wide association studies have now identified 19 risk loci associated with RLS. Neurological conditions linked to RLS include Parkinsons disease, spinal cerebellar atrophy, spinal stenosis, lumbosacral radiculopathy and Charcot–Marie–Tooth disease type 2. History The first known medical description of RLS was by Sir Thomas Willis in 1672. Willis emphasized the sleep disruption and limb movements experienced by people with RLS. Initially published in Latin (De Anima Brutorum, 1672) but later translated to English (The London Practice of Physick, 1685), Willis wrote: Wherefore to some, when being abed they betake themselves to sleep, presently in the arms and legs, leapings and contractions on the tendons, and so great a restlessness and tossings of other members ensue, that the diseased are no more able to sleep, than if they were in a place of the greatest torture. The term "fidgets in the legs" has also been used as early as the early nineteenth century.Subsequently, other descriptions of RLS were published, including those by François Boissier de Sauvages (1763), Magnus Huss (1849), Theodur Wittmaack (1861), George Miller Beard (1880), Georges Gilles de la Tourette (1898), Hermann Oppenheim (1923) and Frederick Gerard Allison (1943). However, it was not until almost three centuries after Willis, in 1945, that Karl-Axel Ekbom (1907–1977) provided a detailed and comprehensive report of this condition in his doctoral thesis, restless legs: clinical study of hitherto overlooked disease. Ekbom coined the term "restless legs" and continued work on this disorder throughout his career. He described the essential diagnostic symptoms, differential diagnosis from other conditions, prevalence, relation to anemia, and common occurrence during pregnancy.Ekboms work was largely ignored until it was rediscovered by Arthur S. Walters and Wayne A. Hening in the 1980s. Subsequent landmark publications include 1995 and 2003 papers, which revised and updated the diagnostic criteria. 
The Journal of Parkinsonism and RLS is the first peer-reviewed, online, open-access journal dedicated to publishing research about Parkinson's disease and restless legs syndrome; it was founded by the Canadian neurologist Dr. Abdul Qayyum Rana. Nomenclature In 2013, the Restless Legs Syndrome Foundation renamed itself the Willis–Ekbom Disease Foundation; however, it reverted to its original name in 2015 "to better support its mission". A point of confusion is that RLS and delusional parasitosis are entirely different conditions that have both been called "Ekbom syndrome", as both syndromes were described by the same person, Karl-Axel Ekbom. Today, calling WED/RLS "Ekbom syndrome" is outdated usage, as the unambiguous names (WED or RLS) are preferred for clarity. Controversy Some doctors express the view that the incidence of restless legs syndrome is exaggerated by manufacturers of drugs used to treat it. Others believe it is an underrecognized and undertreated disorder. Further, GlaxoSmithKline (GSK) ran advertisements that, while not promoting off-licence use of their drug (ropinirole) for treatment of RLS, did link to the Ekbom Support Group website. That website contained statements advocating the use of ropinirole to treat RLS. The Association of the British Pharmaceutical Industry (ABPI) ruled against GSK in this case. Research Different measurements have been used to evaluate treatments in RLS. Most of them are based on subjective rating scores, such as the IRLS rating scale (IRLS), Clinical Global Impression (CGI), Patient Global Impression (PGI), and Quality of life (QoL). These questionnaires provide information about the severity and progress of the disease, as well as the person's quality of life and sleep. Polysomnography (PSG) and actigraphy (both related to sleep parameters) are more objective resources that provide evidence of sleep disturbances associated with RLS symptoms. See also Periodic limb movement disorder References External links Restless legs syndrome at Curlie
Elsewhere
Elsewhere may refer to: Film Elsewhere (2001 film), a 2001 Austrian documentary by Nikolaus Geyrhalter Elsewhere (2009 film), an American thriller starring Anna Kendrick Elsewhere (2019 film), an American comedy-drama directed by Hernán Jiménez Literature Elsewhere, a 1991 novel by Will Shetterly Elsewhere (anthology), a 2003 Australian speculative-fiction anthology Elsewhere (Blatty novel), a 2009 novel by William Peter Blatty "Elsewhere" (short story), a 1941 science-fiction short story by Robert Heinlein Elsewhere (Zevin novel), a 2005 novel by Gabrielle Zevin Elsewhere: A Memoir, a 2012 memoir by novelist Richard Russo Music Elsewhere, an EP by Gretta Ray, 2016 Elsewhere (Scott Matthews album) Elsewhere (Joe Morris album) Elsewhere (Pinegrove album), 2017 "Elsewhere", a song by Sarah McLachlan from Fumbling Towards Ecstasy Places Elsewhere, a museum and artist residency in Greensboro, North Carolina The name of a town in Calloway County, Kentucky Elsewhere (music venue), music venue in Bushwick, Brooklyn Other Elsewhere (website), a music, arts and travel website run by New Zealand journalist Graham Reid Elsewhere, the stage name of dancer David Bernal In special relativity, the region of spacetime outside a light cone Elsweyr, a fictional nation in the Elder Scrolls videogame series See also St. Elsewhere, an American television drama series St. Elsewhere (album), a 2006 album by Gnarls Barkley Dispatches from Elsewhere, a 2020 American television drama series Somewhere Else (disambiguation)
Alcoholic beverage
An alcoholic beverage (also called an alcoholic drink, adult beverage, or a drink) is a drink that contains ethanol, a type of alcohol that acts as a drug and is produced by fermentation of grains, fruits, or other sources of sugar. The consumption of alcoholic drinks, often referred to as "drinking", plays an important social role in many cultures. Most countries have laws regulating the production, sale, and consumption of alcoholic beverages. Regulations may require the labeling of the percentage alcohol content (as ABV or proof) and the use of a warning label. Some countries ban such activities entirely, but alcoholic drinks are legal in most parts of the world. The global alcoholic drink industry exceeded $1 trillion in 2018. Alcohol is a depressant, which in low doses causes euphoria, reduces anxiety, and increases sociability. In higher doses, it causes drunkenness, stupor, unconsciousness, or death. Long-term use can lead to an alcohol use disorder, an increased risk of developing several types of cancer, cardiovascular disease, and physical dependence. Alcohol is one of the most widely used recreational drugs in the world, and about 33% of all humans currently drink alcohol. In 2015, among Americans, 86% of adults had consumed alcohol at some point, with 70% drinking it in the last year and 56% in the last month. Alcoholic drinks are typically divided into three classes—beers, wines, and spirits—and typically their alcohol content is between 3% and 50%. Discovery of late Stone Age jugs suggests that intentionally fermented drinks existed at least as early as the Neolithic period (c. 10,000 BC). Several other animals are affected by alcohol similarly to humans and, once they consume it, will consume it again if given the opportunity, though humans are the only species known to produce alcoholic drinks intentionally. Fermented drinks Beer Beer is a beverage fermented from grain mash. It is typically made from barley or a blend of several grains and flavored with hops. Most beer is naturally carbonated as part of the fermentation process. If the fermented mash is distilled, then the drink becomes a spirit. Beer is the most consumed alcoholic beverage in the world. Cider Cider or cyder (SY-dər) is a fermented alcoholic drink made from any fruit juice: apple juice (traditional and most common), peach, pear ("perry" cider), or other fruit juices. Cider alcohol content varies from 1.2% ABV to 8.5% or more in traditional English ciders. In some regions, cider may be called "apple wine". Fermented tea Fermented tea (also known as post-fermented tea or dark tea) is a class of tea that has undergone microbial fermentation, from several months to many years. The tea leaves and the liquor made from them become darker with oxidation. Thus, the various kinds of fermented teas produced across China are also referred to as dark tea, not to be confused with black tea. The most famous fermented teas are kombucha, which is often homebrewed; pu-erh, produced in Yunnan Province; and the Anhua dark tea produced in Anhua County of Hunan Province. The majority of kombucha on the market is under 0.5% ABV. Fermented water Fermented water is an ethanol-based water solution with approximately 15–17% ABV without sweet reserve. Fermented water is exclusively fermented with white sugar, yeast, and water. Fermented water is clarified after the fermentation to produce a colorless or off-white liquid with no discernible taste other than that of ethanol.
Fermented sugar water Fermented sugar water is fermented water with added refined sugar. Mead Mead is an alcoholic drink made by fermenting honey with water, sometimes with various fruits, spices, grains, or hops. The alcoholic content of mead may range from as low as 3% ABV to more than 20%. The defining characteristic of mead is that the majority of the drink's fermentable sugar is derived from honey. Mead can also be referred to as "honeywine." Pulque Pulque is the Mesoamerican fermented drink made from the "honey water" of maguey, Agave americana. The drink distilled from pulque is tequila or mezcal. Rice wine Sake, huangjiu, mijiu, and cheongju are popular examples of East Asian rice wine. Wine Wine is a fermented beverage produced from grapes and sometimes other fruits. Wine involves a longer fermentation process than beer and a long aging process (months or years), resulting in an alcohol content of 9%–16% ABV. Others Fruit wines are made from fruits other than grapes, such as plums, cherries, or apples. Sparkling wines like French Champagne, Catalan Cava or Italian Prosecco can be made by means of a secondary fermentation. Distilled beverages Distilled beverages (also called liquors or spirit drinks) are alcoholic drinks produced by distilling (i.e., concentrating by distillation) ethanol produced by means of fermenting grain, fruit, or vegetables. Unsweetened, distilled, alcoholic drinks that have an alcohol content of at least 20% ABV are called spirits. For the most common distilled drinks, such as whiskey and vodka, the alcohol content is around 40%. The term hard liquor is used in North America to distinguish distilled drinks from undistilled ones (implicitly weaker). Vodka, gin, baijiu, shōchū, soju, tequila, whiskey, brandy and rum are examples of distilled drinks. Distilling concentrates the alcohol and eliminates some of the congeners. Freeze distillation concentrates ethanol along with methanol and fusel alcohols (fermentation by-products partially removed by distillation) in applejack. Fortified wine is wine, such as port or sherry, to which a distilled beverage (usually brandy) has been added. Fortified wine is distinguished from spirits made from wine in that spirits are produced by means of distillation, while fortified wine is wine that has had a spirit added to it. Many different styles of fortified wine have been developed, including port, sherry, madeira, marsala, commandaria, and the aromatized wine vermouth. Rectified spirit Rectified spirit, also called "neutral grain spirit", is alcohol which has been purified by means of "rectification" (i.e. repeated distillation). The term neutral refers to the spirit's lack of flavor that would have been present if the mash ingredients had been distilled to a lower level of alcoholic purity. Rectified spirit also lacks any flavoring added to it after distillation (as is done, for example, with gin). Other kinds of spirits, such as whiskey, are distilled to a lower alcohol percentage to preserve the flavor of the mash. Rectified spirit is a clear, colorless, flammable liquid that may contain as much as 95% ABV. It is often used for medicinal purposes. It may be a grain spirit or it may be made from other plants. It is used in mixed drinks, liqueurs, and tinctures, and also as a household solvent. Congeners In the alcoholic drinks industry, congeners are substances produced during fermentation.
These substances include small amounts of chemicals such as other alcohols that are sometimes desired, like propanol and 3-methyl-1-butanol, as well as compounds that are never desired, such as acetone, acetaldehyde and glycols. Congeners are responsible for most of the taste and aroma of distilled alcoholic drinks, and contribute to the taste of non-distilled drinks. It has been suggested that these substances contribute to the symptoms of a hangover. Tannins are congeners found among the phenolic compounds in wine. Wine tannins add bitterness, have a drying sensation, taste herbaceous and are often described as astringent. Wine tannins add balance, complexity and structure and make a wine last longer, so they play an important role in the aging of wine. Amount of use As of 2016, 39% of males and 25% of females drank alcohol (2.4 billion people in total). Females on average drink 0.7 drinks per day while males drink 1.7 drinks per day. The rates of drinking vary significantly in different areas of the world. Reasons for use Apéritifs and digestifs An apéritif is any alcoholic beverage usually served before a meal to stimulate the appetite, while a digestif is any alcoholic beverage served after a meal for the stated purpose of improving digestion. Fortified wine, liqueurs, and dry champagne are common apéritifs. Because apéritifs are served before dining, they are usually dry rather than sweet. One example is Cinzano, a brand of vermouth. Digestifs include brandy, fortified wines and herb-infused spirits (Drambuie). Caloric content The USDA uses a figure of 6.93 kilocalories (29.0 kJ) per gram of alcohol (5.47 kcal or 22.9 kJ per ml) for calculating food energy. For distilled spirits, a standard serving in the United States is 44 ml (1.5 US fl oz), which at 40% ethanol (80 proof) would be 14 grams and 98 calories. Other than distilled spirits, many alcoholic drinks contain carbohydrates, which add to the calories per serving. Alcoholic drinks are considered empty calorie foods because, other than food energy, they contribute no essential nutrients. According to the U.S. Department of Agriculture, based on NHANES 2013–2014 surveys, women in the US ages 20 and up consume on average 6.8 grams/day and men consume on average 15.5 grams/day. Alcohol is known to potentiate the insulin response of the human body to glucose, which, in essence, "instructs" the body to convert consumed carbohydrates into fat and to suppress carbohydrate and fat oxidation. Ethanol is directly processed in the liver to acetyl CoA, the same intermediate product as in glucose metabolism. Because ethanol is mostly metabolized and consumed by the liver, chronic excessive use can lead to fatty liver. This leads to a chronic inflammation of the liver and eventually alcoholic liver disease. Flavoring Pure ethanol tastes bitter to humans; some people also describe it as sweet. However, ethanol is also a moderately good solvent for many fatty substances and essential oils. This facilitates the use of flavoring and coloring compounds in alcoholic drinks as a taste mask, especially in distilled drinks. Some flavors may be naturally present in the beverage's raw material. Beer and wine may also be flavored before fermentation, and spirits may be flavored before, during, or after distillation. Sometimes flavor is obtained by allowing the beverage to stand for months or years in oak barrels, usually made of American or French oak.
A few brands of spirits may also have fruit or herbs inserted into the bottle at the time of bottling. Wine is important in cuisine not just for its value as an accompanying beverage, but as a flavor agent, primarily in stocks and braising, since its acidity lends balance to rich savory or sweet dishes. Wine sauce is an example of a culinary sauce that uses wine as a primary ingredient. Natural wines may exhibit a broad range of alcohol content, from below 9% to above 16% ABV, with most wines being in the 12.5–14.5% range. Fortified wines (usually with brandy) may contain 20% alcohol or more. Alcohol measurement Alcohol concentration The concentration of alcohol in a beverage is usually stated as the percentage of alcohol by volume (ABV, the number of milliliters (ml) of pure ethanol in 100 ml of beverage) or as proof. In the United States, proof is twice the percentage of alcohol by volume at 60 degrees Fahrenheit (e.g. 80 proof = 40% ABV). Degrees proof were formerly used in the United Kingdom, where 100 degrees proof was equivalent to 57.1% ABV. Historically, this was the most dilute spirit that would sustain the combustion of gunpowder. Ordinary distillation cannot produce alcohol of more than 95.6% by weight, which is about 97.2% ABV (194.4 proof), because at that point alcohol is an azeotrope with water. A spirit which contains a very high level of alcohol and does not contain any added flavoring is commonly called a neutral spirit. Generally, any distilled alcoholic beverage of 170 US proof or higher is considered to be a neutral spirit. Most yeasts cannot reproduce when the concentration of alcohol is higher than about 18%, so that is the practical limit for the strength of fermented drinks such as wine, beer, and sake. However, some strains of yeast have been developed that can reproduce in solutions of up to 25% ABV. Serving measures Shot sizes Shot sizes vary significantly from country to country. In the United Kingdom, serving size in licensed premises is regulated under the Weights and Measures Act (1985). A single serving of spirits (gin, whisky, rum, and vodka) is sold in 25 ml or 35 ml quantities or multiples thereof. Beer is typically served in pints (568 ml), but is also served in half-pints or third-pints. In Israel, a single serving size of spirits is about twice as much, 50 or 60 ml. The shape of a glass can have a significant effect on how much one pours. A Cornell University study of students and bartenders pouring showed both groups pour more into short, wide glasses than into tall, slender glasses. Aiming to pour one shot of alcohol (1.5 ounces or 44.3 ml), students on average poured 45.5 ml and 59.6 ml (30% more), respectively, into the tall and short glasses. The bartenders scored similarly, on average pouring 20.5% more into the short glasses. More experienced bartenders were more accurate, pouring 10.3% less alcohol than less experienced bartenders. Practice reduced the tendency of both groups to overpour for tall, slender glasses but not for short, wide glasses. These misperceptions are attributed to two perceptual biases: (1) estimating that tall, slender glasses have more volume than shorter, wider glasses; and (2) focusing on the height of the liquid while disregarding the width. Standard drinks There is no single standard, but a standard drink of 10 g of alcohol, which is used for example in the questionnaire of the WHO's AUDIT (Alcohol Use Disorders Identification Test), has been adopted by more countries than any other amount.
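The arithmetic used in this section and under Caloric content above can be illustrated with a short, hedged sketch. It is not part of the article's sources: the function and variable names are invented for the example, and it assumes the commonly quoted ethanol density of about 0.789 g/ml together with the USDA figure of 6.93 kcal per gram cited earlier.

ETHANOL_DENSITY_G_PER_ML = 0.789   # approximate density of pure ethanol (assumed constant)
KCAL_PER_GRAM_ETHANOL = 6.93       # USDA energy figure cited in the Caloric content section
US_STANDARD_DRINK_GRAMS = 14.0     # roughly 0.6 US fl oz of pure ethanol

def describe_serving(volume_ml, abv_percent):
    """Return ethanol grams, US proof, kcal from ethanol, and US standard drinks."""
    ethanol_ml = volume_ml * abv_percent / 100.0        # millilitres of pure ethanol
    ethanol_g = ethanol_ml * ETHANOL_DENSITY_G_PER_ML   # grams of pure ethanol
    return {
        "ethanol_g": round(ethanol_g, 1),
        "us_proof": 2 * abv_percent,                    # US proof is twice the ABV
        "kcal_from_ethanol": round(ethanol_g * KCAL_PER_GRAM_ETHANOL),
        "us_standard_drinks": round(ethanol_g / US_STANDARD_DRINK_GRAMS, 2),
    }

# A 44 ml (1.5 US fl oz) shot at 40% ABV works out to roughly 14 g of ethanol,
# 80 US proof, about 96 kcal, and about one US standard drink, close to the figures above.
print(describe_serving(44, 40))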
10 grams is equivalent to 12.7 millilitres. A standard drink is a notional drink that contains a specified amount of pure alcohol. The standard drink is used in many countries to quantify alcohol intake. It is usually expressed as a measure of beer, wine, or spirits. One standard drink always contains the same amount of alcohol regardless of serving size or the type of alcoholic beverage. The standard drink varies significantly from country to country. For example, it is 7.62 ml (6 grams) of alcohol in Austria, but in Japan it is 25 ml (19.75 grams). In the United Kingdom, there is a system of units of alcohol which serves as a guideline for alcohol consumption. A single unit of alcohol is defined as 10 ml. The number of units present in a typical drink is sometimes printed on bottles. The system is intended as an aid to people who are regulating the amount of alcohol they drink; it is not used to determine serving sizes. In the United States, the standard drink contains 0.6 US fluid ounces (18 ml) of alcohol. This is approximately the amount of alcohol in a 12-US-fluid-ounce (350 ml) glass of beer, a 5-US-fluid-ounce (150 ml) glass of wine, or a 1.5-US-fluid-ounce (44 ml) glass of a 40% ABV (80 US proof) spirit. Laws Alcohol laws regulate the manufacture, packaging, labelling, distribution, sale, consumption, blood alcohol content of motor vehicle drivers, open containers, and transportation of alcoholic drinks. Such laws generally seek to reduce the adverse health and social impacts of alcohol consumption. In particular, alcohol laws set the legal drinking age, which usually varies between 15 and 21 years old, sometimes depending upon the type of alcoholic drink (e.g., beer vs wine vs hard liquor or distillates). Some countries do not have a legal drinking or purchasing age, but most countries set the minimum age at 18 years. Some countries, such as the U.S., have the drinking age higher than the legal age of majority (18), at age 21 in all 50 states. Such laws may take the form of permitting distribution only to licensed stores, monopoly stores, or pubs and they are often combined with taxation, which serves to reduce the demand for alcohol (by raising its price) and it is a form of revenue for governments. These laws also often limit the hours or days (e.g., "blue laws") on which alcohol may be sold or served, as can also be seen in the "last call" ritual in US and Canadian bars, where bartenders and servers ask patrons to place their last orders for alcohol, due to serving hour cutoff laws. In some countries, alcohol cannot be sold to a person who is already intoxicated. Alcohol laws in many countries prohibit drunk driving. In some jurisdictions, alcoholic drinks are totally prohibited for reasons of religion (e.g., Islamic countries with sharia law) or for reasons of local option, public health, and morals (e.g., Prohibition in the United States from 1920 to 1933). In jurisdictions which enforce sharia law, the consumption of alcoholic drinks is an illegal offense, although such laws may exempt non-Muslims. History 10,000–5000 BC: Discovery of late Stone Age jugs suggests that intentionally fermented drinks existed at least as early as the Neolithic period. 7000–5600 BC: Examination and analysis of ancient pottery jars from the neolithic village of Jiahu in the Henan province of northern China revealed residue left behind by the alcoholic drinks they had once contained. 
According to a study published in the Proceedings of the National Academy of Sciences, chemical analysis of the residue confirmed that a fermented drink made of grape and hawthorn fruit wine, honey mead and rice beer was being produced in 7000–5600 BC (McGovern et al., 2005; McGovern 2009). The results of this analysis were published in December 2004. 9th–10th centuries AD: Medieval Muslim chemists such as Jābir ibn Ḥayyān (Latin: Geber, ninth century) and Abū Bakr al-Rāzī (Latin: Rhazes, c. 865–925) experimented extensively with the distillation of various substances. The distillation of wine is attested in Arabic works attributed to al-Kindī (c. 801–873 CE) and to al-Fārābī (c. 872–950), and in the 28th book of al-Zahrāwīs (Latin: Abulcasis, 936–1013) Kitāb al-Taṣrīf (later translated into Latin as Liber servatoris). 12th century: The process of distillation spread from the Middle East to Italy, where distilled alcoholic drinks were recorded in the mid-12th century. In China, archaeological evidence indicates that the true distillation of alcohol began during the 12th century Jin or Southern Song dynasties. A still has been found at an archaeological site in Qinglong, Hebei, dating to the 12th century. 14th century: In India, the true distillation of alcohol was introduced from the Middle East, and was in wide use in the Delhi Sultanate by the 14th century. By the early 14th century, distilled alcoholic drinks had spread throughout the European continent. See also Alcohol (drug) Alcoholic drinks in China Beer and breweries by region Cooking with alcohol Holiday heart syndrome Homebrewing Liquor List of alcoholic drinks List of countries by alcohol consumption per capita List of national drinks Mixed drink References External links About 37 percent of college students could now be considered alcoholics, Daily Emerald Alcohol, Health-EU Portal, Health-EU Portal What Is a Standard Drink?, National Institute on Alcohol Abuse and Alcoholism
Anosmia
Anosmia, also known as smell blindness, is the loss of the ability to detect one or more smells. Anosmia may be temporary or permanent. It differs from hyposmia, which is a decreased sensitivity to some or all smells. Anosmia can be due to a number of factors, including an inflammation of the nasal mucosa, blockage of nasal passages or a destruction of one temporal lobe. Inflammation is due to chronic mucosa changes in the lining of the paranasal sinus and in the middle and superior turbinates. When anosmia is caused by inflammatory changes in the nasal passageways, it is treated simply by reducing inflammation. It can be caused by chronic meningitis and neurosyphilis that would increase intracranial pressure over a long period of time, and in some cases by ciliopathy, including ciliopathy due to primary ciliary dyskinesia. The term derives from the New Latin anosmia, based on Ancient Greek ἀν- (an-) + ὀσμή (osmḗ, "smell"); another related term, hyperosmia, refers to an increased ability to smell. Some people may be anosmic for one particular odor, a condition known as "specific anosmia". The absence of the sense of smell from birth is known as congenital anosmia. In the United States, 3% of people aged over 40 are affected by anosmia. Anosmia is a common symptom of COVID-19 and can persist as long COVID. Definition Anosmia is the inability to smell. It may be partial or total, and can be specific to certain smells. Reduced sensitivity to some or all smells is hyposmia. Signs and symptoms Anosmia can have a number of harmful effects. People with sudden-onset anosmia may find food less appetizing, though congenital anosmics rarely complain about this, and none report a loss in weight. Loss of smell can also be dangerous because it hinders the detection of gas leaks, fire, and spoiled food. The common view of anosmia as trivial can make it more difficult for a patient to receive the same types of medical aid as someone who has lost other senses, such as hearing or sight. Many experience one-sided loss of smell, often as a result of minor head trauma. This type of anosmia is normally only detected if both of the nostrils are tested separately. Testing each nostril separately will often show a reduced or even completely absent sense of smell in one nostril or both, something which is often not revealed if both nostrils are tested simultaneously. Losing an established and sentimental smell memory (e.g. the smell of grass, of the grandparents' attic, of a particular book, of loved ones, or of oneself) has been known to cause feelings of depression. Loss of the ability to smell may lead to the loss of libido, though this usually does not apply to loss of smell present at birth. Often, people who have loss of smell at birth report that they pretended to be able to smell as children because they thought that smelling was something that older/mature people could do, or did not understand the concept of smelling but did not want to appear different from others. When children get older, they often realize and report to their parents that they do not actually possess a sense of smell, often to the surprise of their parents. Causes A temporary loss of smell can be caused by a blocked nose or infection. In contrast, a permanent loss of smell may be caused by death of olfactory receptor neurons in the nose or by brain injury in which there is damage to the olfactory nerve or damage to brain areas that process smell (see olfactory system).
The lack of the sense of smell at birth, usually due to genetic factors, is referred to as congenital anosmia. Family members of the patient with congenital anosmia are often found with similar histories; this suggests that the anosmia may follow an autosomal dominant pattern. Anosmia may very occasionally be an early sign of a degenerative brain disease such as Parkinson's disease or Alzheimer's disease. Another specific cause of permanent loss could be damage to olfactory receptor neurons because of the use of certain types of nasal spray, i.e., those that cause vasoconstriction of the nasal microcirculation. To avoid such damage and the subsequent risk of loss of smell, vasoconstricting nasal sprays should be used only when absolutely necessary and then for only a short amount of time. Non-vasoconstricting sprays, such as those used to treat allergy-related congestion, are safe to use for prescribed periods of time. Anosmia can also be caused by nasal polyps. These polyps are found in people with allergies, histories of sinusitis, and family history. Individuals with cystic fibrosis often develop nasal polyps. Amiodarone is a drug used in the treatment of arrhythmias of the heart. A clinical study demonstrated that the use of this drug induced anosmia in some patients. Although rare, there was a case in which a 66-year-old male was treated with amiodarone for ventricular tachycardia. After the use of the drug he began experiencing olfactory disturbance; however, after the dosage of amiodarone was decreased, the severity of the anosmia decreased accordingly, suggesting a relationship between the use of amiodarone and the development of anosmia. COVID-19-related anosmia Chemosensory disturbances, including loss of smell or taste, are the predominant neurological symptom of COVID-19. As many as 80% of COVID-19 patients exhibit some change in chemesthesis, including smell. Loss of smell has also been found to be more predictive of COVID-19 than all other symptoms, including fever, cough or fatigue, based on a survey of 2 million participants in the UK and US. Google searches for "smell", "loss of smell", "anosmia", and other similar terms increased from the early months of the pandemic, and strongly correlated with increases in daily cases and deaths. Research into the mechanisms underlying these symptoms is currently ongoing. Many countries list anosmia as an official COVID-19 symptom, and some have developed "smell tests" as potential screening tools. In 2020, the Global Consortium for Chemosensory Research, a collaborative research organization of international smell and taste researchers, formed to investigate loss of smell and related chemosensory symptoms. List of causes Diagnosis Doctors will begin with a detailed elicitation of history. The doctor will then ask about any injuries or illnesses that could be related to the anosmia, such as upper respiratory infections or head injury. Psychophysical assessment of odor and taste identification can be used to identify anosmia. A nervous system examination is performed to see if the cranial nerves are damaged. The diagnosis, as well as the degree of impairment, can now be tested much more efficiently and effectively than ever before thanks to "smell testing kits" that have been made available, as well as screening tests which use materials that most clinics would readily have. Occasionally, after accidents, there is a change in a patient's sense of smell. Particular smells that were present before are no longer present.
On occasion, after head traumas, there are patients who have unilateral anosmia. The sense of smell should be tested individually in each nostril. Many cases of congenital anosmia remain unreported and undiagnosed. Since the disorder is present from birth, the individual may have little or no understanding of the sense of smell and hence is unaware of the deficit. It may also lead to reduction of appetite. Treatment Though anosmia caused by brain damage cannot be treated, anosmia caused by inflammatory changes in the mucosa may be treated with glucocorticoids. Reduction of inflammation through the use of oral glucocorticoids such as prednisone, followed by a long-term topical glucocorticoid nasal spray, would easily and safely treat the anosmia. A prednisone regimen is adjusted based on the degree of the thickness of mucosa, the discharge of oedema and the presence or absence of nasal polyps. However, the treatment is not permanent and may have to be repeated after a short while. Together with medication, pressure in the upper area of the nose must be mitigated through aeration and drainage. Anosmia caused by a nasal polyp may be treated by steroidal treatment or removal of the polyp. One experiment, in which two people were given a single dose of 1,000 mg of turmeric, reported improvements in COVID-19-induced anosmia (and ageusia); however, formal studies of this have yet to be done. Although very early in development, gene therapy has restored a sense of smell in mice with congenital anosmia caused by ciliopathy. In this case, a genetic condition had affected cilia in their bodies which normally enabled them to detect air-borne chemicals, and an adenovirus was used to implant a working version of the IFT88 gene into defective cells in the nose, which restored the cilia and allowed a sense of smell. Epidemiology In the United States, 3% of people aged over 40 are affected by anosmia. In 2012, smell was assessed in persons aged 40 years and older, with rates of anosmia/severe hyposmia of 0.3% at age 40–49 rising to 14.1% at age 80+. Rates of hyposmia were much higher: 3.7% at age 40–49 and 25.9% at 80+. See also Phantosmia Parosmia Anosmia Awareness Day Zicam, a medicine that caused some users to permanently lose their sense of smell Ageusia, the loss of the sense of taste References
Microcheilia
Microcheilia is a congenital disorder in which one's lips are unusually small. References
Field
Field may refer to: Expanses of open ground Field (agriculture), an area of land used for agricultural purposes Airfield, an aerodrome that lacks the infrastructure of an airport Battlefield Lawn, an area of mowed grass Meadow, a grassland that is either natural or allowed to grow unmowed and ungrazed Playing field, used for sports or games Arts and media In decorative art, the main area of a decorated zone, often contained within a border, often the background for motifs Field (heraldry), the background of a shield In flag terminology, the background of a flag FIELD (magazine), a literary magazine published by Oberlin College in Oberlin, Ohio Field (sculpture), by Anthony Gormley Organizations Field department, the division of a political campaign tasked with organizing local volunteers and directly contacting voters Field Enterprises, a defunct private holding company Field Communications, a division of Field Enterprises Field Museum of Natural History, in Chicago People Field (surname) Field Cate (born 1997), American child actor Places Field, British Columbia, Canada Field, Kentucky, United States Field, Minneapolis, Minnesota, United States Field, Ontario, Canada Field, Staffordshire, England, United Kingdom Field, South Australia Field Hill, British Columbia, Canada Field Island, Nunavut, Canada Mount Field (disambiguation), mountains in Canada, the United States, Australia and Antarctica Science, technology, and mathematics Computing Field (computer science), a smaller piece of data from a larger collection (e.g., database fields) Field-programmability, an electronic devices capability of being reprogrammed with new logic Geology Field (mineral deposit), a mineral deposit containing valuable resources in a cost-competitive concentration Polje or karst field, a characteristic landform in karst topography Mathematics Field (mathematics), type of algebraic structure Number field, specific type of the above algebraic structure Scalar field, assignment of a scalar to each point in a mathematical space Tensor field, assignment of a tensor to each point in a mathematical space Vector field, assignment of a vector to each point in a mathematical space Field of sets, a mathematical structure of sets in an abstract space Field of a binary relation, union of its domain and its range Optics Field of view, the area of a view imaged by a lens Visual field, the part of the field of view which can be perceived by the eyes retina Depth of field, the distance from before to beyond the subject that appears to be in focus (and likewise, field, in the context of depth, is the portion of a scene for which objects within its range are or would be in focus) Physics Field (physics), a mathematical construct for analysis of remote effects Electric field, term in physics to describe the energy that surrounds electrically charged particles Magnetic field, force produced by moving electric charges Electromagnetic field, combination of an electric field and magnetic field Gravitational field, a representation of the combined effects of remote masses on a test particle at each point Sociology Field (Bourdieu), a sociological term coined by Pierre Bourdieu to describe the system of objective relations constituted by various species of capital Sexual field, the systems of objective relations within collective sexual life Other uses in science and technology Field (geography), a spatially dependent variable Field (video), one half of a frame in an interlaced display Field coil, of an electric motor or generator 
Field experiment Field magnet, a magnet used to produce a magnetic field Field research or fieldwork, the collection of information outside a laboratory, library or workplace setting Field of heliostats, an assembly of heliostats acting together Sports Pitch (sports field) Other uses Field of study, a subdivision of an academic discipline Field of use, permissible operation by the licensee of a patent Track and field, a group of sports See also The Field (disambiguation) Fields (disambiguation) The Fields (disambiguation) Fielding (disambiguation) Feeld, a location-based social discovery service application for iOS and Android Feild, surname All pages with titles beginning with Field All pages with titles containing Field
Paragonimiasis
Paragonimiasis is a food-borne parasitic disease caused by several species of lung flukes belonging to the genus Paragonimus. Infection is acquired by eating crustaceans such as crabs and crayfishes which host the infective forms called metacercariae, or by eating raw or undercooked meat of mammals harboring metacercariae acquired from crustaceans. More than 40 species of Paragonimus have been identified; 10 of these are known to cause disease in humans. The most common cause of human paragonimiasis is P. westermani, the oriental lung fluke. About 22 million people are estimated to be affected yearly worldwide. It is particularly common in East Asia. Paragonimiasis is easily mistaken for other diseases with which it shares clinical symptoms, such as tuberculosis and lung cancer. Life cycle Not all Paragonimus species infect humans. However, all of them target mammals as their final (definitive) hosts. In mammalian lung tissue, the adult flukes live as encapsulated pairs. As hermaphrodites, they produce and fertilise their own eggs, which are released through the respiratory tract. The eggs are excreted to the environment either through the sputum or swallowed and passed out with the faeces. In the external environment, the eggs remain unembryonated until ideal conditions of temperature and humidity are encountered. Then, they embryonate and develop into ciliated larvae called miracidia. As the egg shells disintegrate, the motile miracidia hatch and swim to seek the first intermediate host, a snail, and penetrate its soft tissues. Each miracidium goes through several developmental stages inside the snail: firstly into a series of daughter cells called sporocysts and then into rediae, which give rise to many worm-like larvae called cercariae. The cercariae penetrate through the body of the snail, emerging into the water. Development in the snail takes about 9 to 13 weeks. The cercariae then infect the second intermediate host, a crustacean such as a crab or crayfish, where they encyst and become metacercariae. Encystment occurs in the liver, gills, intestine, skeletal muscles and sometimes in the heart. These cysts are the infective stage for the mammalian host. Freshwater crab species of the genera Potamiscus, Potamon, Paratelphusa, Eriocheir, Geothelphusa and Barytelphusa, crayfish species of the genus Cambaroides, and shrimps of the genera Macrobrachium and Caridina commonly serve as the secondary intermediate hosts. The secondary intermediate hosts are infected either by directly eating the snail or by penetration of the body by free-swimming cercariae. Human infection with P. westermani — the best understood species — occurs by eating inadequately cooked or pickled crab or crayfish that harbour metacercariae of the parasite. The metacercariae excyst in the duodenum, penetrate through the intestinal wall into the peritoneal cavity, then through the abdominal wall and diaphragm into the lungs, where they become encapsulated and develop into adults (7.5 to 12 mm by 4 to 6 mm). Unlike most other trematodes, after they migrate from the intestine, they remain in the peritoneal cavity until they find a suitable partner. Only then do the couples enter the lung tissues to form capsules. The flukes can also reach other organs and tissues, such as the brain and skeletal muscles. However, when this takes place the life cycle cannot be completed, because the eggs laid cannot exit these sites. The time from infection to laying of eggs is 65 to 90 days. Infections may persist for 20 years in humans.
Animals such as pigs, dogs, and a variety of feline species can also harbor P. westermani. For other species, rodents and deer are also additional (paratenic) hosts. By consuming infected animals of these reservoir species, even animals and humans that do not eat crustaceans directly can become infected. Background The first human case was seen in 1879 in Taiwan. An autopsy was done and adult trematodes were found in the lungs. The adult flukes are reddish-brown in color with an ovoid shape. They have two muscular suckers, the first an oral sucker located anteriorly and the second a ventral sucker located mid-body. The adult flukes can live up to 20 years. The eggs are golden brown in color and are asymmetrically ovoid. They have a very thick shell. As seen above, these trematodes have a very complex life cycle with seven distinct phases involving intermediate hosts and humans. These seven phases are outlined as follows: eggs reach fresh water, where they develop into miracidia. These penetrate many species of aquatic snails (the first intermediate host), where they go through three distinct stages: first sporocysts, then rediae, and finally cercariae, also referred to as the larvae. These larvae are released into the water and penetrate crabs, crayfish and other crustaceans (the second intermediate host). The cercariae situate themselves in the gills, liver and muscles, where they further develop into metacercariae. When the parasite-filled crustacean is eaten, the metacercariae hatch in the intestine. These young worms penetrate the intestinal wall, peritoneum, diaphragm and pleura, where they finally reach the lungs. Here they live in pairs and lay eggs that are coughed up in sputum to restart the cycle. Geographic distribution There are more than 30 known species of Paragonimus. Species of Paragonimus are widely distributed in Asia, Africa, and North and South America. P. westermani is found in southeast Asia and Japan, while P. kellicotti is endemic to North America. P. africanus is found in Africa and P. mexicanus is found in Central and South America. Just as the species names imply, paragonimiasis is more prominent in Asians, Africans and Hispanics because of their habitats and cultures. Prevalence increases with age from older children to young adults and then decreases in older age. It is also higher among females. This is a very common parasite of crustacean-eating mammals. Symptoms and diagnosis Paragonimiasis causes pneumonia with characteristic symptoms including prolonged cough, chest pain, shortness of breath, and hemoptysis. Owing to the diverse symptoms it presents, the disease is variously known as endemic haemoptysis, oriental lung fluke infection, pulmonary distomiasis, parasitical haemoptysis, and parasitäre Hämoptyse. Pulmonary paragonimiasis is the most common clinical manifestation, accounting for 76 to 90% of all infections. It has the classic symptoms of pneumonia. Extra-pulmonary infection is due to migration of the young worms away from the normal route to the lungs. In such cases, any other part of the body can be infected. Cutaneous paragonimiasis is common in children and is generally indicated by skin nodules that move from one place to another.
Cerebral paragonimiasis is the most severe extra-pulmonary form; it affects the brain and leads to seizures, headache, visual disturbance, and motor and sensory disturbances. The acute phase (invasion and migration) may be marked by diarrhea, abdominal pain, fever, cough, urticaria, hepatosplenomegaly, pulmonary abnormalities, and eosinophilia. During the chronic phase, pulmonary manifestations include cough, expectoration of discolored sputum containing clumps of eggs, hemoptysis, and chest radiographic abnormalities. Extrapulmonary locations of the adult worms result in more severe manifestations, especially when the brain is involved. Diagnosis is based on microscopic demonstration of eggs in stool or sputum, but these are not present until 2 to 3 months after infection. (Eggs are also occasionally encountered in effusion fluid or biopsy material.) Concentration techniques may be necessary in patients with light infections. Biopsy may allow diagnostic confirmation and species identification when an adult or developing fluke is recovered. Diagnosis is done by microscopic examination of sputum and stool samples, and the presence of eggs is a confirmation. However, eggs are not always to be found. In such cases, serological tests based on antibody detection using ELISA are a better method. A more arduous method such as immunoblotting is also used. For brain infection, radiological examinations including plain skull x-rays, brain CT, and MR scans are used. A rapid antibody detection kit, the dot-immunogold filtration assay (DIGFA), was developed for P. westermani in China in 2005. Misdiagnosis is a serious issue in paragonimiasis. It is commonly misdiagnosed as tuberculosis because it presents similar symptoms. In China, between 69 and 89% of cases were misdiagnosed over the 10 years from 2009 to 2019. It is also frequently misidentified as malignancy or chronic obstructive pulmonary disease. Treatment The drug of choice to treat paragonimiasis is praziquantel, although bithionol may also be used. Triclabendazole is useful in P. uterobilateralis, P. mexicanus, and P. skrjabini infections but not in P. westermani infection. See also Drunken shrimp Eating live seafood Odori ebi References
Ganglion
A ganglion is a group of neuron cell bodies in the peripheral nervous system. In the somatic nervous system this includes dorsal root ganglia and trigeminal ganglia among a few others. In the autonomic nervous system there are both sympathetic and parasympathetic ganglia, which contain the cell bodies of postganglionic sympathetic and parasympathetic neurons respectively. A pseudoganglion looks like a ganglion, but only has nerve fibers and has no nerve cell bodies. Structure Ganglia are primarily made up of somata and dendritic structures which are bundled or connected. Ganglia often interconnect with other ganglia to form a complex system of ganglia known as a plexus. Ganglia provide relay points and intermediary connections between different neurological structures in the body, such as the peripheral and central nervous systems. Among vertebrates there are three major groups of ganglia: Dorsal root ganglia (also known as the spinal ganglia) contain the cell bodies of sensory (afferent) neurons. Cranial nerve ganglia contain the cell bodies of cranial nerve neurons. Autonomic ganglia contain the cell bodies of autonomic nerves. In the autonomic nervous system, fibers from the central nervous system to the ganglia are known as preganglionic fibers, while those from the ganglia to the effector organ are called postganglionic fibers. Basal ganglia The term "ganglion" refers to the peripheral nervous system. However, in the brain (part of the central nervous system), the "basal ganglia" are a group of nuclei interconnected with the cerebral cortex, thalamus, and brainstem, associated with a variety of functions: motor control, cognition, emotions, and learning. Partly due to this ambiguity, the Terminologia Anatomica recommends using the term basal nuclei instead of basal ganglia; however, this usage has not been generally adopted. Pseudoganglion A pseudoganglion is a localized thickening of the main part or trunk of a nerve that has the appearance of a ganglion but has only nerve fibers and no nerve cell bodies. Pseudoganglia are found in the teres minor muscle and radial nerve. See also Sympathetic ganglion Ganglion cyst Nervous system Neuron Chiasm References External links Media related to Ganglia at Wikimedia Commons
Subacute thyroiditis
Subacute thyroiditis is a form of thyroiditis that can be a cause of both thyrotoxicosis and hypothyroidism. It is uncommon and can affect individuals of both sexes and of all ages, occurring three times as often in women as in men. The most common form, subacute granulomatous (de Quervain's) thyroiditis, manifests as a sudden and painful enlargement of the thyroid gland accompanied by fever, malaise and muscle aches. Indirect evidence has implicated viral infection in the etiology of subacute thyroiditis. This evidence is limited to preceding upper respiratory tract infection, elevated viral antibody levels, and both seasonal and geographical clustering of cases. There may be a genetic predisposition. Nishihara and coworkers studied the clinical features of subacute thyroiditis in 852 mostly 40- to 50-year-old women in Japan. They noted seasonal clusters (summer to early autumn), and most subjects presented with neck pain. Fever and symptoms of thyrotoxicosis were present in two-thirds of subjects. Upper respiratory tract infections in the month preceding presentation were reported in only 1 in 5 subjects. Recurrent episodes following resolution of the initial episode were rare, occurring in just 1.6% of cases. Laboratory markers for thyroid inflammation and dysfunction typically peaked within one week of onset of illness. Types Subacute granulomatous thyroiditis (de Quervain thyroiditis) Subacute lymphocytic thyroiditis Postpartum thyroiditis Palpation thyroiditis References
Periodic fever syndrome
Periodic fever syndromes are a set of disorders characterized by recurrent episodes of systemic and organ-specific inflammation. Unlike autoimmune disorders such as systemic lupus erythematosus, in which the disease is caused by abnormalities of the adaptive immune system, people with autoinflammatory diseases do not produce autoantibodies or antigen-specific T or B cells. Instead, the autoinflammatory diseases are characterized by errors in the innate immune system. The syndromes are diverse, but tend to cause episodes of fever, joint pain, skin rashes and abdominal pain, and may lead to chronic complications such as amyloidosis. Most autoinflammatory diseases are genetic and present during childhood. The most common genetic autoinflammatory syndrome is familial Mediterranean fever, which causes short episodes of fever, abdominal pain and serositis lasting less than 72 hours. It is caused by mutations in the MEFV gene, which codes for the protein pyrin. Pyrin is a protein normally present in the inflammasome. The mutated pyrin protein is thought to cause inappropriate activation of the inflammasome, leading to release of the pro-inflammatory cytokine IL-1β. Most other autoinflammatory diseases also cause disease by inappropriate release of IL-1β. Thus, IL-1β has become a common therapeutic target, and medications such as anakinra, rilonacept, and canakinumab have revolutionized the treatment of autoinflammatory diseases. However, there are some autoinflammatory diseases that are not known to have a clear genetic cause. These include PFAPA, the most common autoinflammatory disease seen in children, characterized by episodes of fever, aphthous stomatitis, pharyngitis, and cervical adenitis. Other autoinflammatory diseases that do not have clear genetic causes include adult-onset Still's disease, systemic-onset juvenile idiopathic arthritis, Schnitzler syndrome, and chronic recurrent multifocal osteomyelitis. It is likely that these diseases are multifactorial, with genes that make people susceptible to these diseases, but they require an additional environmental factor to trigger the disease. Individual periodic fever syndromes See also Kawasaki disease - possible autoinflammatory mechanism Multisystem inflammatory syndrome in children List of cutaneous conditions Further reading Hobart A. Reimann, Periodic Disease: a probable syndrome including periodic fever, benign paroxysmal peritonitis, cyclic neutropenia and intermittent arthralgia. JAMA, 1948. Hobart A. Reimann, Periodic Disease: periodic fever, periodic abdominalgia, cyclic neutropenia, intermittent arthralgia, angioneurotic edema, anaphylactoid purpura and periodic paralysis. JAMA, 1949. Hobart A. Reimann, Moadié J, Semerdjian S, Sahyoun PF, Periodic Peritonitis—Heredity and Pathology: report of seventy-two cases. JAMA, 1954. Hobart A. Reimann, Periodic fever, an entity: a collection of 52 cases. Am J Med Sci, 1962. References External links Understanding Autoinflammatory Diseases - US National Institute of Arthritis and Musculoskeletal and Skin Diseases
Idiopathic interstitial pneumonia
Idiopathic interstitial pneumonia (IIP), or noninfectious pneumonia, is a class of diffuse lung diseases. These diseases typically affect the pulmonary interstitium, although some also have a component affecting the airways (for instance, cryptogenic organizing pneumonitis). There are seven recognized distinct subtypes of IIP. Diagnosis Classification can be complex, and the combined efforts of clinicians, radiologists, and pathologists can help in the generation of a more specific diagnosis. Idiopathic interstitial pneumonia can be subclassified based on histologic appearance into the following patterns: Usual interstitial pneumonia is the most common type. Development Table 1: Development of the (histologic) idiopathic interstitial pneumonia classification. Abbreviations: UIP = usual interstitial pneumonia; DAD = diffuse alveolar damage; NSIP = non-specific interstitial pneumonia; DIP = desquamative interstitial pneumonia; RB = respiratory bronchiolitis; BIP = bronchiolitis obliterans interstitial pneumonia; OP = organizing pneumonia; LIP = lymphoid interstitial pneumonia; LPD = lymphoproliferative disease (not considered a diffuse lung disease); GIP = giant cell interstitial pneumonia; HMF = heavy metal fibrosis (no longer grouped with diffuse lung disease). Lymphoid interstitial pneumonia was originally included in this category, then excluded, then included again. References
Avascular necrosis
Avascular necrosis (AVN), also called osteonecrosis or bone infarction, is death of bone tissue due to interruption of the blood supply. Early on, there may be no symptoms. Gradually, joint pain may develop which may limit the ability to move. Complications may include collapse of the bone or nearby joint surface. Risk factors include bone fractures, joint dislocations, alcoholism, and the use of high-dose steroids. The condition may also occur without any clear reason. The most commonly affected bone is the femur. Other relatively common sites include the upper arm bone, knee, shoulder, and ankle. Diagnosis is typically by medical imaging such as X-ray, CT scan, or MRI. Rarely, biopsy may be used. Treatments may include medication, not walking on the affected leg, stretching, and surgery. Most of the time surgery is eventually required and may include core decompression, osteotomy, bone grafts, or joint replacement. About 15,000 cases occur per year in the United States. People 30 to 50 years old are most commonly affected. Males are more commonly affected than females. Signs and symptoms In many cases, there is pain and discomfort in a joint which increases over time. While it can affect any bone, about half of cases show multiple sites of damage. Avascular necrosis most commonly affects the ends of long bones such as the femur. Other common sites include the humerus, knees, shoulders, ankles and the jaw. Causes The main risk factors are bone fractures, joint dislocations, alcoholism, and the use of high-dose steroids. Other risk factors include radiation therapy, chemotherapy, and organ transplantation. Osteonecrosis is also associated with cancer, lupus, sickle cell disease, HIV infection, Gaucher's disease, and Caisson disease (dysbaric osteonecrosis). The condition may also occur without any clear reason. Bisphosphonates are associated with osteonecrosis of the mandible. Prolonged, repeated exposure to high pressures (as experienced by commercial and military divers) has been linked to AVN, though the relationship is not well understood. In children, avascular osteonecrosis can have several causes. It can occur in the hip as part of Legg–Calvé–Perthes syndrome, and it can also occur after treatment for malignancies such as acute lymphoblastic leukemia and after allotransplantation. Pathophysiology The hematopoietic cells are most sensitive to low oxygen and are the first to die after reduction or removal of the blood supply, usually within 12 hours. Experimental evidence suggests that bone cells (osteocytes, osteoclasts, osteoblasts, etc.) die within 12–48 hours, and that bone marrow fat cells die within 5 days. Upon reperfusion, repair of bone occurs in 2 phases. First, there is angiogenesis and movement of undifferentiated mesenchymal cells from adjacent living bone tissue into the dead marrow spaces, as well as entry of macrophages that degrade dead cellular and fat debris. Second, there is cellular differentiation of mesenchymal cells into osteoblasts or fibroblasts. Under favorable conditions, the remaining inorganic mineral volume forms a framework for establishment of new, fully functional bone tissue. Diagnosis In the early stages, bone scintigraphy and MRI are the preferred diagnostic tools. X-ray images of avascular necrosis in the early stages usually appear normal. In later stages it appears relatively more radio-opaque due to the nearby living bone becoming resorbed secondary to reactive hyperemia.
The necrotic bone itself does not show increased radiographic opacity, as dead bone cannot undergo bone resorption, which is carried out by living osteoclasts. Late radiographic signs also include a radiolucent area following the collapse of subchondral bone (crescent sign) and ringed regions of radiodensity resulting from saponification and calcification of marrow fat following medullary infarcts. Types When AVN affects the scaphoid bone, it is known as Preiser disease. Another named form of AVN is Köhler disease, which affects the navicular bone of the foot, primarily in children. Yet another form of AVN is Kienböck's disease, which affects the lunate bone in the wrist. Treatment A variety of methods may be used to treat avascular necrosis, the most common being total hip replacement (THR). However, THRs have a number of downsides, including long recovery times and short life spans of the replacement joints. THRs are an effective means of treatment in the older population; however, in younger people, they may wear out before the end of a person's life. Other techniques such as metal-on-metal resurfacing may not be suitable in all cases of avascular necrosis; their suitability depends on how much damage has occurred to the femoral head. Bisphosphonates, which reduce the rate of bone breakdown, may prevent collapse (specifically of the hip) due to AVN. Core decompression Other treatments include core decompression, where internal bone pressure is relieved by drilling a hole into the bone, and a living bone chip and an electrical device to stimulate new vascular growth are implanted; and the free vascular fibular graft (FVFG), in which a portion of the fibula, along with its blood supply, is removed and transplanted into the femoral head. A 2016 Cochrane review found no clear improvement in people who had hip core decompression and participated in physical therapy, compared with physical therapy alone. There is additionally no strong research on the effectiveness of hip core decompression for people with sickle cell disease. Progression of the disease could possibly be halted by transplanting nucleated cells from bone marrow into avascular necrosis lesions after core decompression, although much further research is needed to establish this technique. Prognosis The amount of disability that results from avascular necrosis depends on what part of the bone is affected, how large an area is involved, and how effectively the bone rebuilds itself. The process of bone rebuilding takes place after an injury as well as during normal growth. Normally, bone continuously breaks down and rebuilds—old bone is resorbed and replaced with new bone. The process keeps the skeleton strong and helps it to maintain a balance of minerals. In the course of avascular necrosis, however, the healing process is usually ineffective and the bone tissues break down faster than the body can repair them. If left untreated, the disease progresses, the bone collapses, and the joint surface breaks down, leading to pain and arthritis. Epidemiology Avascular necrosis usually affects people between 30 and 50 years of age; about 10,000 to 20,000 people develop avascular necrosis of the head of the femur in the US each year. Society and culture Cases of avascular necrosis have been identified in a few high-profile athletes. It abruptly ended the career of American football running back Bo Jackson in 1991.
Doctors discovered that Jackson had lost all of the cartilage supporting his hip while he was undergoing tests following a hip injury sustained on the field during a 1991 NFL playoff game. Avascular necrosis of the hip was also identified in a routine medical check-up on quarterback Brett Favre following his trade to the Green Bay Packers in 1991. However, Favre would go on to have a long career at the Packers. Another high-profile athlete was American road racing cyclist Floyd Landis, winner of the 2006 Tour de France, the title being subsequently stripped from his record by cycling's governing bodies after his blood samples tested positive for banned substances. During that tour, Landis was allowed cortisone shots to help manage his ailment, despite cortisone also being a banned substance in professional cycling at the time. Rafael Nadal successfully continued his tennis career after having surgery for Mueller-Weiss syndrome (osteonecrosis of the navicular). References External links Osteonecrosis / Avascular Necrosis at the National Institutes of Health Osteonecrosis / Avascular necrosis at Merck Manual for patients Osteonecrosis / Avascular necrosis at Merck Manual for medical professionals
Side
Side or Sides may refer to: Geometry Edge (geometry) of a polygon (two-dimensional shape) Face (geometry) of a polyhedron (three-dimensional shape) Places Side (Ainis), a town of Ainis, ancient Thessaly, Greece Side (Caria), a town of ancient Caria, Anatolia Side (Laconia), a town of ancient Laconia, Greece Side (Pontus), a town of ancient Pontus, Anatolia Side, Turkey, a city in Turkey Side, Iran, a village in Iran Side, Gloucestershire, or Syde, a village in England Music Side (recording), the A-side or B-side of a record The Side, a Scottish rock band Sides (album), a 1979 album by Anthony Phillips Sides, a 2020 album by Emily King "Side" (song), a 2001 song by Travis "Sides", a song by Flobots from the album The Circle in the Square, 2012 "Sides", a song by Allday from the album Speeding, 2017 Teams Side (cue sports technique) Side, a team, in particular: Sports team Other uses Side (mythology), one of three mythological figures Side, a Morris dance team Sideboard (cards), known as a "side" in some collectible card games Sides (surname), a surname Side dish, a food item accompanying a main course School of Isolated and Distance Education, a public school in Perth, Western Australia Secretaría de Inteligencia, the premier intelligence agency of the Argentine Republic Social identity model of deindividuation effects, in social psychology Side (gay sex), a term describing gay men who are not interested in anal sex See also All pages with titles beginning with Side All pages with titles containing side Cide (disambiguation) Relative direction, left and right Sidle (disambiguation)
Farmers lung
Farmers lung (not to be confused with silo-fillers disease) is a hypersensitivity pneumonitis induced by the inhalation of biologic dusts, such as hay dust or mold spores, from agricultural products. It results in a type III hypersensitivity inflammatory response and can progress to become a chronic condition that is considered potentially dangerous. Signs and symptoms Acute Stage: Appears four to eight hours after exposure, with symptoms such as headache, irritating cough, and shortness of breath upon physical exertion. Subacute Stage: Symptoms persist without further exposure, and increase in severity. Symptoms include: shortness of breath upon exertion, chronic coughing, physical weakness, occasional fever and sweating, decrease in appetite, aches and pains. Chronic Stage: Debilitating effects are now considered long-term. Symptoms include: severe shortness of breath, chronic coughing, physical weakness, occasional fever and sweating at night, decrease in appetite, and general aches and pains. These symptoms develop between four and eight hours after exposure to the antigens. In acute attacks, the symptoms mimic pneumonia or flu. In chronic attacks, there is a possibility of the victim going into shock and dying from the attack. Causes Permanent lung damage can arise from a failure to recognize the cause of the symptoms. Farmers lung occurs because repeated exposure to antigens, found in the mold spores of hay, crops, and animal feed, triggers an allergic reaction within the farmer's immune system. The defense mechanisms of the body present as cold and flu-like symptoms that occur in individuals who experience either acute or chronic reactions. The mold spores are inhaled and provoke the creation of IgE antibodies that circulate in the bloodstream; these types of immune response are most often initiated by exposure to thermophilic actinomycetes (most commonly Saccharopolyspora rectivirgula), which generate IgG-type antibodies. Following a subsequent exposure, IgG antibodies combine with the inhaled allergen to form immune complexes in the walls of the alveoli in the lungs. This causes fluid, protein, and cells to accumulate in the alveolar wall, which slows blood–gas exchange and compromises the function of the lung. After multiple exposures, it takes less and less of the antigens to set off the reaction in the lung. Prevention Farmers lung disease (FLD) is permanent and cannot be reversed; therefore, to prevent the onset of further stages, farmers should inform their doctor of their occupation and of any mold in their work environment. Prevention of this respiratory illness can be facilitated through the ventilation of work areas, drying of materials, and the use of a mask when working in confined areas with moldy hay or crops. Diagnosis Diagnosis of Farmers lung is difficult due to its similarity to cold and flu-like symptoms. Doctors diagnose patients with Farmers lung under the following conditions: A clinical history of symptoms such as cough, fever, and labored breathing when exposed to mold in the work environment. The presence of diffuse lung disease in chronic cases. 
Presentation of antibodies when exposed to thermophilic Actinomyces. Examination procedures may include a blood test, a chest x-ray, a breathing capacity test, an inhalation challenge, examination of lung tissue, an immunological investigation, a lung function test, and a review of the clinical history. Treatment Depending on the severity of the symptoms, FLD can last from one to two weeks, or it can last for the rest of one's life. Acute FLD can be treated because hypersensitivity to the antigens has not yet developed. The main treatment options are rest and reducing exposure to the antigens through masks and increased airflow in confined spaces where the antigens are present. Any exposure to the antigens once hypersensitivity has occurred can set off another chronic reaction. For chronic FLD, there are no true treatments because the patient has developed hypersensitivity, meaning that their condition will last the rest of their life. Epidemiology The growth of mold spores occurs when hay is not dried properly. These mold spores accumulate over time and affect the host once released from the source. Once airborne, the particles may be inhaled by the farmer and induce an allergic reaction. The hay at greatest risk of harboring increased volumes of spores is found at the bottom of the pile. The presence of Farmers lung disease peaks during late winter and early spring, and it is mostly seen after the harvest season, once symptoms have set in. This disease is most prevalent in damp climates. See also Organic dust toxic syndrome References
Megaureter
Megaureter is a medical anomaly whereby the ureter is abnormally dilated. Congenital megaureter is an uncommon condition which is more common in males, may be bilateral, and is often associated with other congenital anomalies. The cause is thought to be aperistalsis of the distal ureter, leading to dilatation. The cutoff value for megaureter is a ureteral diameter wider than 6 or 7 mm. A functional obstruction at the lower end of the ureter leads to progressive dilatation and a tendency to infection. The ureteric orifice appears normal and a ureteric catheter passes easily. Definitive surgical treatment involves refashioning the lower end of the affected ureter so that a tunnelled reimplantation into the bladder can be done to prevent reflux. References Bailey and Love's Short Practice of Surgery External links
Decompression sickness
Decompression sickness (abbreviated DCS; also called divers disease, the bends, aerobullosis, and caisson disease) is a medical condition caused by dissolved gases emerging from solution as bubbles inside the body tissues during decompression. DCS most commonly occurs during or soon after a decompression ascent from underwater diving, but can also result from other causes of depressurisation, such as emerging from a caisson, decompression from saturation, flying in an unpressurised aircraft at high altitude, and extravehicular activity from spacecraft. DCS and arterial gas embolism are collectively referred to as decompression illness. Since bubbles can form in or migrate to any part of the body, DCS can produce many symptoms, and its effects may vary from joint pain and rashes to paralysis and death. DCS often causes air bubbles to settle in major joints like knees or elbows, causing individuals to bend over in excruciating pain, hence its common name, the bends. Individual susceptibility can vary from day to day, and different individuals under the same conditions may be affected differently or not at all. The classification of types of DCS according to symptoms has evolved since its original description in the 19th century. The severity of symptoms varies from barely noticeable to rapidly fatal. The risk of DCS caused by diving can be managed through proper decompression procedures, and contracting the condition has become uncommon. Its potential severity has driven much research to prevent it, and divers almost universally use dive tables or dive computers to limit their exposure and to monitor their ascent speed. If DCS is suspected, it is treated by hyperbaric oxygen therapy in a recompression chamber. Where a chamber is not accessible within a reasonable time frame, in-water recompression may be indicated for a narrow range of presentations, if there are suitably skilled personnel and appropriate equipment available on site. Diagnosis is confirmed by a positive response to the treatment. Early treatment results in a significantly higher chance of successful recovery. Classification DCS is classified by symptoms. The earliest descriptions of DCS used the terms: "bends" for joint or skeletal pain; "chokes" for breathing problems; and "staggers" for neurological problems. In 1960, Golding et al. introduced a simpler classification using the term "Type I (simple)" for symptoms involving only the skin, musculoskeletal system, or lymphatic system, and "Type II (serious)" for symptoms where other organs (such as the central nervous system) are involved. Type II DCS is considered more serious and usually has worse outcomes. This system, with minor modifications, may still be used today. Following changes to treatment methods, this classification is now much less useful in diagnosis, since neurological symptoms may develop after the initial presentation, and both Type I and Type II DCS have the same initial management. Decompression illness and dysbarism The term dysbarism encompasses decompression sickness, arterial gas embolism, and barotrauma, whereas decompression sickness and arterial gas embolism are commonly classified together as decompression illness when a precise diagnosis cannot be made. DCS and arterial gas embolism are treated very similarly because they are both the result of gas bubbles in the body. The U.S. Navy prescribes identical treatment for Type II DCS and arterial gas embolism. 
Their spectra of symptoms also overlap, although the symptoms from arterial gas embolism are generally more severe because they often arise from an infarction (blockage of blood supply and tissue death). Signs and symptoms While bubbles can form anywhere in the body, DCS is most frequently observed in the shoulders, elbows, knees, and ankles. Joint pain ("the bends") accounts for about 60% to 70% of all altitude DCS cases, with the shoulder being the most common site for altitude and bounce diving, and the knees and hip joints for saturation and compressed air work. Neurological symptoms are present in 10% to 15% of DCS cases with headache and visual disturbances being the most common symptom. Skin manifestations are present in about 10% to 15% of cases. Pulmonary DCS ("the chokes") is very rare in divers and has been observed much less frequently in aviators since the introduction of oxygen pre-breathing protocols. The table below shows symptoms for different DCS types. Frequency The relative frequencies of different symptoms of DCS observed by the U.S. Navy are as follows: Onset Although onset of DCS can occur rapidly after a dive, in more than half of all cases symptoms do not begin to appear for at least an hour. In extreme cases, symptoms may occur before the dive has been completed. The U.S. Navy and Technical Diving International, a leading technical diver training organization, have published a table that documents time to onset of first symptoms. The table does not differentiate between types of DCS, or types of symptom. Causes DCS is caused by a reduction in ambient pressure that results in the formation of bubbles of inert gases within tissues of the body. It may happen when leaving a high-pressure environment, ascending from depth, or ascending to altitude. A closely related condition of bubble formation in body tissues due to isobaric counterdiffusion can occur with no change of pressure. Ascent from depth DCS is best known as a diving disorder that affects divers having breathed gas that is at a higher pressure than the surface pressure, owing to the pressure of the surrounding water. The risk of DCS increases when diving for extended periods or at greater depth, without ascending gradually and making the decompression stops needed to slowly reduce the excess pressure of inert gases dissolved in the body. The specific risk factors are not well understood and some divers may be more susceptible than others under identical conditions. DCS has been confirmed in rare cases of breath-holding divers who have made a sequence of many deep dives with short surface intervals, and may be the cause of the disease called taravana by South Pacific island natives who for centuries have dived by breath-holding for food and pearls.Two principal factors control the risk of a diver developing DCS: the rate and duration of gas absorption under pressure – the deeper or longer the dive the more gas is absorbed into body tissue in higher concentrations than normal (Henrys Law); the rate and duration of outgassing on depressurization – the faster the ascent and the shorter the interval between dives the less time there is for absorbed gas to be offloaded safely through the lungs, causing these gases to come out of solution and form "micro bubbles" in the blood.Even when the change in pressure causes no immediate symptoms, rapid pressure change can cause permanent bone injury called dysbaric osteonecrosis (DON). DON can develop from a single exposure to rapid decompression. 
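As a rough sense of scale for the pressures involved (a standard hydrostatic approximation, not a figure taken from any dive table), the ambient pressure at depth $d$ in seawater is approximately

$P_{\text{ambient}} \approx P_{\text{atm}} + \rho g d \approx 1\,\text{bar} + 0.1\,\text{bar/m} \times d$

so at about 30 m a diver breathes gas at roughly 4 bar and, given enough time, the tissues tend toward roughly four times their surface inert-gas loading.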
Leaving a high-pressure environment When workers leave a pressurized caisson or a mine that has been pressurized to keep water out, they will experience a significant reduction in ambient pressure. A similar pressure reduction occurs when astronauts exit a space vehicle to perform a space-walk or extra-vehicular activity, where the pressure in their spacesuit is lower than the pressure in the vehicle.The original name for DCS was "caisson disease". This term was introduced in the 19th century, when caissons under pressure were used to keep water from flooding large engineering excavations below the water table, such as bridge supports and tunnels. Workers spending time in high ambient pressure conditions are at risk when they return to the lower pressure outside the caisson if the pressure is not reduced slowly. DCS was a major factor during construction of Eads Bridge, when 15 workers died from what was then a mysterious illness, and later during construction of the Brooklyn Bridge, where it incapacitated the project leader Washington Roebling. On the other side of the Manhattan island during construction of the Hudson River Tunnel contractors agent Ernest William Moir noted in 1889 that workers were dying due to decompression sickness and pioneered the use of an airlock chamber for treatment. Ascent to altitude The most common health risk on ascent to altitude is not decompression sickness but altitude sickness, or acute mountain sickness (AMS), which has an entirely different and unrelated set of causes and symptoms. AMS results not from the formation of bubbles from dissolved gasses in the body but from exposure to a low partial pressure of oxygen and alkalosis. However, passengers in unpressurized aircraft at high altitude may also be at some risk of DCS.Altitude DCS became a problem in the 1930s with the development of high-altitude balloon and aircraft flights but not as great a problem as AMS, which drove the development of pressurized cabins, which coincidentally controlled DCS. Commercial aircraft are now required to maintain the cabin at or below a pressure altitude of 2,400 m (7,900 ft) even when flying above 12,000 m (39,000 ft). Symptoms of DCS in healthy individuals are subsequently very rare unless there is a loss of pressurization or the individual has been diving recently. Divers who drive up a mountain or fly shortly after diving are at particular risk even in a pressurized aircraft because the regulatory cabin altitude of 2,400 m (7,900 ft) represents only 73% of sea level pressure.Generally, the higher the altitude the greater the risk of altitude DCS but there is no specific, maximum, safe altitude below which it never occurs. There are very few symptoms at or below 5,500 m (18,000 ft) unless patients had predisposing medical conditions or had dived recently. There is a correlation between increased altitudes above 5,500 m (18,000 ft) and the frequency of altitude DCS but there is no direct relationship with the severity of the various types of DCS. A US Air Force study reports that there are few occurrences between 5,500 m (18,000 ft) and 7,500 m (24,600 ft) and 87% of incidents occurred at or above 7,500 m (24,600 ft). High-altitude parachutists may reduce the risk of altitude DCS if they flush nitrogen from the body by pre-breathing pure oxygen. Predisposing factors Although the occurrence of DCS is not easily predictable, many predisposing factors are known. They may be considered as either environmental or individual. 
Decompression sickness and arterial gas embolism in recreational diving are associated with certain demographic, environmental, and dive style factors. A statistical study published in 2005 tested potential risk factors: age, gender, body mass index, smoking, asthma, diabetes, cardiovascular disease, previous decompression illness, years since certification, dives in the last year, number of diving days, number of dives in a repetitive series, last dive depth, nitrox use, and drysuit use. No significant associations with risk of decompression sickness or arterial gas embolism were found for asthma, diabetes, cardiovascular disease, smoking, or body mass index. Increased depth, previous DCI, larger number of consecutive days diving, and being male were associated with higher risk for decompression sickness and arterial gas embolism. Nitrox and drysuit use, greater frequency of diving in the past year, increasing age, and years since certification were associated with lower risk, possibly as indicators of more extensive training and experience. Environmental The following environmental factors have been shown to increase the risk of DCS: the magnitude of the pressure reduction ratio – a large pressure reduction ratio is more likely to cause DCS than a small one. repetitive exposures – repetitive dives within a short period of time (a few hours) increase the risk of developing DCS. Repetitive ascents to altitudes above 5,500 metres (18,000 ft) within similar short periods increase the risk of developing altitude DCS. the rate of ascent – the faster the ascent the greater the risk of developing DCS. The U.S. Navy Diving Manual indicates that ascent rates greater than about 20 m/min (66 ft/min) when diving increase the chance of DCS, while recreational dive tables such as the Bühlmann tables require an ascent rate of 10 m/min (33 ft/min) with the last 6 m (20 ft) taking at least one minute. An individual exposed to a rapid decompression (high rate of ascent) above 5,500 metres (18,000 ft) has a greater risk of altitude DCS than being exposed to the same altitude but at a lower rate of ascent. the duration of exposure – the longer the duration of the dive, the greater is the risk of DCS. Longer flights, especially to altitudes of 5,500 m (18,000 ft) and above, carry a greater risk of altitude DCS. underwater diving before flying – divers who ascend to altitude soon after a dive increase their risk of developing DCS even if the dive itself was within the dive table safe limits. Dive tables make provisions for post-dive time at surface level before flying to allow any residual excess nitrogen to outgas. However, the pressure maintained inside even a pressurized aircraft may be as low as the pressure equivalent to an altitude of 2,400 m (7,900 ft) above sea level. Therefore, the assumption that the dive table surface interval occurs at normal atmospheric pressure is invalidated by flying during that surface interval, and an otherwise-safe dive may then exceed the dive table limits. diving before travelling to altitude – DCS can occur without flying if the person moves to a high-altitude location on land immediately after diving, for example, scuba divers in Eritrea who drive from the coast to the Asmara plateau at 2,400 m (7,900 ft) increase their risk of DCS. diving at altitude – diving in water whose surface pressure is significantly below sea level pressure — for example, Lake Titicaca is at 3,800 m (12,500 ft). 
Versions of decompression tables for altitudes exceeding 300 m (980 ft), or dive computers with high-altitude settings or surface pressure sensors may be used to reduce this risk. Individual The following individual factors have been identified as possibly contributing to increased risk of DCS: dehydration – Studies by Walder concluded that decompression sickness could be reduced in aviators when the serum surface tension was raised by drinking isotonic saline, and the high surface tension of water is generally regarded as helpful in controlling bubble size. Maintaining proper hydration is recommended. There is no convincing evidence that overhydration has any benefits, and it is implicated in immersion pulmonary oedema. patent foramen ovale – a hole between the atrial chambers of the heart in the fetus is normally closed by a flap with the first breaths at birth. In about 20% of adults the flap does not completely seal, however, allowing blood through the hole when coughing or during activities that raise chest pressure. In diving, this can allow venous blood with microbubbles of inert gas to bypass the lungs, where the bubbles would otherwise be filtered out by the lung capillary system, and return directly to the arterial system (including arteries to the brain, spinal cord and heart). In the arterial system, bubbles (arterial gas embolism) are far more dangerous because they block circulation and cause infarction (tissue death, due to local loss of blood flow). In the brain, infarction results in stroke, and in the spinal cord it may result in paralysis. a persons age – there are some reports indicating a higher risk of altitude DCS with increasing age. previous injury – there is some indication that recent joint or limb injuries may predispose individuals to developing decompression-related bubbles. ambient temperature – there is some evidence suggesting that individual exposure to very cold ambient temperatures may increase the risk of altitude DCS. Decompression sickness risk can be reduced by increased ambient temperature during decompression following dives in cold water. body type – typically, a person who has a high body fat content is at greater risk of DCS. This is due to nitrogens five times greater solubility in fat than in water, leading to greater amounts of total body dissolved nitrogen during time at pressure. Fat represents about 15–25 percent of a healthy adults body, but stores about half of the total amount of nitrogen (about 1 litre) at normal pressures. alcohol consumption – although alcohol consumption increases dehydration and therefore may increase susceptibility to DCS, a 2005 study found no evidence that alcohol consumption increases the incidence of DCS. Mechanism Depressurisation causes inert gases, which were dissolved under higher pressure, to come out of physical solution and form gas bubbles within the body. These bubbles produce the symptoms of decompression sickness. Bubbles may form whenever the body experiences a reduction in pressure, but not all bubbles result in DCS. The amount of gas dissolved in a liquid is described by Henrys Law, which indicates that when the pressure of a gas in contact with a liquid is decreased, the amount of that gas dissolved in the liquid will also decrease proportionately. On ascent from a dive, inert gas comes out of solution in a process called "outgassing" or "offgassing". Under normal conditions, most offgassing occurs by gas exchange in the lungs. 
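The proportionality just described can be written compactly (a textbook statement, with the solubility constant left unspecified):

$c = k_{\mathrm{H}}\, p_{\text{gas}}$

where $c$ is the concentration of dissolved gas, $p_{\text{gas}}$ is its partial pressure in contact with the liquid, and $k_{\mathrm{H}}$ is a solubility constant that depends on the gas, the tissue or fluid, and the temperature. A drop in ambient pressure therefore lowers the amount of gas the tissues can hold at equilibrium, creating the supersaturation that drives offgassing.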
If inert gas comes out of solution too quickly to allow outgassing in the lungs then bubbles may form in the blood or within the solid tissues of the body. The formation of bubbles in the skin or joints results in milder symptoms, while large numbers of bubbles in the venous blood can cause lung damage. The most severe types of DCS interrupt — and ultimately damage — spinal cord function, leading to paralysis, sensory dysfunction, or death. In the presence of a right-to-left shunt of the heart, such as a patent foramen ovale, venous bubbles may enter the arterial system, resulting in an arterial gas embolism. A similar effect, known as ebullism, may occur during explosive decompression, when water vapour forms bubbles in body fluids due to a dramatic reduction in environmental pressure. Inert gases The main inert gas in air is nitrogen, but nitrogen is not the only gas that can cause DCS. Breathing gas mixtures such as trimix and heliox include helium, which can also cause decompression sickness. Helium both enters and leaves the body faster than nitrogen, so different decompression schedules are required, but, since helium does not cause narcosis, it is preferred over nitrogen in gas mixtures for deep diving. There is some debate as to the decompression requirements for helium during short-duration dives. Most divers do longer decompressions; however, some groups like the WKPP have been experimenting with the use of shorter decompression times by including deep stops. The balance of evidence as of 2020 does not indicate that deep stops increase decompression efficiency. Any inert gas that is breathed under pressure can form bubbles when the ambient pressure decreases. Very deep dives have been made using hydrogen-oxygen mixtures (hydrox), but controlled decompression is still required to avoid DCS. Isobaric counterdiffusion DCS can also be caused at a constant ambient pressure when switching between gas mixtures containing different proportions of inert gas. This is known as isobaric counterdiffusion, and presents a problem for very deep dives. For example, after using a very helium-rich trimix at the deepest part of the dive, a diver will switch to mixtures containing progressively less helium and more oxygen and nitrogen during the ascent. Nitrogen diffuses into tissues 2.65 times slower than helium but is about 4.5 times more soluble. Switching between gas mixtures that have very different fractions of nitrogen and helium can result in "fast" tissues (those tissues that have a good blood supply) actually increasing their total inert gas loading. This is often found to provoke inner ear decompression sickness, as the ear seems particularly sensitive to this effect. Bubble formation The location of micronuclei or where bubbles initially form is not known. The most likely mechanisms for bubble formation are tribonucleation, when two surfaces make and break contact (such as in joints), and heterogeneous nucleation, where bubbles are created at a site based on a surface in contact with the liquid. Homogeneous nucleation, where bubbles form within the liquid itself is less likely because it requires much greater pressure differences than experienced in decompression. 
The spontaneous formation of nanobubbles on hydrophobic surfaces is a possible source of micronuclei, but it is not yet clear if these can grow large enough to cause symptoms as they are very stable. Once microbubbles have formed, they can grow either by a reduction in pressure or by diffusion of gas into the bubble from its surroundings. In the body, bubbles may be located within tissues or carried along with the bloodstream. The speed of blood flow within a blood vessel and the rate of delivery of blood to capillaries (perfusion) are the main factors that determine whether dissolved gas is taken up by tissue bubbles or circulation bubbles for bubble growth. Pathophysiology The primary provoking agent in decompression sickness is bubble formation from excess dissolved gases. Various hypotheses have been put forward for the nucleation and growth of bubbles in tissues, and for the level of supersaturation which will support bubble growth. The earliest bubble formation detected is subclinical intravascular bubbles detectable by Doppler ultrasound in the venous systemic circulation. The presence of these "silent" bubbles is no guarantee that they will persist and grow to be symptomatic. Vascular bubbles formed in the systemic capillaries may be trapped in the lung capillaries, temporarily blocking them. If this is severe, the symptom called "chokes" may occur. If the diver has a patent foramen ovale (or a shunt in the pulmonary circulation), bubbles may pass through it and bypass the pulmonary circulation to enter the arterial blood. If these bubbles are not absorbed in the arterial plasma and lodge in systemic capillaries they will block the flow of oxygenated blood to the tissues supplied by those capillaries, and those tissues will be starved of oxygen. Moon and Kisslo (1988) concluded that "the evidence suggests that the risk of serious neurological DCI or early onset DCI is increased in divers with a resting right-to-left shunt through a PFO. There is, at present, no evidence that PFO is related to mild or late onset bends." Bubbles form within other tissues as well as the blood vessels. Inert gas can diffuse into bubble nuclei between tissues. In this case, the bubbles can distort and permanently damage the tissue. As they grow, the bubbles may also compress nerves, causing pain. Extravascular or autochthonous bubbles usually form in slow tissues such as joints, tendons and muscle sheaths. Direct expansion causes tissue damage, with the release of histamines and their associated effects. Biochemical damage may be as important as, or more important than, mechanical effects. Bubble size and growth may be affected by several factors: gas exchange with adjacent tissues, the presence of surfactants, and coalescence and disintegration by collision. Vascular bubbles may cause direct blockage, aggregate platelets and red blood cells, and trigger the coagulation process, causing local and downstream clotting. Arteries may be blocked by intravascular fat aggregation. Platelets accumulate in the vicinity of bubbles. Endothelial damage may be a mechanical effect of bubble pressure on the vessel walls, a toxic effect of stabilised platelet aggregates, and possibly toxic effects due to the association of lipids with the air bubbles. 
Protein molecules may be denatured by reorientation of the secondary and tertiary structure when non-polar groups protrude into the bubble gas and hydrophilic groups remain in the surrounding blood, which may generate a cascade of pathophysiological events with consequent production of clinical signs of decompression sickness. The physiological effects of a reduction in environmental pressure depend on the rate of bubble growth, the site, and surface activity. A sudden release of sufficient pressure in saturated tissue results in a complete disruption of cellular organelles, while a more gradual reduction in pressure may allow accumulation of a smaller number of larger bubbles, some of which may not produce clinical signs, but still cause physiological effects typical of a blood/gas interface and mechanical effects. Gas is dissolved in all tissues, but decompression sickness is only clinically recognised in the central nervous system, bone, ears, teeth, skin and lungs. Necrosis has frequently been reported in the lower cervical, thoracic, and upper lumbar regions of the spinal cord. A catastrophic pressure reduction from saturation produces explosive mechanical disruption of cells by local effervescence, while a more gradual pressure loss tends to produce discrete bubbles accumulated in the white matter, surrounded by a protein layer. Typical acute spinal decompression injury occurs in the columns of white matter. Infarcts are characterised by a region of oedema, haemorrhage and early myelin degeneration, and are typically centred on small blood vessels. The lesions are generally discrete. Oedema usually extends to the adjacent grey matter. Microthrombi are found in the blood vessels associated with the infarcts. Following the acute changes there is an invasion of lipid phagocytes and degeneration of adjacent neural fibres with vascular hyperplasia at the edges of the infarcts. The lipid phagocytes are later replaced by a cellular reaction of astrocytes. Vessels in surrounding areas remain patent but are collagenised. Distribution of spinal cord lesions may be related to vascular supply. There is still uncertainty regarding the aetiology of decompression sickness damage to the spinal cord. Dysbaric osteonecrosis lesions are typically bilateral and usually occur at both ends of the femur and at the proximal end of the humerus. Symptoms are usually only present when a joint surface is involved, which typically does not occur until a long time after the causative exposure to a hyperbaric environment. The initial damage is attributed to the formation of bubbles, and one episode can be sufficient; however, incidence is sporadic and generally associated with relatively long periods of hyperbaric exposure, and the aetiology is uncertain. Early identification of lesions by radiography is not possible, but over time areas of radiographic opacity develop in association with the damaged bone. Diagnosis Diagnosis of decompression sickness relies almost entirely on clinical presentation, as there are no laboratory tests that can incontrovertibly confirm or reject the diagnosis. Various blood tests have been proposed, but they are not specific for decompression sickness, are of uncertain utility, and are not in general use. Decompression sickness should be suspected if any of the symptoms associated with the condition occurs following a drop in pressure, in particular within 24 hours of diving. In 1995, 95% of all cases reported to Divers Alert Network had shown symptoms within 24 hours. 
This window can be extended to 36 hours for ascent to altitude and 48 hours for prolonged exposure to altitude following diving. An alternative diagnosis should be suspected if severe symptoms begin more than six hours following decompression without an altitude exposure or if any symptom occurs more than 24 hours after surfacing. The diagnosis is confirmed if the symptoms are relieved by recompression. Although MRI or CT can frequently identify bubbles in DCS, they are not as good at determining the diagnosis as a proper history of the event and description of the symptoms. Test of pressure There is no gold standard for diagnosis, and DCI experts are rare. Most of the chambers open to treatment of recreational divers and reporting to Divers Alert Network see fewer than 10 cases per year, making it difficult for the attending doctors to develop experience in diagnosis. A method used by commercial diving supervisors when considering whether to recompress as first aid when they have a chamber on site, is known as the test of pressure. The diver is checked for contraindications to recompression, and if none are present, recompressed. If the symptoms resolve or reduce during recompression, it is considered likely that a treatment schedule will be effective. The test is not entirely reliable, and both false positives and false negatives are possible, however in the commercial diving environment it is often considered worth treating when there is doubt, and very early recompression has a history of very high success rates and reduced number of treatments needed for complete resolution and minimal sequelae. Differential diagnosis Symptoms of DCS and arterial gas embolism can be virtually indistinguishable. The most reliable way to tell the difference is based on the dive profile followed, as the probability of DCS depends on duration of exposure and magnitude of pressure, whereas AGE depends entirely on the performance of the ascent. In many cases it is not possible to distinguish between the two, but as the treatment is the same in such cases it does not usually matter.Other conditions which may be confused with DCS include skin symptoms cutis marmorata due to DCS and skin barotrauma due to dry suit squeeze, for which no treatment is necessary. Dry suit squeeze produces lines of redness with possible bruising where the skin was pinched between folds of the suit, while the mottled effect of cutis marmorata is usually on skin
where there is subcutaneous fat, and has no linear pattern.Transient episodes of severe neurological incapacitation with rapid spontaneous recovery shortly after a dive may be attributed to hypothermia, but may be symptomatic of short term CNS involvement, which may have residual problems or relapses. These cases are thought to be under-diagnosed.Inner ear decompression sickness (IEDCS) can be confused with inner ear barotrauma (IEBt), alternobaric vertigo, caloric vertigo and reverse squeeze. A history of difficulty in equalising the ears during the dive makes ear barotrauma more likely, but does not always eliminate the possibility of inner ear DCS, which is usually associated with deep, mixed gas dives with decompression stops. Both conditions may exist concurrently, and it can be difficult to distinguish whether a person has IEDCS, IEBt, or both. Numbness and tingling are associated with spinal DCS, but can also be caused by pressure on nerves (compression neurapraxia). In DCS the numbness or tingling is generally confined to one or a series of dermatomes, while pressure on a nerve tends to produce characteristic areas of numbness associated with the specific nerve on only one side of the body distal to the pressure point. A loss of strength or function is likely to be a medical emergency. A loss of feeling that lasts more than a minute or two indicates a need for immediate medical attention. It is only partial sensory changes, or paraesthesias, where this distinction between trivial and more serious injuries applies.Large areas of numbness with associated weakness or paralysis, especially if a whole limb is affected, are indicative of probable brain involvement and require urgent medical attention. Paraesthesias or weakness involving a dermatome indicate probable spinal cord or spinal nerve root involvement. Although it is possible that this may have other causes, such as an injured intervertebral disk, these symptoms indicate an urgent need for medical assessment. In combination with weakness, paralysis or loss of bowel or bladder control, they indicate a medical emergency. Prevention Underwater diving To prevent the excess formation of bubbles that can lead to decompression sickness, divers limit their ascent rate—the recommended ascent rate used by popular decompression models is about 10 metres (33 ft) per minute—and follow a decompression schedule as necessary. This schedule may require the diver to ascend to a particular depth, and remain at that depth until sufficient inert gas has been eliminated from the body to allow further ascent. Each of these is termed a "decompression stop", and a schedule for a given bottom time and depth may contain one or more stops, or none at all. Dives that contain no decompression stops are called "no-stop dives", but divers usually schedule a short "safety stop" at 3 to 6 m (10 to 20 ft), depending on the training agency or dive computer.The decompression schedule may be derived from decompression tables, decompression software, or from dive computers, and these are generally based upon a mathematical model of the bodys uptake and release of inert gas as pressure changes. 
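A minimal sketch of such a model follows, using one Haldanean tissue compartment with an illustrative tolerated-pressure check in the style of Bühlmann-type coefficients; the half-time and the a and b values below are assumed placeholders, not numbers from any published table, and this is not a dive-planning tool.

import math

def tissue_pressure(p_start, p_inspired, half_time_min, minutes):
    # Haldanean exponential uptake/elimination for a single tissue
    # compartment exposed to a constant inspired inert-gas pressure (bar).
    k = math.log(2) / half_time_min
    return p_inspired + (p_start - p_inspired) * math.exp(-k * minutes)

def tolerated_ambient_pressure(p_tissue, a, b):
    # Buhlmann-style ceiling: the lowest ambient pressure (bar) the loaded
    # compartment is assumed to tolerate without symptomatic bubble growth.
    return (p_tissue - a) * b

# Illustrative numbers only: one "medium" compartment with a 20-minute
# half-time, after 30 minutes at about 30 m on air, where the inspired
# nitrogen pressure is roughly 0.79 * 4 bar.
p_surface = 0.79 * 1.0
p_inspired = 0.79 * 4.0
p_tissue = tissue_pressure(p_surface, p_inspired, half_time_min=20, minutes=30)
ceiling = tolerated_ambient_pressure(p_tissue, a=0.7, b=0.8)
print(f"compartment N2 loading after 30 min: {p_tissue:.2f} bar")
print(f"tolerated ambient pressure: {ceiling:.2f} bar "
      "(above 1.0 bar implies the diver cannot yet surface directly)")

Real implementations track many such compartments in parallel (sixteen in the Bühlmann ZH-L16 family) and apply the most restrictive ceiling across all of them.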
These models, such as the Bühlmann decompression algorithm, are modified to fit empirical data and provide a decompression schedule for a given depth and dive duration using a specified breathing gas mixture.Since divers on the surface after a dive may still have excess inert gas in their bodies, decompression from any subsequent dive before this excess is eliminated needs to modify the schedule to take account of the residual gas load from the previous dive. This will result in a shorter allowable time under water without obligatory decompression stops, or an increased decompression time during the subsequent dive. The total elimination of excess gas may take many hours, and tables will indicate the time at normal pressures that is required, which may be up to 18 hours.Decompression time can be significantly shortened by breathing mixtures containing much less inert gas during the decompression phase of the dive (or pure oxygen at stops in 6 metres (20 ft) of water or less). The reason is that the inert gas outgases at a rate proportional to the difference between the partial pressure of inert gas in the divers body and its partial pressure in the breathing gas; whereas the likelihood of bubble formation depends on the difference between the inert gas partial pressure in the divers body and the ambient pressure. Reduction in decompression requirements can also be gained by breathing a nitrox mix during the dive, since less nitrogen will be taken into the body than during the same dive done on air.Following a decompression schedule does not completely protect against DCS. The algorithms used are designed to reduce the probability of DCS to a very low level, but do not reduce it to zero. The mathematical implications of all current decompression models are that provided that no tissue is ingassing, longer decompression stops will decrease decompression risk, or at worst not increase it. Efficient decompression requires the diver to ascend fast enough to establish as high a decompression gradient, in as many tissues, as safely possible, without provoking the development of symptomatic bubbles. This is facilitated by the highest acceptably safe oxygen partial pressure in the breathing gas, and avoiding gas changes that could cause counterdiffusion bubble formation or growth. The development of schedules that are both safe and efficient has been complicated by the large number of variables and uncertainties, including personal variation in response under varying environmental conditions and workload, attributed to variations of body type, fitness and other risk factors. Exposure to altitude One of the most significant breakthroughs in the prevention of altitude DCS is oxygen pre-breathing. Breathing pure oxygen significantly reduces the nitrogen loads in body tissues by reducing the partial pressure of nitrogen in the lungs, which induces diffusion of nitrogen from the blood into the breathing gas, and this effect eventually lowers the concentration of nitrogen in the other tissues of the body. If continued for long enough, and without interruption, this provides effective protection upon exposure to low-barometric pressure environments. 
However, breathing pure oxygen during flight alone (ascent, en route, descent) does not decrease the risk of altitude DCS as the time required for ascent is generally not sufficient to significantly desaturate the slower tissues.Pure aviator oxygen which has moisture removed to prevent freezing of valves at altitude is readily available and routinely used in general aviation mountain flying and at high altitudes. Most small general aviation aircraft are not pressurized, therefore oxygen use is an FAA requirement at higher altitudes. Although pure oxygen pre-breathing is an effective method to protect against altitude DCS, it is logistically complicated and expensive for the protection of civil aviation flyers, either commercial or private. Therefore, it is currently used only by military flight crews and astronauts for protection during high-altitude and space operations. It is also used by flight test crews involved with certifying aircraft, and may also be used for high-altitude parachute jumps. Astronauts aboard the International Space Station preparing for extra-vehicular activity (EVA) "camp out" at low atmospheric pressure, 10.2 psi (0.70 bar), spending eight sleeping hours in the Quest airlock chamber before their spacewalk. During the EVA they breathe 100% oxygen in their spacesuits, which operate at 4.3 psi (0.30 bar), although research has examined the possibility of using 100% O2 at 9.5 psi (0.66 bar) in the suits to lessen the pressure reduction, and hence the risk of DCS. Treatment Recompression on air was shown to be an effective treatment for minor DCS symptoms by Keays in 1909. Evidence of the effectiveness of recompression therapy utilizing oxygen was first shown by Yarbrough and Behnke, and has since become the standard of care for treatment of DCS. Recompression is normally carried out in a recompression chamber. At a dive site, a riskier alternative is in-water recompression.Oxygen first aid has been used as an emergency treatment for diving injuries for years. Particularly if given within the first four hours of surfacing, it increases the success of recompression therapy as well as decreasing the number of recompression treatments required. Most fully closed-circuit diving rebreathers can deliver sustained high concentrations of oxygen-rich breathing gas and could be used as a means of supplying oxygen if dedicated equipment is not available.It is beneficial to give fluids, as this helps reduce dehydration. It is no longer recommended to administer aspirin, unless advised to do so by medical personnel, as analgesics may mask symptoms. People should be made comfortable and placed in the supine position (horizontal), or the recovery position if vomiting occurs. In the past, both the Trendelenburg position and the left lateral decubitus position (Durants maneuver) have been suggested as beneficial where air emboli are suspected, but are no longer recommended for extended periods, owing to concerns regarding cerebral edema. First aid All cases of decompression sickness should be treated initially with the highest available concentration of oxygen until hyperbaric oxygen therapy (100% oxygen delivered in a hyperbaric chamber) can be provided. Mild cases of the "bends" and some skin symptoms may disappear during descent from high altitude; however, it is recommended that these cases still be evaluated. Neurological symptoms, pulmonary symptoms, and mottled or marbled skin lesions should be treated with hyperbaric oxygen therapy if seen within 10 to 14 days of development. 
Early recompression has a history of better outcomes and less treatment being needed. Normobaric oxygen administered at as close to 100% as practicable is known to be beneficial based on observed bubble reduction and symptom resolution. For this reason, diver training in oxygen administration, and a system for administering a high percentage of inspired oxygen in quantities sufficient for plausible evacuation scenarios, are desirable. Where oxygenation may be compromised, the administration rate should be adjusted to ensure that the best practicable supplementation is maintained until supplies can be replenished. A horizontal position is preferable during evacuation if possible, with the recovery position recommended for unconscious divers, as there is evidence that inert gas washout is improved in horizontal subjects, and that large arterial bubbles tend to distribute towards the head in upright positions. A head-down position is thought to be harmful in DCS. Oral hydration is recommended in fully conscious persons, and fluids should ideally be isotonic, without alcohol, carbonation or caffeine, as diving is known to cause dehydration, and rehydration is known to reduce post-dive venous gas emboli. Intravascular rehydration is recommended if suitably competent responders are present. Glucose-free isotonic crystalloid solutions are preferred. Case evidence shows that aggressive rehydration can be life-saving in severe cases. If there are no contraindications, a non-steroidal anti-inflammatory drug along with hyperbaric oxygen is likely to improve the rate of recovery. Corticosteroids, pentoxifylline, aspirin, lidocaine and nicergoline have been used in early management of DCS, but there is insufficient evidence on their effectiveness. Divers should be kept comfortably warm, but not overheated, as warm subjects are known to eliminate gas more quickly, but overheating aggravates neurological injury. Delay of recompression Observational evidence shows that outcomes are likely to be better after immediate recompression, which is only possible when a chamber is available on site, although the 2004 workshop on decompression came to the conclusion that for cases with mild symptoms, a delay before recompression is unlikely to cause any worsening of long-term outcomes. In more serious cases, recompression should be done as soon as safely possible. There is some evidence that delays longer than six hours result in slower or less complete recovery, and the number of treatments required may be increased. Transport of a symptomatic diver Exposing a case of decompression sickness to reduced ambient pressure will cause the bubbles to expand if not constrained by a rigid local tissue environment. This can aggravate the symptoms, and should be avoided if reasonably practicable. If a diver with DCS is transported by air, cabin pressure should be kept as close to sea level atmospheric pressure as possible, preferably with a cabin altitude of not more than 150 m, either by cabin pressurisation or by remaining at low altitude throughout the flight. The risk of deterioration at higher altitudes must be considered against the risk of deterioration if not transported. Some divers with symptoms or signs of mild decompression sickness may be evacuated by pressurised commercial airliner for further treatment after a surface interval of at least 24 hours. The 2004 workshop considered it unlikely for this to cause a worse outcome. Most experience has been for short flights of less than two hours. 
Little is known about the effects of longer flights. Where possible, pre-flight and in-flight oxygen breathing at the highest available percentage is considered best practice. Similar precautions apply to surface transport through higher altitudes. In-water recompression Recompression and hyperbaric oxygen administered in a recompression chamber is recognised as the definitive treatment for DCI, but when there is no readily available access to a suitable hyperbaric chamber, and if symptoms are significant or progressing, in-water recompression (IWR) with oxygen is a medically recognised option where a group of divers including the symptomatic diver already have relevant training and equipment that provides a sufficient understanding of the associated risks and allows the involved parties to collectively accept responsibility for a decision to proceed with IWR. In-water recompression (IWR), or underwater oxygen treatment, is the emergency treatment of decompression sickness by returning the diver underwater to help the gas bubbles in the tissues, which are causing the symptoms, to resolve. It is a procedure that exposes the diver to significant risk, which should be weighed against the risk associated with the other available options. Some authorities recommend that it only be used when the time to travel to the nearest recompression chamber is too long to save the victim's life; others take a more pragmatic approach and accept that in some circumstances IWR is the best available option. The risks may not be justified for cases of mild symptoms likely to resolve spontaneously, or for cases where the diver is likely to be unsafe in the water, but in-water recompression may be justified in cases where severe outcomes are likely, if conducted by a competent and suitably equipped team. Carrying out in-water recompression when there is a nearby recompression chamber, or without suitable equipment and training, is never a desirable option. The risk of the procedure arises because a diver suffering from DCS is seriously ill and may become paralysed, unconscious, or stop breathing while under water. Any one of these events is likely to result in the diver drowning or asphyxiating or suffering further injury during a subsequent rescue to the surface. This risk can be reduced by improving airway security by using surface-supplied gas and a helmet or full-face mask. Several schedules have been published for in-water recompression treatment, but little data on their efficacy are available. The decision of whether or not to attempt IWR depends on identifying the diver whose condition is serious enough to justify the risk, but whose clinical condition does not indicate that the risk is unacceptable. The risk may not be justified for mild DCI, if spontaneous recovery is probable whether the diver is recompressed or not, and surface oxygen is indicated for these cases. However, in these cases the risk of the recompression is also low, and early abandonment is also unlikely to cause further harm. Contraindications Some signs of decompression illness which suggest a risk of permanent injury are nevertheless considered contraindications for IWR. Hearing loss and vertigo displayed in isolation with no other symptoms of DCI may have been caused by inner ear barotrauma rather than DCI, and inner ear barotrauma is generally considered a contraindication for recompression. Even when caused by DCI, vertigo can make in-water treatment hazardous if accompanied by nausea and vomiting. 
A diver with a deteriorating level of consciousness or with a persisting reduced level of consciousness should also not be recompressed in-water, nor should a diver who does not want to go back down, who has a history of oxygen toxicity in the preceding dives, or who has any physical injury or incapacitation which may make the procedure unsafe. Definitive treatment The duration of recompression treatment depends on the severity of symptoms, the dive history, the type of recompression therapy used and the patient's response to the treatment. One of the more frequently used treatment schedules is the US Navy Table 6, which provides hyperbaric oxygen therapy with a maximum pressure equivalent to 60 feet (18 m) of seawater (2.8 bar PO2) for a total time under pressure of 288 minutes, of which 240 minutes are on oxygen and the balance are air breaks to minimise the possibility of oxygen toxicity. A multiplace chamber is the preferred facility for treatment of decompression sickness as it allows direct physical access to the patient by medical personnel, but monoplace chambers are more widely available and should be used for treatment if a multiplace chamber is not available or transportation would cause significant delay in treatment, as the interval between onset of symptoms and recompression is important to the quality of recovery. It may be necessary to modify the optimum treatment schedule to allow use of a monoplace chamber, but this is usually better than delaying treatment. A US Navy treatment table 5 can be safely performed without air breaks if a built-in breathing system is not available. In most cases the patient can be adequately treated in a monoplace chamber at the receiving hospital. Altitude decompression sickness Treatment and management may vary depending on the grade or form of decompression sickness and the treating facility or organization. First aid at altitude is oxygen at the highest practicable concentration and the earliest and largest practicable reduction in cabin altitude. Ground-level 100% oxygen therapy is suggested for 2 hours following type-1 decompression sickness that occurs at altitude, if it resolves upon descent. In more severe cases, hyperbaric oxygen therapy following standard recompression protocols is indicated. Decompression sickness in aviation most commonly follows flights in non-pressurized aircraft, flights with cabin pressure fluctuations, or flying after diving. Cases have also been reported after the use of altitude chambers. These are relatively rare clinical events. Prognosis Immediate treatment with 100% oxygen, followed by recompression in a hyperbaric chamber, will in most cases result in no long-term effects. However, permanent long-term injury from DCS is possible. Three-month follow-ups on diving accidents reported to DAN in 1987 showed that 14.3% of the 268 divers surveyed had ongoing symptoms of Type II DCS, and 7% had ongoing symptoms of Type I DCS. Long-term follow-ups showed similar results, with 16% having permanent neurological sequelae. Long-term effects depend on both the initial injury and its treatment. While almost all cases will resolve more quickly with treatment, milder cases may resolve adequately over time without recompression, where the damage is minor and is not significantly aggravated by lack of treatment. In some cases the cost, inconvenience, and risk to the patient may make it appropriate not to evacuate to a hyperbaric treatment facility. 
Such mild cases should be assessed by a specialist in diving medicine, which can generally be done remotely by telephone or internet. For joint pain, the likely tissues affected depend on the symptoms, and the urgency of hyperbaric treatment will depend largely on the tissues involved. Sharp, localised pain that is affected by movement suggests tendon or muscle injury, both of which will usually fully resolve with oxygen and anti-inflammatory medication. Sharp, localised pain that is not affected by movement suggests local inflammation, which will also usually fully resolve with oxygen and anti-inflammatory medication. Deep, non-localised pain affected by movement suggests joint capsule tension, which is likely to fully resolve with oxygen and anti-inflammatory medication, though recompression will help it to resolve faster. Deep, non-localised pain not affected by movement suggests bone medulla involvement, with ischaemia due to blood vessel blockage and swelling inside the bone, which is mechanistically associated with osteonecrosis, and therefore it has been strongly recommended that these symptoms are treated with hyperbaric oxygen. Epidemiology Decompression sickness is rare, with an estimated incidence of 2.8 to 4 cases per 10,000 dives, and with the risk 2.6 times greater for males than females. DCS affects approximately 1,000 U.S. scuba divers per year. In 1999, the Divers Alert Network (DAN) created "Project Dive Exploration" to collect data on dive profiles and incidents. From 1998 to 2002, they recorded 50,150 dives, from which 28 recompressions were required — although these will almost certainly contain incidents of arterial gas embolism (AGE) — a rate of about 0.05%. Around 2013, Honduras had the highest number of decompression-related deaths and disabilities in the world, caused by unsafe practices in lobster diving among the indigenous Miskito people, who face great economic pressures. At that time it was estimated that in the country over 2000 divers had been injured and 300 others had died since the 1970s. Timeline 1670: Robert Boyle demonstrated that a reduction in ambient pressure could lead to bubble formation in living tissue. This description of a bubble forming in the eye of a viper subjected to a near vacuum was the first recorded description of decompression sickness. 1769: Giovanni Morgagni described the post mortem findings of air in cerebral circulation and surmised that this was the cause of death. 1840: Charles Pasley, who was involved in the recovery of the sunken warship HMS Royal George, commented that, of those having made frequent dives, "not a man escaped the repeated attacks of rheumatism and cold". 1841: First documented case of decompression sickness, reported by a mining engineer who observed pain and muscle cramps among coal miners working in mine shafts air-pressurized to keep water out. 1854: Decompression sickness reported among caisson workers on the Royal Albert Bridge, with one resulting death. 1867: Panamanian pearl divers using the revolutionary Sub Marine Explorer submersible repeatedly experienced "fever" due to rapid ascents. Continued sickness led to the vessel's abandonment in 1869. 1870: Bauer published outcomes of 25 paralyzed caisson workers. From 1870 to 1910, all prominent features were established. Explanations at the time included: cold or exhaustion causing reflex spinal cord damage; electricity caused by friction on compression; organ congestion; and vascular stasis caused by decompression.
1871: The Eads Bridge in St Louis employed 352 compressed air workers, including Alphonse Jaminet as the physician in charge. There were 30 seriously injured and 12 fatalities. Jaminet himself developed decompression sickness and his personal description was the first such recorded. According to Divers Alert Network, in its Inert Gas Exchange, Bubbles and Decompression Theory course, this is where "bends" was first used to refer to DCS. 1872: The similarity between decompression sickness and iatrogenic air embolism, as well as the relationship between inadequate decompression and decompression sickness, was noted by Friedburg. He suggested that intravascular gas was released by rapid decompression and recommended: slow compression and decompression; four-hour working shifts; a limit to maximum pressure of 44.1 psig (4 atm); using only healthy workers; and recompression treatment for severe cases. 1873: Andrew Smith first used the term "caisson disease", describing 110 cases of decompression sickness as the physician in charge during construction of the Brooklyn Bridge. The project employed 600 compressed air workers. Recompression treatment was not used. The project's chief engineer, Washington Roebling, had caisson disease and endured the after-effects of the disease for the rest of his life. During this project, decompression sickness became known as "The Grecian Bends" or simply "the bends" because affected individuals characteristically bent forward at the hips: this is possibly reminiscent of a then-popular women's fashion and dance maneuver known as the Grecian Bend. 1890: During construction of the Hudson River Tunnel, contractor's agent Ernest William Moir pioneered the use of an airlock chamber for treatment. 1900: Leonard Hill used a frog model to prove that decompression causes bubbles and that recompression resolves them. Hill advocated linear or uniform decompression profiles. This type of decompression is used today by saturation divers. His work was financed by Augustus Siebe and the Siebe Gorman Company. 1904: Tunnel building to and from Manhattan Island caused over 3,000 injuries and over 30 deaths, which led to laws requiring PSI limits and decompression rules for "sandhogs" in the United States. 1904: Siebe and Gorman, in conjunction with Leonard Hill, developed and produced a closed bell in which a diver can be decompressed at the surface. 1908: "The Prevention of Compressed Air Illness" was published by JS Haldane, Boycott and Damant, recommending staged decompression. These tables were accepted for use by the Royal Navy. 1914–16: Experimental decompression chambers were in use on land and aboard ship. 1924: The US Navy published the first standardized recompression procedure. 1930s: Albert R Behnke separated the symptoms of Arterial Gas Embolism (AGE) from those of DCS. 1935: Behnke et al. experimented with oxygen for recompression therapy. 1937: Behnke introduced the "no-stop" decompression tables. 1941: Altitude DCS was treated with hyperbaric oxygen for the first time. 1944: The US Navy published hyperbaric treatment tables "Long Air Recompression Table with Oxygen" and "Short Oxygen Recompression Table", both using 100% oxygen below 60 fsw (18 msw). 1945: Field results showed that the 1944 oxygen treatment table was not yet satisfactory, so a series of tests were conducted by staff from the Navy Medical Research Institute and the Navy Experimental Diving Unit using human subjects to verify and modify the treatment tables.
Tests were conducted using the 100-foot air-oxygen treatment table and the 100-foot air treatment table, which were found to be satisfactory. Other tables were extended until they produced satisfactory results. The resulting tables were used as the standard treatment for the next 20 years, and these tables and slight modifications were adopted by other navies and industry. Over time, evidence accumulated that the success of these tables for severe decompression sickness was not very good. 1957: Robert Workman established a new method for calculation of decompression requirements (M-values). 1959: The "SOS Decompression Meter", a submersible mechanical device that simulated nitrogen uptake and release, was introduced. 1960: FC Golding et al. split the classification of DCS into Type 1 and 2. 1965: Low success rates of the existing US Navy treatment tables led to the development of the oxygen treatment table by Goodman and Workman in 1965, variations of which are still in general use as the definitive treatment for most cases of decompression sickness. 1965: LeMessurier and Hills published a paper on a thermodynamic approach arising from a study of Torres Strait diving techniques, which suggests that decompression by conventional models results in bubble formation which is then eliminated by re-dissolving at the decompression stops. 1976: M.P. Spencer showed that the sensitivity of decompression testing is increased by the use of ultrasonic methods which can detect mobile venous bubbles before symptoms of DCS emerge. 1982: Paul K Weathersby, Louis D Homer and Edward T Flynn introduced survival analysis into the study of decompression sickness. 1983: Orca produced the "EDGE", a personal dive computer, using a microprocessor to calculate nitrogen absorption for twelve tissue compartments. 1984: Albert A Bühlmann released his book "Decompression–Decompression Sickness", which detailed his deterministic model for calculation of decompression schedules. By 1989: Dive computers had not yet been widely accepted, but after the 1989 AAUS dive computer workshop published a group consensus list of recommendations for the use of dive computers in scientific diving, most opposition to dive computers dissipated, numerous new models were introduced, the technology dramatically improved and dive computers became standard scuba diving equipment. Over time, some of the recommendations became irrelevant as the technology improved. c. 2000: HydroSpace Engineering developed the HS Explorer, a Trimix computer with optional PO2 monitoring and twin decompression algorithms: Bühlmann, and the first full real-time RGBM implementation. 2001: The US Navy approved the use of the Cochran NAVY decompression computer with the VVAL 18 Thalmann algorithm for Special Warfare operations. By 2010: The use of dive computers for decompression status tracking was virtually ubiquitous among recreational divers and widespread in scientific diving. 2018: A group of diving medical experts issued a consensus guideline on pre-hospital decompression sickness management and concluded that in-water recompression is a valid and effective emergency treatment where a chamber is not available, but
is only appropriate in groups that have been trained and are competent in the skills required for IWR and have appropriate equipment. Society and culture Economics In the United States, it is common for medical insurance not to cover treatment for the bends that is the result of recreational diving. This is because scuba diving is considered an elective and "high-risk" activity and treatment for decompression sickness is expensive. A typical stay in a recompression chamber will easily cost several thousand dollars, even before emergency transportation is included. As a result, groups such as Divers Alert Network (DAN) offer medical insurance policies that specifically cover all aspects of treatment for decompression sickness at rates of less than $100 per year.In the United Kingdom, treatment of DCS is provided by the National Health Service. This may occur either at a specialised facility or at a hyperbaric centre based within a general hospital. Other animals Animals may also contract DCS, especially those caught in nets and rapidly brought to the surface. It has been documented in loggerhead turtles and likely in prehistoric marine animals as well. Modern reptiles are susceptible to DCS, and there is some evidence that marine mammals such as cetaceans and seals may also be affected. AW Carlsen has suggested that the presence of a right-left shunt in the reptilian heart may account for the predisposition in the same way as a patent foramen ovale does in humans. Footnotes See also Decompression (diving) – Adjusting to pressure changes in ascents Decompression illness – Disorders arising from ambient pressure reduction Decompression theory – Theoretical modelling of decompression physiology Diving disorders – Physiological disorders resulting from underwater diving Inner ear decompression sickness – Medical condition caused by inert gas bubbles forming out of solution Taravana – Decompression sickness after breath-hold diving Notes 1. ^a autochthonous: formed or originating in the place where found. References Bibliography External links Divers Alert Network: diving medicine articles Archived 8 December 2005 at the Wayback Machine Dive Tables from the NOAA CDC – Decompression Sickness and Tunnel Workers – NIOSH Workplace Safety and Health Topic Pathophysiology of decompression and acute dysbaric disorders "Decompression Sickness" on Medscape
Auditory hallucination
An auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without auditory stimulus. While experiencing an auditory hallucination, the affected person would hear a sound or sounds which did not come from the natural environment. A common form of auditory hallucination involves hearing one or more voices without a speaker present, known as an auditory verbal hallucination. This may be associated with psychotic disorders, most notably schizophrenia, and this phenomenon is often used to diagnose these conditions. However, individuals without any psychiatric disease whatsoever may hear voices, including those under the influence of mind-altering substances, such as cannabis, cocaine, amphetamines, and PCP. There are three main categories into which the hearing of talking voices often fall: a person hearing a voice speak ones thoughts, a person hearing one or more voices arguing, or a person hearing a voice narrating their own actions. These three categories do not account for all types of auditory hallucinations. Hallucinations of music also occur. In these, people more often hear snippets of songs that they know, or the music they hear may be original. They may occur in mentally sound people and with no known cause. Other types of auditory hallucinations include exploding head syndrome and musical ear syndrome. In the latter, people will hear music playing in their mind, usually songs they are familiar with. These hallucinations can be caused by: lesions on the brain stem (often resulting from a stroke), sleep disorders such as narcolepsy, tumors, encephalitis, or abscesses. This should be distinguished from the commonly experienced phenomenon of getting a song stuck in ones head. Reports have also mentioned that it is also possible to get musical hallucinations from listening to music for long periods of time. Other causes include hearing loss and epileptic activity.In the past, the cause of auditory hallucinations was attributed to cognitive suppression by way of executive function failure of the frontoparietal sulcus. Newer research has found that they coincide with the left superior temporal gyrus, suggesting that they are better attributed to speech misrepresentations. It is assumed through research that the neural pathways involved in normal speech perception and production, which are lateralized to the left temporal lobe, also underlie auditory hallucinations. Auditory hallucinations correspond with spontaneous neural activity of the left temporal lobe, and the subsequent primary auditory cortex. The perception of auditory hallucinations corresponds to the experience of actual external hearing, despite the absence of any sound itself. Causes In 2015 a small survey reported voice hearing in persons with a wide variety of DSM-5 diagnoses, including: Bipolar disorder Borderline personality disorder Depression (mixed) Dissociative identity disorder Generalized anxiety disorder Major depression Obsessive compulsive disorder Post-traumatic stress disorder Psychosis (NOS) Schizoaffective disorder Schizophrenia Substance-induced psychosisHowever, numerous persons surveyed reported no diagnosis. In his popular 2012 book Hallucinations, neurologist Oliver Sacks describes voice hearing in patients with a wide variety of medical conditions, as well as his own personal experience of hearing voices. Genetic correlations have been identified with auditory hallucinations, but most work with non-psychotic causes of auditory hallucinations is still ongoing. 
Schizophrenia In people with psychosis, the most common cause of auditory hallucinations is schizophrenia, and these are known as auditory verbal hallucinations (AVHs). In schizophrenia, people show a consistent increase in activity of the thalamic and striatal subcortical nuclei, hypothalamus, and paralimbic regions, as confirmed by PET and fMRI scans. Other research shows an enlargement of temporal white matter, frontal gray matter, and temporal gray matter volumes (those areas crucial to both inner and outer speech) when compared to control groups. This implies that functional and structural abnormalities in the brain, both of which may have a genetic component, can induce auditory hallucinations. Auditory verbal hallucinations attributed to an external source, rather than internal, are considered the defining factor for the diagnosis of schizophrenia. The voices heard are generally destructive and emotive, adding to the state of artificial reality and disorientation seen in psychotic patients. The causal basis of hallucinations has been explored on the cellular receptor level. The glutamate hypothesis, proposed as a possible cause for schizophrenia, may also have implications in auditory hallucinations, which are suspected to be triggered by altered glutamatergic transmission. Studies using dichotic listening methods suggest that people with schizophrenia have major deficits in the functioning of the left temporal lobe by showing that patients do not generally exhibit the functionally normal right-ear advantage. Inhibitory control of hallucinations in patients has been shown to involve failure of top-down regulation of resting-state networks and up-regulation of effort networks, further impeding normal cognitive functioning. Not all who experience hallucinations find them to be distressing. The relationship between an individual and their hallucinations is personal, and everyone interacts with their troubles in different ways. Some people hear solely malevolent voices, some solely benevolent voices, some hear a mix of the two, and some perceive the voices as malevolent or benevolent without believing them. Mood disorders and dementias Mood disorders such as bipolar disorder and major depression have also been known to correlate with auditory hallucinations, but these tend to be milder than their psychosis-induced counterparts. Auditory hallucinations are a relatively common sequela of major neurocognitive disorders (formerly dementia) such as Alzheimer's disease. Transient causes Auditory hallucinations have been known to manifest as a result of intense stress, sleep deprivation, and drug use. Auditory hallucinations can also occur in mentally healthy individuals during the altered state of consciousness while falling asleep (hypnagogic hallucinations) and waking up (hypnopompic hallucinations). High caffeine consumption has been linked to an increase in the likelihood of experiencing auditory hallucinations. A study conducted by the La Trobe University School of Psychological Sciences revealed that as few as five cups of coffee a day could trigger the phenomenon. Intoxication with psychoactive drugs such as PCP, amphetamines, cocaine, marijuana and other substances can produce hallucinations in general, especially in high doses. Withdrawal from certain drugs such as alcohol, sedatives, hypnotics, anxiolytics, and opioids can also produce hallucinations, including auditory.
Pathophysiology The following areas of the brain have been found to be active during auditory hallucinations, through the use of fMRI. Transverse temporal gyri (Heschl's gyri): found within the primary auditory cortex. Left temporal lobe: processes semantics in speech and vision, including primary auditory cortex. Broca's area: speech production and language processing. Superior temporal gyrus: contains primary auditory cortex. Primary auditory cortex: processes hearing and speech perception. Globus pallidus: regulation of voluntary movement. Treatments Medication The primary means of treating auditory hallucinations is antipsychotic medication, which affects dopamine metabolism. If the primary diagnosis is a mood disorder (with psychotic features), adjunctive medications are often used (e.g., antidepressants or mood stabilizers). These medical approaches may allow the person to function normally but are not a cure as they do not eradicate the underlying thought disorder. Therapy Cognitive behavioral therapy has been shown to help decrease the frequency and distressfulness of auditory hallucinations, particularly when other psychotic symptoms are present. Enhanced supportive therapy has been shown to reduce the frequency of auditory hallucinations, the violent resistance the patient displayed towards said hallucinations, and an overall decrease in the perceived malignancy of the hallucinations. Other cognitive and behavioural therapies have been used with mixed success. Another key to therapy is to help patients see that they do not need to obey the voices that they are hearing. It has been seen in patients with schizophrenia and auditory hallucinations that therapy might help confer insight into recognising and choosing not to obey the voices that they hear. Others Between 25% and 30% of schizophrenia patients do not respond to antipsychotic medication, which has led researchers to look for alternative treatments to help them. Two common methods to help are electroconvulsive therapy and repetitive transcranial magnetic stimulation (rTMS). Electroconvulsive therapy or ECT has been shown to reduce psychotic symptoms associated with schizophrenia, mania, and depression, and is often used in psychiatric hospitals. Transcranial magnetic stimulation, when used to treat auditory hallucinations in patients with schizophrenia, is applied at a low frequency of 1 hertz over the left temporoparietal cortex. History Ancient history Presentation In the ancient world, auditory hallucinations were often viewed as either a gift or a curse by God or the gods (depending on the specific culture). According to the Greek historian Plutarch, during the reign of Tiberius (A.D. 14–37), a sailor named Thamus heard a voice cry out to him from across the water, "Thamus, are you there? When you reach Palodes, take care to proclaim that the great god Pan is dead." The oracles of ancient Greece were known to experience auditory hallucinations while breathing in certain neurologically active vapors (such as the smoke from bay leaves), while the more pervasive delusions and symptomatology were often viewed as possession by demonic forces as punishment for misdeeds. Treatments Treatment in the ancient world is ill-documented, but there are some cases of therapeutics being used to attempt treatment, while the common treatment was sacrifice and prayer in an attempt to placate the gods. During the Middle Ages, those with auditory hallucinations were sometimes subjected to trepanning or trial as a witch.
In other cases of extreme symptomatology, individuals were seen as being reduced to animals by a curse; these individuals were either left on the streets or imprisoned in insane asylums. It was the latter response that eventually led to modern psychiatric hospitals. Pre-modern Presentation Auditory hallucinations were rethought during the Enlightenment. As a result, the predominant theory in the western world beginning in the late 18th century was that auditory hallucinations were the result of a disease in the brain (e.g., mania), and treated as such. Treatments There were no effective treatments for hallucinations at this time. Conventional thought was that clean food, water, and air would allow the body to heal itself (sanatorium). Beginning in the 16th century, insane asylums were first introduced in order to remove "the mad dogs" from the streets. These asylums acted as prisons until the late 18th century, when doctors began attempting to treat patients. Attending doctors would often douse patients in cold water, starve them, or spin them on a wheel. Soon, this gave way to brain-specific treatments, with the most famous examples including lobotomy, shock therapy, and branding the skull with a hot iron. Society and culture Notable cases Robert Schumann, a famous music composer, spent the end of his life experiencing auditory hallucinations. Schumann's diaries state that he suffered perpetually from imagining that he had the note A5 sounding in his ears. The musical hallucinations became increasingly complex. One night he claimed to have been visited by the ghost of Schubert and wrote down the music that he was hearing. Thereafter, he began making claims that he could hear an angelic choir singing to him. As his condition worsened, the angelic voices developed into demonic ones. Brian Wilson, songwriter and co-founder of the Beach Boys, has schizoaffective disorder that presents itself in the form of disembodied voices. They formed a major component of Bill Pohlad's Love & Mercy (2014), a biographical film which depicts Wilson's hallucinations as a source of musical inspiration, constructing songs that were partly designed to converse with them. Wilson has said of the voices: "Mostly [they're] derogatory. Some of it's cheerful. Most of it isn't." To combat them, his psychiatrist advised that he "talk humorously to them", which he says has helped "a little bit". The onset of delusional thinking is most often described as being gradual and insidious. Patients have described an interest in psychic phenomena progressing to increasingly unusual preoccupations and then to bizarre beliefs "in which I believed wholeheartedly". One author wrote of their hallucinations: "they deceive, derange and force me into a world of crippling paranoia". In many cases, the delusional beliefs could be seen as fairly rational explanations for abnormal experiences: "I increasingly heard voices (which I'd always call loud thoughts)... I concluded that other people were putting these loud thoughts into my head". Some cases have been described as an "auditory ransom note". Cultural effects According to research on hallucinations, both with participants from the general population and people diagnosed with schizophrenia, psychosis and related mental illnesses, there is a relationship between culture and hallucinations.
In relation to hallucinations, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) states that "transient hallucinatory experiences may occur without a mental disorder"; put differently, short or temporary hallucinations are not exclusive to being diagnosed with a mental disorder. In a study of 1,080 people with a schizophrenia diagnosis from seven countries of origin: Austria, Poland, Lithuania, Georgia, Pakistan, Nigeria and Ghana, researchers found that 74.8% of the total participants (n = 1,080) reported having experienced auditory hallucinations more than any other type of hallucination in the year preceding the interview. Further, the study found the highest rates of both auditory hallucinations and visual hallucinations in both of the West African countries, Ghana and Nigeria. In the Ghana sample, n = 76, auditory hallucinations were reported by 90.8% and visual hallucinations were reported by 53.9% of participants. In the Nigeria sample, n = 324, auditory hallucinations were reported by 85.4%, and visual hallucinations were reported by 50.8% of participants. These findings are in line with other studies that have found that visual hallucinations were reported more in traditional cultures. A study published in 2015, "Hearing Voices in Different Cultures: A Social Kindling Hypothesis", compared the experiences of three groups of 20 participants who met the criteria for schizophrenia (n = 60) from three places: San Mateo, California (USA), Accra, Ghana (Africa), and Chennai, India (South Asia). In this study, researchers found distinct differences among the participants' experiences with voices. In the San Mateo, CA sample, all but three of the participants referred to their experience of hearing voices with "diagnostic labels, and even [used] diagnostic criteria readily"; they also connected "hearing voices" with being "crazy". For the Accra, Ghana sample, almost no participants referenced a diagnosis and instead they spoke about voices as having "a spiritual meaning as well as a psychiatric one". In the Chennai, India sample, similarly to the Ghana interviewees, most of the participants did not reference a diagnosis, and for many of these participants the voices they heard were of people they knew and people they were related to, "voices of kin". Another key finding identified in this research study is that "voice-hearing experience outside the West may be less harsh". Finally, researchers found that "different cultural expectations about the mind, or about the way people expect thoughts and feelings to be private or accessible to spirits or persons" could account for the differences they found across the participants. In a qualitative study of 57 self-identified Māori participants subcategorized within one or more of the following groups: "tangata Māori (people seeking wellness/service users), Kaumatua/Kuia (elders), Kai mahi (cultural support workers), Managers of mental health services, clinicians (psychiatrists, nurses, and psychologists) and students (undergraduate and postgraduate psychology students)", researchers interviewed participants and asked them about "[1] their understanding of experiences that could be considered to be psychotic or labelled schizophrenic, [2] what questions they would ask someone who came seeking help and [3] they were asked about their understanding of the terms schizophrenia and psychosis".
The participants were also people who either had worked with psychosis or schizophrenia or had experienced psychosis or schizophrenia. In this study, researchers found that the participants understood these experiences labelled "psychotic" or "schizophrenic" through multiple models. Taken directly from the article, the researchers wrote that there is "no one Māori way of understanding psychotic experiences". Instead, as part of understanding these experiences, the participants combined both "biological explanations and Māori spiritual beliefs", with a preference for cultural and psychosocial explanations. For example, 19 participants spoke about psychotic experiences as sometimes being a sign of matakite (giftedness). One of the Kaumatua/Kuia (elders) was quoted as saying: "I never wanted to accept it, I said no it isn't, it isn't [matakite] but it wouldn't stop and in truth I knew what I had to do, help my people, I didn't want the responsibility but here I am. They helped me understand it and told me what to do with it." An important finding highlighted in this study is that studies done by the World Health Organization (WHO) have found that "developing countries (non-Western) experience far higher rates of recovery from schizophrenia than Western countries". The researchers further articulate that these findings may be due to culturally specific meaning created about the experience of schizophrenia, psychosis, and hearing voices, as well as "positive expectations around recovery". Research has found that auditory hallucinations, and hallucinations more broadly, are not necessarily a symptom of severe mental illness and instead might be more commonplace than assumed and also experienced by people in the general population. A literature review, "The prevalence of voice-hearers in the general population: A literature review", which compared 17 studies on auditory hallucinations in participants from nine countries, found that "differences in the prevalence of [voice-hearing in the adult general population] can be attributed to true variations based on gender, ethnicity and environmental context". The studies took place from 1894 to 2007 and the nine countries in which the studies took place were the United Kingdom, Philippines, United States, Sweden, France, Germany, Italy, Netherlands, and New Zealand. The same literature review highlighted that "studies that [analyzed] their data by gender report[ed] a higher frequency of women reporting hallucinatory experiences of some kind". Although, generally speaking, hallucinations (including auditory) are strongly related to psychotic diagnoses and schizophrenia, the presence of hallucinations does not by itself mean that someone is having a psychotic or schizophrenic episode or carries such a diagnosis. Audible thoughts General information Audible thoughts, also called thought sonorisation, are a kind of auditory verbal hallucination. People with this hallucination constantly hear a voice narrating their own thoughts out loud. This idea was first defined by Kurt Schneider, who included this symptom as one of the "first-rank symptoms" in diagnosing schizophrenia. Although the diagnostic reliability of "first-rank symptoms" has long been questioned, this idea remains important for its historical and descriptive value in psychiatry. Audible thoughts are a positive symptom of schizophrenia according to DSM-5; however, this hallucination is not found exclusively among people with schizophrenia, but also among patients with bipolar disorder in their manic phase.
Types Patients who experience audible thoughts will hear the voice repeating their own thoughts either as or after the thought comes into their minds. The first kind, in which the voice and the thought appear simultaneously, was named Gedankenlautwerden, a German word meaning "thoughts becoming loud", by the German psychiatrist August Cramer. Example of Gedankenlautwerden: A 35-year-old painter heard a quiet voice with an Oxford accent. The volume was slightly lower than that of normal conversation and could be heard equally well with either ear. The voice would say, "I can't stand that man, the way he holds his brush he looks like a poof." He immediately experienced whatever the voice was saying as his own thoughts, to the exclusion of all other thoughts. The second kind, in which the voice comes after the thought appears, is called écho de la pensée in French, namely thought echo. Example of thought echo: A 32-year-old housewife complained of a man's voice. The voice would repeat almost all of the patient's goal-directed thinking, even the most banal thoughts. The patient would think "I must put the kettle on", and after a pause of not more than one second the voice would say "I must put the kettle on". If categorized by patients' subjective feelings about where the voices come from, audible thoughts can be either external or internal. Patients reporting an internal origin of the hallucination claim that the voices are coming from somewhere inside their body, mainly in their own heads, while those reporting an external origin feel the voice as coming from the environment. The external origins vary in patients' descriptions: some hear the voice in front of their ears, while some attribute the voice to ambient surrounding noise, like running water or wind. This sometimes influences patients' behaviours, as they believe people around them can also hear these audible thoughts; they may therefore avoid social events and public places to prevent others from hearing their thoughts. In addition, studies suggest that the locus of the voice may change as the patient's hallucinations develop. There is a trend of internalization of external perceptions, which means that over time patients come to locate the source of their hallucination in internal subjectivity rather than in external objects. Phenomenological study According to the study conducted by Tony Nayani and Anthony David in 1996, about half of the patients (46%) with audible thoughts claimed that the hallucination had somehow taken the place of their conscience in making decisions and judgements. They tend to follow the voices' instructions when confronting dilemmas in their daily lives. The study also suggests that a majority of the patients, both male and female, labeled the sounds they heard as male voices. However, younger patients tend to hear younger voices, which suggests that the voices in the hallucination may share age with the patients but not gender. What's more, voices in the hallucination usually differ from the patients' own voices in accent; patients reported the voices they heard as coming from different regions or social classes than their own. Some patients may develop skills to control their hallucinations to a certain extent through some kind of cognitive focusing. They cannot eliminate the voices, but through cognitive focusing or suggestive behaviours (e.g. swallowing), they can control the onset and offset of their hallucination. Pathophysiology Studies have suggested that damage to specific brain areas may relate to the formation of audible thought.
Patients who attribute the hallucination to an external locus are more likely to report the voice coming from the right. This unilateral characteristic can be explained by either contralateral temporal lobe disease or ipsilateral ear disease. Researchers have also hypothesized that audible thoughts may result from damage in the right hemisphere, which causes the malfunction of prosodic construction. If this happens, the left hemisphere may misinterpret the patient's own thoughts as alien, leading the patient to perceive those thoughts as coming from another voice. Research A good amount of the research done has focused primarily on patients with schizophrenia, and beyond that on drug-resistant auditory hallucinations. Auditory verbal hallucinations as symptoms of disordered speech There is now substantial evidence that auditory verbal hallucinations (AVHs) in psychotic patients are manifestations of disorganized speech capacity at least as much as, and even more than, being genuinely auditory phenomena. Such evidence comes mainly from research carried out on the neuroimaging of AVHs, on so-called "inner" and "subvocal" speech, on "voices" experienced by deaf patients, and on the phenomenology of AVHs. Interestingly, this evidence is in line with clinical insights of the classical psychiatric school (de Clérambault) as well as of (Lacanian) psychoanalysis. According to the latter, the experience of the voice is linked more to speech as a chain of articulated signifying elements than to the sensorium itself. Non-psychotic symptomatology There is ongoing research that supports the prevalence of auditory hallucinations, with a lack of other conventional psychotic symptoms (such as delusions or paranoia), particularly in pre-pubertal children. These studies indicate that a remarkably high percentage of children (up to 14% of the population sampled) experienced sounds or voices without any external cause, although "sounds" are not considered by psychiatrists to be examples of auditory hallucinations. Differentiating actual auditory hallucinations from "sounds" or a normal internal dialogue is important since the latter phenomena are not indicative of mental illness. Methods To explore auditory hallucinations in schizophrenia, researchers use experimental neurocognitive approaches such as dichotic listening, structural MRI, and functional fMRI. Together, they allow insight into how the brain reacts to auditory stimuli, be they external or internal. Such methods have allowed researchers to find a correlation between decreased gray matter in the left temporal lobe and difficulties in processing external sound stimuli in hallucinating patients. Functional neuroimaging has shown increased blood and oxygen flow to speech-related areas of the left hemisphere, including Broca's area and the thalamus. Causes The causes of auditory hallucinations are unclear. It is suspected that deficits in the left temporal lobe that lead to spontaneous neural activity cause the speech misrepresentations that account for auditory hallucinations. Charles Fernyhough, of the University of Durham, poses one theory among many, but it stands as a reasonable example of the literature. Given standing evidence towards the involvement of the inner voice in auditory hallucinations, he proposes two alternative hypotheses on the origins of auditory hallucinations in the non-psychotic. Both rely on an understanding of the internalization process of the inner voice.
Internalization of the inner voice The internalization process of the inner voice is the process of creating an inner voice during early childhood and can be separated into four distinct levels.Level one (external dialogue) involves the capacity to maintain an external dialogue with another person, i.e. a toddler talking with their parent(s). Level two (private speech) involves the capacity to maintain a private external dialogue, as seen in children voicing the actions of play using dolls or other toys, or someone talking to themselves while repeating something they had written down. Level three (expanded inner speech) is the first internal level in speech. This involves the capacity to carry out internal monologues, as seen in reading to oneself or going over a list silently. Level four (condensed inner speech) is the final level in the internalization process. It involves the capacity to think in terms of pure meaning without the need to put thoughts into words in order to grasp the meaning of the thought. Disruption to internalization A disruption could occur during the normal process of internalizing ones inner voice, where the individual would not interpret their own voice as belonging to them; a problem that would be interpreted as a level one to level four error. Re-expansion Alternatively, the disruption could occur during the process of re-externalizing ones inner voice, resulting in an apparent second voice that seems alien to the individual; a problem that would be interpreted as a level four to level one error. Treatments Psychopharmacological treatments include antipsychotic medications. Meta-analyses show that cognitive behavioral therapy and metacognitive training also reduce the severity of hallucinations. Psychology research shows that the first step in treatment is for the patient to realize that the voices they hear are a creation of their own mind. This realization allows patients to reclaim a measure of control over their lives. See also Auditory imagery Earworm Hearing Voices Network Hypnagogic hallucinations Intrusive thought Microwave auditory effect Speech synthesis Tinnitus References Further reading Johnson FH (1978). The anatomy of hallucinations. Chicago: Nelson-Hall Co. ISBN 978-0-88229-155-0. Bentall RP, Slade PD (1988). Sensory deception: a scientific analysis of hallucination. London: Croom Helm. ISBN 978-0-7099-3961-0. Larøi F, Aleman A (2008). Hallucinations: The Science of Idiosyncratic Perception. American Psychological Association (APA). ISBN 978-1-4338-0311-6. Archived from the original on 2008-12-21. Retrieved 2009-10-27. External links "Anthropology and Hallucinations", chapter from The Making of Religion "The voice inside: A practical guide to coping with hearing voices" A salience dysregulation syndrome, from Jim van Os. The British Journal of Psychiatry, 2009.
Egg allergy
Egg allergy is an immune hypersensitivity to proteins found in chicken eggs, and possibly goose, duck, or turkey eggs. Symptoms can be either rapid or gradual in onset. The latter can take hours to days to appear. The former may include anaphylaxis, a potentially life-threatening condition which requires treatment with epinephrine. Other presentations may include atopic dermatitis or inflammation of the esophagus.In the United States, 90% of allergic responses to foods are caused by cows milk, eggs, wheat, shellfish, peanuts, tree nuts, fish, and soy beans. The declaration of the presence of trace amounts of allergens in foods is not mandatory in any country, with the exception of Brazil.Prevention is by avoiding eating eggs and foods that may contain eggs, such as cake or cookies. It is unclear if the early introduction of the eggs to the diet of babies aged 4–6 months decreases the risk of egg allergies.Egg allergy appears mainly in children but can persist into adulthood. In the United States, it is the second most common food allergy in children after cows milk. Most children outgrow egg allergy by the age of five, but some people remain allergic for a lifetime. In North America and Western Europe, egg allergy occurs in 0.5% to 2.5% of children under the age of five years. The majority grow out of it by school age, but for roughly one-third, the allergy persists into adulthood. Strong predictors for adult-persistence are anaphylaxis, high egg-specific serum immunoglobulin E (IgE), robust response to the skin prick test and absence of tolerance to egg-containing baked foods. Signs and symptoms Food allergies usually have an onset from minutes to one to two hours. Symptoms may include: rash, hives, itching of mouth, lips, tongue, throat, eyes, skin, or other areas, swelling of lips, tongue, eyelids, or the whole face, difficulty swallowing, runny or congested nose, hoarse voice, wheezing, shortness of breath, diarrhea, abdominal pain, lightheadedness, fainting, nausea, or vomiting. Symptoms of allergies vary from person to person and may vary from incident to incident. Serious danger regarding allergies can begin when the respiratory tract or blood circulation is affected. The former can be indicated by wheezing, a blocked airway and cyanosis, the latter by weak pulse, pale skin, and fainting. When these symptoms occur the allergic reaction is called anaphylaxis. Anaphylaxis occurs when IgE antibodies are involved, and areas of the body that are not in direct contact with the food become affected and show severe symptoms. Untreated, this can proceed to vasodilation and a low blood pressure situation called anaphylactic shock.Young children may exhibit dermatitis/eczema on face, scalp and other parts of the body, in older children knees and elbows are more commonly affected. Children with dermatitis are at greater than expected risk of also exhibiting asthma and allergic rhinitis. Causes Eating egg The cause is typically the eating of eggs or foods that contain eggs. Briefly, the immune system over-reacts to proteins found in eggs. This allergic reaction may be triggered by small amounts of egg, even egg incorporated into cooked foods, such as cake. People with an allergy to chicken eggs may also be reactive to goose, duck, or turkey eggs. Vaccines Influenza vaccines are created by injecting a live virus into fertilized chicken eggs. The viruses are harvested, killed and purified, but a residual amount of egg white protein remains. 
For adults ages 18 and older there is an option to receive recombinant flu vaccines (RIV3 or RIV4), which are grown in mammalian cell cultures instead of in eggs, and so pose no risk to people with severe egg allergy. Recommendations are that people with a history of mild egg allergy should receive any IIV or RIV vaccine. People with a more severe allergic reaction may also receive any IIV or RIV, but in an inpatient or outpatient medical setting, administered by a healthcare provider. People with a known severe allergic reaction to influenza vaccine (which could be to the egg protein or to the gelatin or neomycin components of the vaccine) should not receive a flu vaccine. Each year the American Academy of Pediatrics (AAP) publishes recommendations for prevention and control of influenza in children. In the 2016–2017 guidelines a change was made, so that children with a history of egg allergy may receive the IIV3 or IIV4 vaccine without special precautions. It did, however, state that "Standard vaccination practice should include the ability to respond to acute hypersensitivity reactions." Prior to this, the AAP recommended precautions based on egg allergy history: if no history, immunize; if a history of mild reaction, i.e., hives, immunize in a medical setting with healthcare professionals and resuscitative equipment available; if a history of severe reactions, refer to an allergist. The measles and mumps parts of the "MMR vaccine" (for measles, mumps, and rubella) are cultured on chick embryo cell culture and contain trace amounts of egg protein. The amount of egg protein is lower than in influenza vaccines and the risk of an allergic reaction is much lower. One guideline stated that all infants and children should get the two MMR vaccinations, mentioning that "Studies on large numbers of egg-allergic children show there is no increased risk of severe allergic reactions to the vaccines." Another guideline recommended that if a child has a known medical history of severe anaphylactic reaction to eggs, then the vaccination should be done in a hospital center, and the child kept for observation for 60 minutes before being allowed to leave. The second guideline also stated that if there was a severe reaction to the first vaccination - which could have been to egg protein or to the gelatin and neomycin components of the vaccine - the second is contraindicated. Exercise as a contributing factor There is a condition called food-dependent, exercise-induced anaphylaxis (FDEIAn). Exercise can trigger hives and more severe symptoms of an allergic reaction. For some people with this condition, exercise alone is not sufficient, nor is consumption of a food to which they are mildly allergic sufficient on its own, but when the food in question is consumed within a few hours before high-intensity exercise, the result can be anaphylaxis. Eggs are specifically mentioned as a causative food. One theory is that exercise stimulates the release of mediators such as histamine from IgE-activated mast cells. Two of the reviews postulate that exercise is not essential for the development of symptoms, but rather that it is one of several augmentation factors, citing evidence that the culprit food in combination with alcohol or aspirin will result in a respiratory anaphylactic reaction.
Mechanisms Conditions caused by food allergies are classified into three groups according to the mechanism of the allergic response: IgE-mediated (classic) – the most common type, manifesting acute changes that occur shortly after eating, and may progress to anaphylaxis Non-IgE mediated – characterized by an immune response not involving immunoglobulin E; may occur hours to days after eating, complicating diagnosis IgE and non-IgE-mediated – a hybrid of the above two types. Allergic reactions are hyperactive responses of the immune system to generally innocuous substances, such as proteins in the foods we eat. Why some proteins trigger allergic reactions while others do not is not entirely clear, although it is in part thought to be due to resistance to digestion. Because of this, intact or largely intact proteins reach the small intestine, which has a large presence of white blood cells involved in immune reactions. The heat of cooking structurally degrades protein molecules, potentially making them less allergenic. The pathophysiology of allergic responses can be divided into two phases. The first is an acute response that occurs within minutes to an hour or two of exposure to an allergen. This phase can either subside or progress into a "late-phase reaction" which can substantially prolong the symptoms of a response, and result in more tissue damage. In the early stages of acute allergic reaction, lymphocytes previously sensitized to a specific protein or protein fraction react by quickly producing a particular type of antibody known as secreted IgE (sIgE), which circulates in the blood and binds to IgE-specific receptors on the surface of other kinds of immune cells called mast cells and basophils. Both of these are involved in the acute inflammatory response. Activated mast cells and basophils undergo a process called degranulation, during which they release histamine and other inflammatory chemical mediators (cytokines, interleukins, leukotrienes, and prostaglandins) into the surrounding tissue, causing several systemic effects, such as vasodilation, mucus secretion, nerve stimulation, and smooth-muscle contraction. This results in runny nose, itchiness, shortness of breath, and potentially anaphylaxis. Depending on the individual, the allergen, and the mode of introduction, the symptoms can be system-wide (classical anaphylaxis), or localized to particular body systems; asthma is localized to the respiratory system while eczema is localized to the skin. After the chemical mediators of the acute response subside, late-phase responses can often occur due to the migration of other white blood cells such as neutrophils, lymphocytes, eosinophils, and macrophages to the initial reaction sites. This is usually seen 2–24 hours after the original reaction. Cytokines from mast cells may also play a role in the persistence of long-term effects. Late-phase responses seen in asthma are slightly different from those seen in other allergic responses, although they are still caused by release of mediators from eosinophils. Five major allergenic proteins from the egg of the domestic chicken (Gallus domesticus) have been identified; these are designated Gal d 1–5. Four of these are in egg white: ovomucoid (Gal d 1), ovalbumin (Gal d 2), ovotransferrin (Gal d 3) and lysozyme (Gal d 4). Of these, ovomucoid is the dominant allergen, and one that is less likely to be outgrown as children get older. Ingestion of under-cooked egg may trigger more severe clinical reactions than well-cooked egg.
In egg yolk, alpha-livetin (Gal d 5) is the major allergen, but various vitellins may also trigger a reaction. People allergic to alpha-livetin may experience respiratory symptoms such as rhinitis and/or asthma when exposed to chickens, because the yolk protein is also found in live birds. In addition to IgE-mediated responses, egg allergy can manifest as atopic dermatitis, especially in infants and young children. Some will display both, so that a child could react to an oral food challenge with allergic symptoms, followed a day or two later with a flare up of atopic dermatitis and/or gastrointestinal symptoms, including allergic eosinophilic esophagitis. Non-allergic intolerance Egg whites, which are potentially histamine liberators, also provoke a nonallergic response in some people. In this situation, proteins in egg white directly trigger the release of histamine from mast cells. Because this mechanism is classified as a pharmacological reaction, or "pseudoallergy", the condition is considered a food intolerance instead of a true immunoglobulin E (IgE) based allergic reaction. The response is usually localized, typically in the gastrointestinal tract. Symptoms may include abdominal pain, diarrhea, or any other symptoms typical to histamine release. If sufficiently strong, it can result in an anaphylactoid reaction, which is clinically indistinguishable from true anaphylaxis. Some people with this condition tolerate small quantities of egg whites. They are more often able to tolerate well-cooked eggs, such as found in cake or dried egg-based pasta, than incompletely cooked eggs, such as fried eggs or meringues, or uncooked eggs. Diagnosis Diagnosis of egg allergy is based on the persons history of allergic reactions, skin prick test (SPT), patch test and measurement of egg-specific serum immunoglobulin E (IgE or sIgE). Confirmation is by double-blind, placebo-controlled food challenges. SPT and sIgE have sensitivity greater than 90% but specificity in the 50-60% range, meaning these tests will detect an egg sensitivity, but will also be positive for other allergens. For young children, attempts have been made to identify SPT and sIgE responses strong enough to avoid the need for a confirming oral food challenge. Prevention When eggs are introduced to a babys diet is thought to affect risk of developing allergy, but there are contradictory recommendations. A 2016 review acknowledged that introducing peanuts early appears to have a benefit, but stated "The effect of early introduction of egg on egg allergy are controversial." A meta-analysis published the same year supported the theory that early introduction of eggs into an infants diet lowers risk, and a review of allergens in general stated that introducing solid foods at 4–6 months may result in the lowest subsequent allergy risk. However, an older consensus document from the American College of Allergy, Asthma and Immunology recommended that introduction of chicken eggs be delayed to 24 months of age. Treatment The mainstay of treatment is total avoidance of egg protein intake. This is complicated because the declaration of the presence of trace amounts of allergens in foods is not mandatory (see regulation of labelling). Treatment for accidental ingestion of egg products by allergic individuals varies depending on the sensitivity of the person. An antihistamine such as diphenhydramine (Benadryl) may be prescribed. Sometimes prednisone will be prescribed to prevent a possible late phase Type I hypersensitivity reaction. 
Severe allergic reactions (anaphylaxis) may require treatment with an epinephrine pen, an injection device designed to be used by a non-healthcare professional when emergency treatment is warranted. Immunotherapy There is active research on trying oral immunotherapy (OIT) to desensitize people to egg allergens. A Cochrane Review concluded that OIT can desensitize people, but it remains unclear whether long-term tolerance develops after treatment ceases, and 69% of the people enrolled in the trials had adverse effects. They concluded there was a need for standardized protocols and guidelines prior to incorporating OIT into clinical practice. A second review noted that allergic reactions, up to anaphylaxis, can occur during OIT, and recommends this treatment not be routine medical practice. A third review limited its scope to trials of baked egg-containing goods such as bread or cake as a means of resolving egg allergy. Again, there were some successes, but also some severe allergic reactions, and the authors came down on the side of not recommending this as a treatment. Avoiding eggs Prevention of egg-allergic reactions means avoiding eggs and egg-containing foods. People with an allergy to chicken eggs may also be allergic to other types of eggs, such as goose, duck, or turkey eggs. In cooking, eggs are multifunctional: they may act as an emulsifier to reduce oil/water separation (mayonnaise), a binder (water binding and particle adhesion, as in meatloaf), or an aerator (cakes, especially angel food). Some commercial egg substitutes can fulfill particular functions (potato starch and tapioca for water binding, whey protein or bean water for aeration or particle binding, or soy lecithin or avocado for emulsification). Food companies produce egg-free mayonnaise and other replacement foods. Alfred Bird invented egg-free Bird's Custard, the original version of what is known generically as custard powder today. Most people find it necessary to strictly avoid any item containing eggs. Ingredients that sometimes include egg protein include artificial flavoring, natural flavoring, lecithin and nougat candy. Probiotic products have been tested, and some found to contain milk and egg proteins which were not always indicated on the labels. Prognosis The majority of children outgrow egg allergy. One review reported that 70% of children will outgrow this allergy by age 16. In subsequently published longitudinal studies, one reported that for 140 infants who had challenge-confirmed egg allergy, 44% had resolved by two years. A second reported that for 203 infants with confirmed IgE-mediated egg allergy, 45% resolved by two years of age, 66% by four years, and 71% by six years. Children will be able to tolerate eggs as an ingredient in baked goods and well-cooked eggs sooner than under-cooked eggs. Resolution was more likely if baseline serum IgE was lower, and if the baseline symptoms did not include anaphylaxis. Epidemiology In countries in North America and western Europe, where use of cow's milk-based infant formula is common, chicken egg allergy is the second most common food allergy in infants and young children after cow's milk. However, in Japan, egg allergy is first and cow's milk second, followed by wheat and then the other common allergenic foods. A review from South Africa reported egg and peanut as the two most common allergenic foods. Incidence and prevalence are terms commonly used in describing disease epidemiology. 
Incidence is newly diagnosed cases, which can be expressed as new cases per year per million people. Prevalence is the number of people currently affected, expressible as existing cases per million people during a period of time. Egg allergies are usually observed in infants and young children, and often disappear with age (see Prognosis), so prevalence of egg allergy may be expressed as a percentage of children under a set age. One review estimates that in North American and western European populations the prevalence of egg allergy in children under the age of five years is 1.8–2.0%. A second described the range in young children as 0.5–2.5%. Although the majority of children develop tolerance as they age into school-age years, for roughly one-third the allergy persists into adulthood. Strong predictors for adult-persistent allergy are anaphylactic symptoms as a child, high egg-specific serum IgE, robust response to the skin prick test and absence of tolerance to egg-containing baked foods. Self-reported allergy prevalence is always higher than food-challenge confirmed allergy. For all age groups, a review of fifty studies conducted in Europe estimated 2.5% for self-reported egg allergy and 0.2% for confirmed. National survey data in the United States collected in 2005 and 2006 showed that for ages six and older, the prevalence of serum IgE confirmed egg allergy was under 0.2%. Adult onset of egg allergy is rare, but confirmed cases exist. Some were described as having started in late teenage years; another group were workers in the baking industry who were exposed to powdered egg dust. Regulation Whether food allergy prevalence is increasing or not, food allergy awareness has definitely increased, with impacts on the quality of life for children, their parents and their immediate caregivers. In the United States, the Food Allergen Labeling and Consumer Protection Act of 2004 (FALCPA) causes people to be reminded of allergy problems every time they handle a food package, and restaurants have added allergen warnings to menus. The Culinary Institute of America, a premier school for chef training, has courses in allergen-free cooking and a separate teaching kitchen. School systems have protocols about what foods can be brought into the school. Despite all these precautions, people with serious allergies are aware that accidental exposure can easily occur at other people's houses, at school or in restaurants. Regulation of labelling In response to the risk that certain foods pose to those with food allergies, some countries have instituted labeling laws that require food products to clearly inform consumers if their products contain major allergens or byproducts of major allergens among the ingredients intentionally added to foods. Nevertheless, there are no labeling laws that mandate declaration of the presence of trace amounts in the final product as a consequence of cross-contamination, except in Brazil. Ingredients intentionally added FALCPA became effective 1 January 2006, requiring companies selling foods in the United States to disclose on labels whether a packaged food product contains any of these eight major food allergens, added intentionally: cow's milk, peanuts, eggs, shellfish, fish, tree nuts, soy and wheat. This list originated in 1999 from the World Health Organisation Codex Alimentarius Commission. 
To meet FALCPA labeling requirements, if an ingredient is derived from one of the required-label allergens, then it must either have its "food sourced name" in parentheses, for example "Casein (milk)," or as an alternative, there must be a statement separate but adjacent to the ingredients list: "Contains milk" (and any other of the allergens with mandatory labeling). FALCPA applies to packaged foods regulated by the FDA, which does not include poultry, most meats, certain egg products, and most alcoholic beverages. However, some meat, poultry, and egg processed products may contain allergenic ingredients. These products are regulated by the Food Safety and Inspection Service (FSIS), which requires that any ingredient be declared in the labeling only by its common or usual name. Neither the identification of the source of a specific ingredient in a parenthetical statement nor the use of statements to alert for the presence of specific ingredients, like "Contains: milk", is mandatory according to FSIS. FALCPA also does not apply to food prepared in restaurants. The EU Food Information for Consumers Regulation 1169/2011 requires food businesses to provide allergy information on food sold unpackaged, for example, in catering outlets, deli counters, bakeries and sandwich bars. Trace amounts as a result of cross-contamination The value of allergen labeling other than for intentional ingredients is controversial. This concerns labeling for ingredients present unintentionally as a consequence of cross-contact or cross-contamination at any point along the food chain (during raw material transportation, storage or handling, due to shared equipment for processing and packaging, etc.). Experts in this field propose that if allergen labeling is to be useful to consumers, and healthcare professionals who advise and treat those consumers, ideally there should be agreement on which foods require labeling, threshold quantities below which labeling may be of no purpose, and validation of allergen detection methods to test and potentially recall foods that were deliberately or inadvertently contaminated. Labeling regulations have been modified to provide for mandatory labeling of ingredients plus voluntary labeling, termed precautionary allergen labeling (PAL), also known as "may contain" statements, for possible, inadvertent, trace-amount cross-contamination during production. PAL labeling can be confusing to consumers, especially as there can be many variations on the wording of the warning. As of 2014 PAL is regulated only in Switzerland, Japan, Argentina, and South Africa. Argentina has prohibited precautionary allergen labeling since 2010, and instead puts the onus on the manufacturer to control the manufacturing process and label only those allergenic ingredients known to be in the products. South Africa does not permit the use of PAL, except when manufacturers demonstrate the potential presence of allergen due to cross-contamination through a documented risk assessment and despite adherence to Good Manufacturing Practice. In Australia and New Zealand there is a recommendation that PAL be replaced by guidance from VITAL 2.0 (Voluntary Incidental Trace Allergen Labelling). A review identified "the eliciting dose for an allergic reaction in 1% of the population" as ED01. 
This threshold reference dose for foods (such as cow's milk, egg, peanut and other proteins) will provide food manufacturers with guidance for developing precautionary labeling and give consumers a better idea of what might accidentally be in a food product beyond "may contain." VITAL 2.0 was developed by the Allergen Bureau, a food industry-sponsored, non-governmental organization. The European Union has initiated a process to create labeling regulations for unintentional contamination but is not expected to publish them before 2024. In Brazil, since April 2016, the declaration of the possibility of cross-contamination is mandatory when the product does not intentionally add any allergenic food or its derivatives but the Good Manufacturing Practices and allergen control measures adopted are not sufficient to prevent the presence of accidental trace amounts. These allergens include wheat, rye, barley, oats and their hybrids, crustaceans, eggs, fish, peanuts, soybean, milk of all mammalian species, almonds, hazelnuts, cashew nuts, Brazil nuts, macadamia nuts, walnuts, pecan nuts, pistachios, pine nuts, and chestnuts. Society and culture Food fear has a significant impact on quality of life. For children with allergies, their quality of life is also affected by actions of their peers. There is an increased occurrence of bullying, which can include threats or acts of deliberately touching them with foods they need to avoid, as well as deliberately contaminating their allergen-free food. See also List of allergens (food and non-food) References External links
Dermatopolymyositis
Dermatopolymyositis is a family of myositis disorders that includes polymyositis and dermatomyositis. As such, it includes both a distinctive skin rash and progressive muscular weakness. It is a rare disease. References External links
Gapeworm
A gapeworm (Syngamus trachea), also known as a red worm and forked worm, is a parasitic nematode worm that infects the tracheas of certain birds. The resulting disease, known as "gape", occurs when the worms clog and obstruct the airway. The worms are also known as "red worms" or "forked worms" due to their red color and the permanent procreative conjunction of males and females. Gapeworms are common in young, domesticated chickens and turkeys. When the female gapeworm lays her eggs in the trachea of an infected bird, the eggs are coughed up, swallowed, then defecated. Birds are infected with the parasite when they consume the eggs found in the feces, or by consuming a transport host such as earthworms or snails (Planorbarius corneus, Bithynia tentaculata and others). Morphology Males and females are joined together in a state of permanent copulation, forming a Y shape (forked worms). They are also known as the red worms because of their color. Females (up to 20 mm long) are much longer than males (up to 6 mm long). The life cycle of the gapeworm is peculiar in that transmission from bird to bird may be successfully accomplished either directly (by ingesting embryonated eggs or infective larvae) or indirectly (by ingestion of earthworms containing free or encysted gapeworm larvae they had obtained by feeding on contaminated soil). Life cycle and pathogenesis In the preparasitic phase, third stage infective larvae (L3) develop inside the eggs, at which time they may hatch. Earthworms serve as transport (paratenic) hosts. Larvae have been shown to remain viable for more than three years encapsulated in earthworm muscles. Other invertebrates may also serve as paratenic hosts, including terrestrial snails and slugs. The parasitic phase involves substantial migration in the definitive host to reach the predilection site. Young birds are most severely affected, with migration of larvae and adults through the lungs causing severe pneumonia. Lymphoid nodules form at the point of attachment of the worms in the bronchi and trachea. Adult worms also appear to feed on blood. Worms in the bronchi and trachea provoke a hemorrhagic tracheitis and bronchitis, forming large quantities of mucus, plugging the air passages and, in severe cases, causing asphyxiation. Pheasants appear to be particularly susceptible to infection, with mortality rates as high as 25% during outbreaks. The rapidly growing worms soon obstruct the lumen of the trachea, causing suffocation. Turkey poults, baby chicks and pheasant chicks are most susceptible to infection. Turkey poults usually develop gapeworm signs earlier and begin to die sooner after infection than young chickens. Lesions are usually found in the trachea of turkeys and pheasants but seldom if ever in the tracheas of young chickens and guinea fowl. The male worm remains permanently attached to the tracheal wall, where lesions form, throughout the duration of its life. The female worms apparently detach and reattach from time to time in order to obtain a more abundant supply of food. Epidemiology Earthworm transport hosts are important factors in the transmission of Syngamus trachea when poultry and game birds are reared on soil. The longevity of L3s in earthworms (up to 3 years) is particularly important in perpetuating the infection from year to year. Wild birds may serve as reservoirs of infection and have been implicated as the sources of infections in outbreaks on game-bird farms as well as poultry farms. 
Wild reservoir hosts may include pheasants, ruffed grouse, partridges, wild turkeys, magpies, meadowlarks, American robins, grackles, jays, jackdaws, rooks, starlings and crows. There is also evidence to suggest that strains of Syngamus trachea from wild bird reservoir hosts may be less effective in domestic birds; if they have an earthworm transport host rather than direct infections via ingestion of L3s, or eggs containing L3s. Clinical signs Blockage of the bronchi and trachea with worms and mucus will cause infected birds to gasp for air. They stretch out their necks, open their mouths and gasp for air producing a hissing noise as they do so. This "gaping" posture has given rise to the common term "gapeworm" to describe Syngamus trachea. These clinical signs first appear approximately 1–2 weeks after infection. Birds infected with gapeworms show signs of weakness and emaciation, usually spending much of their time with eyes closed and head drawn back against the body. An infected bird may give its head a convulsive shake in an attempt to remove the obstruction from the trachea so that normal breathing may be resumed. Severely affected birds, particularly young ones, will deteriorate rapidly; they stop drinking and become anorexic. At this stage, death is the usual outcome. Adult birds are usually less severely affected and may only show an occasional cough or even no obvious clinical signs. Diagnosis A diagnosis is usually made on the basis of the classical clinical signs of "gaping". Subclinical infections with few worms may be confirmed at necropsy by finding copulating worms in the trachea and also by finding the characteristic eggs in the feces of infected birds. Examination of the tracheas of infected birds shows that the mucous membrane is extensively irritated and inflamed. Coughing is apparently the result of this irritation to the mucous lining. Control and treatment Prevention In the artificial rearing of pheasants, gapes are a serious menace. Confinement rearing of young birds has reduced the problem in chickens compared to a few years ago. However, this parasite continues to present an occasional problem with turkeys raised on range. Confinement rearing of broilers/pullets and caging of laying hens, have significantly influenced the quantity and variety of nematode infections in poultry. For most nematodes, control measures consist of sanitation and breaking the life cycle rather than chemotherapy. Confinement rearing on litter largely prevents infections with nematodes using intermediate hosts such as earthworms or grasshoppers, which are not normally found in poultry houses. Conversely, nematodes with direct life cycles or those that utilize intermediate hosts such as beetles, which are common in poultry houses, may prosper. Treatment of the soil or litter to kill intermediate hosts may be beneficial. Insecticides suitable for litter treatment include carbaryl, tetrachlorvinphos (stirofos). However, treatment is usually done only between grow-outs. Extreme care should be taken to ensure that feed and water are not contaminated. Treatment of range soil to kill ova is only partially successful. Changing litter can reduce infections, but treating floors with oil is not very effective. Raising different species or different ages of birds together or in close proximity is a dangerous procedure as regards parasitism. Adult turkeys, which are carriers of gapeworms, can transmit the disease to young chicks or pheasants, although older chickens are almost resistant to infection. 
Treatment Flubendazole (Flubenvet) is the only licensed anthelmintic for use in poultry and game birds. Continuous medication of pen-reared birds has been recommended, but is not economical and increases the possibility of drug resistance. Several other compounds have been shown effective against S. trachea under experimental conditions. Methyl 5-benzoyl-2-benzimidazole was 100% efficacious when fed prophylactically to turkey poults. 5-Isopropoxycarbonylamino-2-(4-thiazolyl)-benzimidazole was found to be more efficacious than thiabendazole or disophenol. The level of control with three treatments of cambendazole on days 3–4, 6–7, and 16–17 post-infection was 94.9% in chickens and 99.1% in turkeys. Levamisole (Ergamisol), fed at a level of 0.04% for 2 days or 2 g/gal drinking water for 1 day each month, has proven effective in game birds. Fenbendazole (Panacur) at 20 mg/kg for 3–4 days is also effective. Ivermectin injections may be effective in treating resistant strains. Sources https://web.archive.org/web/20100703145151/http://cal.vet.upenn.edu/projects/merial/Strongls/strong_4.htm https://web.archive.org/web/20110717194656/http://www.vetsweb.com/diseases/syngamus-trachea-d75.html#effects
Eclampsia
Eclampsia is the onset of seizures (convulsions) in a woman with pre-eclampsia. Pre-eclampsia is one of the hypertensive disorders of pregnancy that presents with three main features: new onset of high blood pressure, large amounts of protein in the urine or other organ dysfunction, and edema. The diagnostic criterion for pre-eclampsia is high blood pressure occurring after 20 weeks gestation or during the second half of pregnancy. Most often it occurs during the 3rd trimester of pregnancy and may occur before, during, or after delivery. The seizures are of the tonic–clonic type and typically last about a minute. Following the seizure, there is either a period of confusion or coma. Other complications include aspiration pneumonia, cerebral hemorrhage, kidney failure, pulmonary edema, HELLP syndrome, coagulopathy, placental abruption and cardiac arrest. Low-dose aspirin is recommended to prevent pre-eclampsia and eclampsia in those at high risk. Other preventive recommendations include calcium supplementation in areas with low calcium intake and treatment of prior hypertension with anti-hypertensive medications. Exercise during pregnancy may also be useful. The use of intravenous or intramuscular magnesium sulfate improves outcomes in those with severe pre-eclampsia and eclampsia and is generally safe. Treatment options include blood pressure medications such as hydralazine and emergency delivery of the baby either vaginally or by cesarean section. Pre-eclampsia is estimated to globally affect about 5% of deliveries while eclampsia affects about 1.4% of deliveries. In the developed world eclampsia rates are about 1 in 2,000 deliveries due to improved medical care whereas in developing countries it can impact 10–30 times as many women. Hypertensive disorders of pregnancy are one of the most common causes of death in pregnancy. They resulted in 46,900 deaths in 2015. Around one percent of women with eclampsia die. The word eclampsia is from the Greek term for lightning. The first known description of the condition was by Hippocrates in the 5th century BC. Signs and symptoms Eclampsia is a disorder of pregnancy characterized by seizures in the setting of pre-eclampsia. Most women have premonitory signs/symptoms in the hours before the initial seizure. Typically the woman develops hypertension before the onset of a convulsion (seizure). Other signs and symptoms to look out for include: Long-lasting (persistent) frontal or occipital headaches or thunderclap headaches Visual disturbance (blurred vision, photophobia, diplopia) Photophobia (i.e. bright light causes discomfort) Abdominal pain Either in the epigastric region (the center of the abdomen above the navel, or belly-button) And/or in the right upper quadrant of the abdomen (below the right side of the rib cage) Altered mental status (confusion). Any of these symptoms may be present before or after the seizure. It is also possible for the woman to be asymptomatic prior to the onset of the seizure. Other cerebral signs that may precede the convulsion include nausea, vomiting, headaches, and cortical blindness. If the complication of multi-organ failure ensues, signs and symptoms of those failing organs will appear, such as abdominal pain, jaundice, shortness of breath, and diminished urine output. Onset The seizures of eclampsia typically present during pregnancy and prior to delivery (the antepartum period), but may also occur during labor and delivery (the intrapartum period) or after the baby has been delivered (the postpartum period). 
If postpartum seizures develop, it is most likely to occur within the first 48 hours after delivery. However, late postpartum seizures of eclampsia may occur as late as 4 weeks after delivery. Characteristics Eclamptic seizure is typically described as a tonic–clonic seizure which may cause an abrupt loss of consciousness at onset. This is often associated with a shriek or scream followed by stiffness of the muscles of the arms, legs, back and chest. During the tonic phase, the mother may begin to appear cyanotic. This presentation lasts for about a minute, after which the muscles begin in jerk and twitch for an additional one to two minutes. Other signs include tongue biting, frothy and bloody sputum coming out of the mouth. Complications There are risks to both the mother and the fetus when eclampsia occurs. The fetus may grow more slowly than normal within the womb (uterus) of a woman with eclampsia, which is termed intrauterine growth restriction and may result in the child appearing small for gestational age or being born with low birth weight. Eclampsia may also cause problems with the placenta. The placenta may bleed (hemorrhage) or begin to separate early from the wall of the uterus. It is normal for the placenta to separate from the uterine wall during delivery, but it is abnormal for it to separate prior to delivery; this condition is called placental abruption and can be dangerous for the fetus. Placental insufficiency may also occur, a state in which the placenta fails to support appropriate fetal development because it cannot deliver the necessary amount of oxygen or nutrients to the fetus. During an eclamptic seizure, the beating of the fetal heart may become slower than normal (bradycardia). If any of these complications occurs, fetal distress may develop. Treatment of the mothers seizures may also manage fetal bradycardia. If the risk to the health of the fetus or the mother is high, the definitive treatment for eclampsia is delivery of the baby. Delivery by cesarean section may be necessary, especially if the instance of fetal bradycardia does not resolve after 10 to 15 minutes of resuscitative interventions. It may be safer to deliver the infant preterm than to wait for the full 40 weeks of fetal development to finish, and as a result prematurity is also a potential complication of eclampsia.In the mother, changes in vision may occur as a result of eclampsia, and these changes may include blurry vision, one-sided blindness (either temporary due to amaurosis fugax or potentially permanent due to retinal detachment), or cortical blindness, which affects the vision from both eyes. There are also potential complications in the lungs. The woman may have fluid slowly collecting in the lungs in a process known as pulmonary edema. During an eclamptic seizure, it is possible for a person to vomit the contents of the stomach and to inhale some of this material in a process known as aspiration. If aspiration occurs, the woman may experience difficulty breathing immediately or could develop an infection in the lungs later, called aspiration pneumonia. It is also possible that during a seizure breathing will stop temporarily or become inefficient, and the amount of oxygen reaching the womans body and brain will be decreased (in a state known as hypoxia). If it becomes difficult for the woman to breathe, she may need to have her breathing temporarily supported by an assistive device in a process called mechanical ventilation. 
In some severe eclampsia cases, the mother may become weak and sluggish (lethargy) or even comatose. These may be signs that the brain is swelling (cerebral edema) or bleeding (intracerebral hemorrhage). Risk factors Eclampsia, like pre-eclampsia, tends to occur more commonly in first pregnancies than subsequent pregnancies. Women who have long term high blood pressure before becoming pregnant have a greater risk of pre-eclampsia. Patients who have gestational hypertension and pre-eclampsia have an increased risk of eclampsia. Furthermore, women with other pre-existing vascular diseases (diabetes or nephropathy) or thrombophilia disease such as the antiphospholipid syndrome are at higher risk to develop pre-eclampsia and eclampsia. Having a placenta that is enlarged by multiple gestation or hydatidiform mole also increases risk of eclampsia. In addition, there is a genetic component: a woman whose mother or sister had the condition is at higher risk than otherwise. Patients who have experienced eclampsia are at increased risk for pre-eclampsia/eclampsia in a later pregnancy. The occurrence of pre-eclampsia was 5% in white, 9% in Hispanic, and 11% in African American patients and this may reflect disproportionate risk of developing pre-eclampsia among ethnic groups. Additionally, black patients were also shown to have a disproportionately higher risk of dying from eclampsia. Mechanism The mechanisms of eclampsia and preeclampsia are not definitively understood, but following provides some insight. The presence of a placenta is required, and eclampsia resolves if it is removed. Reduced blood flow to the placenta (placental hypoperfusion) may be a key feature of the process. It is typically accompanied by increased sensitivity of the maternal vasculature to agents which cause constriction of the small arteries, leading to reduced blood flow to multiple organs. Vascular dysfunction-associated maternal conditions such as Lupus, hypertension, and renal disease, or obstetric conditions that increase placental volume without an increase in placental blood flow (such as multifetal gestation) may increase risk for pre-eclampsia. Also, activation of the coagulation cascade can lead to microthrombi formation, which may further impair blood flow. Thirdly, increased vascular permeability results in the shift of extracellular fluid from the blood to the interstitial space which reduces blood flow and causes edema. These events can lead to hypertension, renal dysfunction, pulmonary dysfunction, hepatic dysfunction, and cerebral edema with cerebral dysfunction and convulsions. In clinical context, increased platelet and endothelial activation may be detected before symptoms appear.Hypoperfusion of the placenta is associated with abnormal modelling of the fetal–maternal placental interface that may be immunologically mediated. The pathogenesis of pre-eclampsia is poorly understood and may be attributed to factors related to the woman and placenta since pre-eclampsia is seen in molar pregnancies absent of a fetus or fetal tissue. The placenta normally produces the potent vasodilator adrenomedullin but it is reduced in pre-eclampsia and eclampsia. Other vasodilators, including prostacyclin, thromboxane A2, nitric oxide, and endothelins, are reduced in eclampsia and may lead to vasoconstriction.Eclampsia is associated with hypertensive encephalopathy in which cerebral vascular resistance is reduced, leading to increased blood flow to the brain, cerebral edema and resultant convulsions. 
An eclamptic convulsion usually does not cause chronic brain damage unless intracranial haemorrhage occurs. Diagnosis If a pregnant woman has already been diagnosed with pre-eclampsia during the current pregnancy and then develops a seizure, she may be assigned a clinical diagnosis of eclampsia without further workup. While seizures are most common in the third trimester, they may occur any time from 20 weeks of pregnancy until 6 weeks after birth. Because pre-eclampsia and eclampsia are common conditions in women, eclampsia can be assumed to be the correct diagnosis until proven otherwise in pregnant or postpartum women who experience seizures. However, if a woman has a seizure and it is unknown whether or not they have pre-eclampsia, testing can help make the diagnosis clear. Pre-eclampsia is diagnosed when repeated blood pressure measurements are greater or equal to 140/90mmHg, in addition to any signs of organ dysfunction, including: proteinuria, thrombocytopenia, renal insufficiency, impaired liver function, pulmonary edema, cerebral symptoms, or abdominal pain. Vital signs One of the core features of pre-eclampsia is the new onset of high blood pressure. Blood pressure is a measurement of two numbers: systolic blood pressure and diastolic blood pressure. A systolic blood pressure (the top number) of greater than 140 mmHg and/or a diastolic blood pressure (the bottom number) of greater than 90 mmHg is higher than the normal range. If the blood pressure is high on at least two separate occasions after the first 20 weeks of pregnancy and the woman has signs of organ dysfunction (e.g. proteinuria), then they meet the criteria for a diagnosis of pre-eclampsia. If the systolic blood pressure is greater than 160 or the diastolic pressure is greater than 110, the hypertension is considered to be severe. Laboratory testing Another common feature of pre-eclampsia is proteinuria, which is the presence of excess protein in the urine. To determine if proteinuria is present, the urine can be collected and tested for protein; if there is 0.3 grams of protein or more in the urine of a pregnant woman collected over 24 hours, this is one of the diagnostic criteria for pre-eclampsia and raises the suspicion that a seizure is due to eclampsia.In cases of severe eclampsia or pre-eclampsia, the woman can have low levels of platelets in the blood, a condition termed thrombocytopenia. A complete blood count, or CBC, is a test of the blood that can be performed to check platelet levels. Other investigations include: kidney function test, liver function tests (LFT), coagulation screen, 24-hour urine creatinine, and fetal/placental ultrasound. Differential diagnosis Convulsions during pregnancy that are unrelated to pre-eclampsia need to be distinguished from eclampsia. Such disorders include seizure disorders as well as brain tumor, aneurysm of the brain, and medication- or drug-related seizures. Usually, the presence of the signs of severe pre-eclampsia precede and accompany eclampsia, facilitating the diagnosis. Prevention Detection and management of pre-eclampsia is critical to reduce the risk of eclampsia. The USPSTF recommends regular checking of blood pressure through pregnancy in order to detect preeclampsia. Appropriate management of a woman with pre-eclampsia generally involves the use of magnesium sulfate to prevent eclamptic seizures. In some cases, low-dose aspirin has been shown to decrease the risk of pre-eclampsia in women, especially when taken in the late first trimester. 
Treatment The four goals of the treatment of eclampsia are to stop and prevent further convulsions, to control the elevated blood pressure, to deliver the baby as promptly as possible, and to monitor closely for the onset of multi-organ failure. Convulsions Convulsions are prevented and treated using magnesium sulfate. The study demonstrating the effectiveness of magnesium sulfate for the management of eclampsia was first published in 1955. Effective anticonvulsant serum levels range from 2.5 to 7.5 mEq/L.With intravenous administration, the onset of anticonvulsant action is fast and lasts about 30 minutes. Following intramuscular administration the onset of action is about one hour and lasts for three to four hours. Magnesium is excreted solely by the kidneys at a rate proportional to the plasma concentration (concentration in the blood) and glomerular filtration (rate at which the blood is filtered through the kidneys). Magnesium sulfate is associated with several minor side effects; serious side effects are uncommon, occurring at elevated magnesium serum concentrations greater than 7.0 mEq/L. Serious toxicity can be counteracted with calcium gluconate.Even with therapeutic serum magnesium concentrations, recurrent convulsions may occur, and additional magnesium may be needed, but with close monitoring for respiratory, cardiac, and neurological depression. If magnesium administration with resultant high serum concentrations fails to control convulsions, the addition of other intravenous anticonvulsants may be used and intubation and mechanical ventilation may be initiated. It is important to avoid magnesium toxicity, including thoracic muscle paralysis, which could cause respiratory failure and death. Magnesium sulfate results in better outcomes than diazepam, phenytoin or a combination of chlorpromazine, promethazine, and pethidine. Blood pressure management Blood pressure control is used to prevent stroke, which accounts for 15 to 20 percent of deaths in women with eclampsia. The agents of choice for blood pressure control during eclampsia are hydralazine or labetalol. This is because of their effectiveness, lack of negative effects on the fetus, and mechanism of action. Blood pressure management is indicated with a diastolic blood pressure above 105–110 mm Hg. Delivery If the baby has not yet been delivered, steps need to be taken to stabilize the woman and deliver her speedily. This needs to be done even if the baby is immature, as the eclamptic condition is unsafe for both baby and mother. As eclampsia is a manifestation of a type of non-infectious multiorgan dysfunction or failure, other organs (liver, kidney, lungs, cardiovascular system, and coagulation system) need to be assessed in preparation for a delivery (often a caesarean section), unless the woman is already in advanced labor. Regional anesthesia for caesarean section is contraindicated when a coagulopathy has developed. There is limited to no evidence in favor of a particular delivery method for women with eclampsia. Therefore, the delivery method of choice is an individualized decision. Monitoring Invasive hemodynamic monitoring may be elected in an eclamptic woman at risk for or with heart disease, kidney disease, refractory hypertension, pulmonary edema, or poor urine output. Etymology The Greek noun ἐκλαμψία, eklampsía, denotes a "light burst"; metaphorically, in this context, "sudden occurrence." 
The New Latin term first appeared in Johannes Varandaeus’ 1620 treatise on gynaecology Tractatus de affectibus Renum et Vesicae. The term toxemia of pregnancy is no longer recommended: placental toxins are not the cause of eclampsia occurrences, as previously believed. Popular culture In Downton Abbey, a historical drama television series, the character Lady Sybil dies (in series 3, episode 5) of eclampsia shortly after childbirth. In Call the Midwife, a medical drama television series set in London in the 1950s and 1960s, the character Margaret Jones (in series 1, episode 4) is struck with pre-eclampsia, ultimately proceeding from a comatose condition to death. In the dialogue, the term "toxemia" was also used for the condition. In House M.D., a medical drama television series set in the U.S., Dr. Cuddy, the hospital director, adopts a baby whose teenage mother died from eclampsia and who had no other parental figures available. In The Lemon Drop Kid, the main character's wife dies of eclampsia shortly after giving birth to a boy. References External links Eclampsia at Curlie
Anxiety disorder
Anxiety disorders are a cluster of mental disorders characterized by significant and uncontrollable feelings of anxiety and fear such that a persons social, occupational, and personal function are significantly impaired. Anxiety may cause physical and cognitive symptoms, such as restlessness, irritability, easy fatiguability, difficulty concentrating, increased heart rate, chest pain, abdominal pain, and a variety of other symptoms that may vary based on the individual.In casual discourse, the words anxiety and fear are often used interchangeably. In clinical usage, they have distinct meanings: anxiety is defined as an unpleasant emotional state for which the cause is either not readily identified or perceived to be uncontrollable or unavoidable, whereas fear is an emotional and physiological response to a recognized external threat. The umbrella term anxiety disorder refers to a number of specific disorders that include fears (phobias) or anxiety symptoms. There are several types of anxiety disorders, including generalized anxiety disorder, specific phobia, social anxiety disorder, separation anxiety disorder, agoraphobia, panic disorder, and selective mutism. The individual disorder can be diagnosed using the specific and unique symptoms, triggering events, and timing. If a person is diagnosed with an anxiety disorder, a medical professional must have evaluated the person to ensure the anxiety cannot be attributed to another medical illness or mental disorder. It is possible for an individual to have more than one anxiety disorder during their life or at the same time and anxiety disorders are marked by a typical persistent course. Anxiety disorders are the most common of mental disorders and affect nearly 30% of adults at some point in their lives. However, anxiety disorders are treatable and a number of effective treatments are available. Treatment helps most people lead normal productive lives. Sub-types Generalized anxiety disorder Generalized anxiety disorder (GAD) is a common disorder, characterized by long-lasting anxiety which is not focused on any one object or situation. Those with generalized anxiety disorder experience non-specific persistent fear and worry, and become overly concerned with everyday matters. Generalized anxiety disorder is "characterized by chronic excessive worry accompanied by three or more of the following symptoms: restlessness, fatigue, concentration problems, irritability, muscle tension, and sleep disturbance". Generalized anxiety disorder is the most common anxiety disorder to affect older adults. Anxiety can be a symptom of a medical or substance use disorder problem, and medical professionals must be aware of this. A diagnosis of GAD is made when a person has been excessively worried about an everyday problem for six months or more. These stresses can include family life, work, social life, or their own health. A person may find that they have problems making daily decisions and remembering commitments as a result of lack of concentration and/or preoccupation with worry. A symptom can be a strained appearance, with increased sweating from the hands, feet, and axillae, and they may be tearful, which can suggest depression. Before a diagnosis of anxiety disorder is made, physicians must rule out drug-induced anxiety and other medical causes.In children, GAD may be associated with headaches, restlessness, abdominal pain, and heart palpitations. Typically it begins around 8 to 9 years of age. 
Specific phobias The single largest category of anxiety disorders is that of specific phobias, which includes all cases in which fear and anxiety are triggered by a specific stimulus or situation. Between 5% and 12% of the population worldwide have specific phobias. According to the National Institute of Mental Health, a phobia is an intense fear of or aversion to specific objects or situations. Individuals with a phobia typically anticipate terrifying consequences from encountering the object of their fear, which can be anything from an animal to a location to a bodily fluid to a particular situation. Common phobias are flying, blood, water, highway driving, and tunnels. When people are exposed to their phobia, they may experience trembling, shortness of breath, or rapid heartbeat. Thus meaning that people with specific phobias often go out of their way to avoid encountering their phobia. People understand that their fear is not proportional to the actual potential danger but still are overwhelmed by it. Panic disorder With panic disorder, a person has brief attacks of intense terror and apprehension, often marked by trembling, shaking, confusion, dizziness, nausea, and/or difficulty breathing. These panic attacks, defined by the APA as fear or discomfort that abruptly arises and peaks in less than ten minutes, can last for several hours. Attacks can be triggered by stress, irrational thoughts, general fear or fear of the unknown, or even exercise. However, sometimes the trigger is unclear and the attacks can arise without warning. To help prevent an attack, one can avoid the trigger. This can mean avoiding places, people, types of behaviors, or certain situations that have been known to cause a panic attack. This being said, not all attacks can be prevented. In addition to recurrent unexpected panic attacks, a diagnosis of panic disorder requires that said attacks have chronic consequences: either worry over the attacks potential implications, persistent fear of future attacks, or significant changes in behavior related to the attacks. As such, those with panic disorder experience symptoms even outside specific panic episodes. Often, normal changes in heartbeat are noticed, leading them to think something is wrong with their heart or they are about to have another panic attack. In some cases, a heightened awareness (hypervigilance) of body functioning occurs during panic attacks, wherein any perceived physiological change is interpreted as a possible life-threatening illness (i.e., extreme hypochondriasis). Agoraphobia Agoraphobia is the specific anxiety about being in a place or situation where escape is difficult or embarrassing or where help may be unavailable. Agoraphobia is strongly linked with panic disorder and is often precipitated by the fear of having a panic attack. A common manifestation involves needing to be in constant view of a door or other escape route. In addition to the fears themselves, the term agoraphobia is often used to refer to avoidance behaviors that individuals often develop. For example, following a panic attack while driving, someone with agoraphobia may develop anxiety over driving and will therefore avoid driving. These avoidance behaviors can have serious consequences and often reinforce the fear they are caused by. In a severe case of agoraphobia, the person may never leave their home. 
Social anxiety disorder Social anxiety disorder (SAD; also known as social phobia) describes an intense fear and avoidance of negative public scrutiny, public embarrassment, humiliation, or social interaction. This fear can be specific to particular social situations (such as public speaking) or, more typically, is experienced in most (or all) social interactions. Roughly 7% of American adults have social anxiety disorder, and more than 75% of people experience their first symptoms in their childhood or early teenage years. Social anxiety often manifests specific physical symptoms, including blushing, sweating, rapid heart rate, and difficulty speaking. As with all phobic disorders, those with social anxiety often will attempt to avoid the source of their anxiety; in the case of social anxiety this is particularly problematic, and in severe cases can lead to complete social isolation. Children are also affected by social anxiety disorder, although their associated symptoms are different than that of teenagers and adults. They may experience difficulty processing or retrieving information, sleep deprivation, disruptive behaviors in class, and irregular class participation.Social physique anxiety (SPA) is a subtype of social anxiety, involving concern over the evaluation of ones body by others. SPA is common among adolescents, especially females. Post-traumatic stress disorder Post-traumatic stress disorder (PTSD) was once an anxiety disorder (now moved to trauma- and stressor-related disorders in DSM-V) that results from a traumatic experience. PTSD affects approximately 3.5% of U.S. adults every year, and an estimated one in eleven people will be diagnosed with PTSD in their lifetime. Post-traumatic stress can result from an extreme situation, such as combat, natural disaster, rape, hostage situations, child abuse, bullying, or even a serious accident. It can also result from long-term (chronic) exposure to a severe stressor— for example, soldiers who endure individual battles but cannot cope with continuous combat. Common symptoms include hypervigilance, flashbacks, avoidant behaviors, anxiety, anger and depression. In addition, individuals may experience sleep disturbances. People who have PTSD often try to detach themselves from their friends and family, and have difficulty maintaining these close relationships. There are a number of treatments that form the basis of the care plan for those with PTSD. Such treatments include cognitive behavioral therapy (CBT), prolonged exposure therapy, stress inoculation therapy, medication, and psychotherapy and support from family and friends.Post-traumatic stress disorder (PTSD) research began with Vietnam veterans, as well as natural and non-natural disaster victims. Studies have found the degree of exposure to a disaster has been found to be the best predictor of PTSD. Separation anxiety disorder Separation anxiety disorder (SepAD) is the feeling of excessive and inappropriate levels of anxiety over being separated from a person or place. Separation anxiety is a normal part of development in babies or children, and it is only when this feeling is excessive or inappropriate that it can be considered a disorder. Separation anxiety disorder affects roughly 7% of adults and 4% of children, but the childhood cases tend to be more severe; in some instances, even a brief separation can produce panic. Treating a child earlier may prevent problems. This may include training the parents and family on how to deal with it. 
Often, the parents will reinforce the anxiety because they do not know how to properly work through it with the child. In addition to parent training and family therapy, medication, such as SSRIs, can be used to treat separation anxiety. Obsessive–compulsive disorder Obsessive–compulsive disorder (OCD) is not classified as an anxiety disorder by the DSM-5, but is by the ICD-10. It was previously classified as an anxiety disorder in the DSM-IV. It is a condition where the person has obsessions (distressing, persistent, and intrusive thoughts or images) and compulsions (urges to repeatedly perform specific acts or rituals), that are not caused by drugs or physical disorder, and which cause distress or social dysfunction. The compulsive rituals are personal rules followed to relieve the feeling of discomfort. OCD affects roughly 1–2% of adults (somewhat more women than men), and under 3% of children and adolescents. A person with OCD knows that the symptoms are unreasonable and struggles against both the thoughts and the behavior. Their symptoms could be related to external events they fear (such as their home burning down because they forgot to turn off the stove) or worry that they will behave inappropriately. It is not certain why some people have OCD, but behavioral, cognitive, genetic, and neurobiological factors may be involved. Risk factors include family history, being single (although that may result from the disorder), and higher socioeconomic class or not being in paid employment. About 20% of people with OCD will overcome it, and symptoms will at least reduce over time for most others (a further 50%). Selective mutism Selective mutism (SM) is a disorder in which a person who is normally capable of speech does not speak in specific situations or to specific people. Selective mutism usually co-exists with shyness or social anxiety. People with selective mutism stay silent even when the consequences of their silence include shame, social ostracism or even punishment. Selective mutism affects about 0.8% of people at some point in their life. Testing for selective mutism is important because doctors must determine if it is an issue associated with the child's hearing, movements associated with the jaw or tongue, and if the child can understand when others are speaking to them. Diagnosis The diagnosis of anxiety disorders is made by symptoms, triggers, and a person's personal and family histories. There are no objective biomarkers or laboratory tests that can diagnose anxiety. It is important for a medical professional to evaluate a person for other medical and mental causes for prolonged anxiety because treatments will vary considerably. Numerous questionnaires have been developed for clinical use and can be used for an objective scoring system. Symptoms may vary between subtypes of anxiety disorder. Generally, symptoms must be present for at least six months, occur more days than not, and significantly impair a person's ability to function in daily life. Symptoms may include: feeling nervous, anxious, or on edge; worrying excessively; difficulty concentrating; restlessness; irritability. Questionnaires developed for clinical use include the State-Trait Anxiety Inventory (STAI), the Generalized Anxiety Disorder 7 (GAD-7), the Beck Anxiety Inventory (BAI), the Zung Self-Rating Anxiety Scale, and the Taylor Manifest Anxiety Scale. 
Other questionnaires combine anxiety and depression measurement, such as the Hamilton Anxiety Rating Scale, the Hospital Anxiety and Depression Scale (HADS), the Patient Health Questionnaire (PHQ), and the Patient-Reported Outcomes Measurement Information System (PROMIS). Examples of specific anxiety questionnaires include the Liebowitz Social Anxiety Scale (LSAS), the Social Interaction Anxiety Scale (SIAS), the Social Phobia Inventory (SPIN), the Social Phobia Scale (SPS), and the Social Anxiety Questionnaire (SAQ-A30). Differential diagnosis Anxiety disorders differ from developmentally normal fear or anxiety by being excessive or persisting beyond developmentally appropriate periods. They differ from transient fear or anxiety, often stress-induced, by being persistent (e.g., typically lasting 6 months or more), although the criterion for duration is intended as a general guide with allowance for some degree of flexibility and is sometimes of shorter duration in children. The diagnosis of an anxiety disorder requires first ruling out an underlying medical cause. Diseases that may present similarly to an anxiety disorder include certain endocrine diseases (hypo- and hyperthyroidism, hyperprolactinemia), metabolic disorders (diabetes), deficiency states (low levels of vitamin D, B2, B12, folic acid), gastrointestinal diseases (celiac disease, non-celiac gluten sensitivity, inflammatory bowel disease), heart diseases, blood diseases (anemia), and brain degenerative diseases (Parkinson's disease, dementia, multiple sclerosis, Huntington's disease). Several drugs can also cause or worsen anxiety, whether in intoxication, withdrawal, or from chronic use. These include alcohol, tobacco, cannabis, sedatives (including prescription benzodiazepines), opioids (including prescription painkillers and illicit drugs like heroin), stimulants (such as caffeine, cocaine and amphetamines), hallucinogens, and inhalants. Prevention Focus is increasing on prevention of anxiety disorders. There is tentative evidence to support the use of cognitive behavioral therapy and mindfulness therapy. A 2013 review found no effective measures to prevent GAD in adults. A 2017 review found that psychological and educational interventions had a small benefit for the prevention of anxiety. Research indicates that predictors of the emergence of anxiety disorders partly differ from the factors that predict their persistence. Perception and Discrimination Stigma People with an anxiety disorder may be challenged by prejudices and stereotypes held by others, most likely as a result of misconceptions about anxiety and anxiety disorders. Misconceptions found in a data analysis from the National Survey of Mental Health Literacy and Stigma include (1) many people believe anxiety is not a real medical illness; and (2) many people believe that people with anxiety could turn it off if they wanted to. For people experiencing the physical and mental symptoms of an anxiety disorder, stigma and negative social perception can make an individual less likely to seek treatment. There are two prevalent types of stigma that surround anxiety disorders: public stigma and self-stigma. Public stigma in this context is the reaction that the general population has to people with an anxiety disorder. Self-stigma is described as the prejudice which people with mental illness turn against themselves. There is no definitive evidence on the exact cause of stigma towards anxiety; however, three perspectives are highlighted. 
These are the macro, intermediate, and micro levels. The macro level marks society as a whole, with influence from mass media. The intermediate level includes health care professionals and their perspective. The micro level details the individual's contributions to the process through self-stigmatization. Stigma can be described in three conceptual ways: cognitive, emotional, and behavioural. This allows for differentiation between stereotypes, prejudice, and discrimination. Treatment Treatment options include lifestyle changes, therapy, and medications. There is no clear evidence as to whether therapy or medication is most effective; the specific medication decision can be made by a doctor and patient with consideration to the patient's specific circumstances and symptoms. If, while on treatment with a chosen medication, the person's anxiety does not improve, another medication may be offered. Specific treatments will vary by subtype of anxiety disorder, a person's other medical conditions, and medications. Lifestyle and diet Lifestyle changes include exercise, for which there is moderate evidence for some improvement, regularizing sleep patterns, reducing caffeine intake, and stopping smoking. Stopping smoking has benefits in anxiety as large as or larger than those of medications. Omega-3 polyunsaturated fatty acids, such as fish oil, may reduce anxiety, particularly in those with more significant symptoms. Psychotherapy Cognitive behavioral therapy (CBT) is effective for anxiety disorders and is a first-line treatment. CBT appears to be equally effective when carried out via the internet compared to sessions completed face to face. Mindfulness-based programs also appear to be effective for managing anxiety disorders. It is unclear if meditation has an effect on anxiety, and transcendental meditation appears to be no different from other types of meditation. A 2015 Cochrane review of Morita therapy for anxiety disorder in adults found not enough evidence to draw a conclusion. Adventure-based counseling can be an effective way to manage anxiety. Using rock-climbing as an example, climbing can often bring on fear or frustration, and tackling these negative feelings in a nurturing environment can help people develop the coping mechanisms necessary to deal with them. Medications First-line choices for medications include SSRIs or SNRIs to treat generalized anxiety disorder. For adults there is no good evidence supporting which specific medication in the SSRI or SNRI class is best for treating anxiety, so cost often drives drug choice. Fluvoxamine is effective in treating a range of anxiety disorders in children and adolescents. Fluoxetine, sertraline and paroxetine can also help with some forms of anxiety in children and adolescents. If the chosen medicine is effective, it is recommended that it be continued for at least a year. Stopping medication results in a greater risk of relapse. Buspirone and pregabalin are second-line treatments for people who do not respond to SSRIs or SNRIs; there is also evidence that benzodiazepines, including diazepam and clonazepam, are effective. Pregabalin and gabapentin are effective in treating some anxiety disorders but there is concern regarding their off-label use due to the lack of strong scientific evidence for their efficacy in multiple conditions and their proven side effects. Medications need to be used with care among older adults, who are more likely to have side effects because of coexisting physical disorders. 
Adherence problems are more likely among older people, who may have difficulty understanding, seeing, or remembering instructions. In general, medications are not seen as helpful in specific phobia, but a benzodiazepine is sometimes used to help resolve acute episodes. In 2007, data were sparse for efficacy of any drug. Cannabis As of 2019, there is little evidence for cannabis in treating anxiety disorders. Children Both therapy and a number of medications have been found to be useful for treating childhood anxiety disorders. Therapy is generally preferred to medication. Cognitive behavioral therapy (CBT) is a good first therapy approach. Studies have gathered substantial evidence that treatments which are not CBT-based can also be effective, expanding treatment options for those who do not respond to CBT. Although studies have demonstrated the effectiveness of CBT for anxiety disorders in children and adolescents, evidence that it is more effective than treatment as usual, medication, or wait list controls is inconclusive. Like adults, children may undergo psychotherapy, cognitive-behavioral therapy, or counseling. Family therapy is a form of treatment in which the child meets with a therapist together with the primary guardians and siblings. Each family member may attend individual therapy, but family therapy is typically a form of group therapy. Art and play therapy are also used. Art therapy is most commonly used when the child will not or cannot verbally communicate, due to trauma or a disability in which they are nonverbal. Participating in art activities allows the child to express what they otherwise may not be able to communicate to others. In play therapy, the child is allowed to play however they please as a therapist observes them. The therapist may intercede from time to time with a question, comment, or suggestion. This is often most effective when the family of the child plays a role in the treatment. If a medication option is warranted, antidepressants such as SSRIs and SNRIs can be effective. Fluvoxamine is effective in treating a range of anxiety disorders in children and adolescents. Minor side effects with medications, however, are common. Epidemiology Globally, as of 2010, approximately 273 million people (4.5% of the population) had an anxiety disorder. It is more common in females (5.2%) than males (2.8%). In Europe, Africa and Asia, lifetime rates of anxiety disorders are between 9 and 16%, and yearly rates are between 4 and 7%. In the United States, the lifetime prevalence of anxiety disorders is about 29%, and between 11 and 18% of adults have the condition in a given year. This difference is affected by the range of ways in which different cultures interpret anxiety symptoms and what they consider to be normative behavior. In general, anxiety disorders represent the most prevalent psychiatric condition in the United States, outside of substance use disorder. Like adults, children can experience anxiety disorders; between 10 and 20 percent of all children will develop a full-fledged anxiety disorder prior to the age of 18, making anxiety the most common mental health issue in young people. Anxiety disorders in children are often more challenging to identify than their adult counterparts, owing to the difficulty many parents face in discerning them from normal childhood fears.
Likewise, anxiety in children is sometimes misdiagnosed as attention deficit hyperactivity disorder or, due to the tendency of children to interpret their emotions physically (as stomachaches, headaches, etc.), anxiety disorders may initially be confused with physical ailments. Anxiety in children has a variety of causes; sometimes anxiety is rooted in biology, and may be a product of another existing condition, such as autism spectrum disorder. Gifted children are also often more prone to excessive anxiety than non-gifted children. Other cases of anxiety arise from the child having experienced a traumatic event of some kind, and in some cases, the cause of the child's anxiety cannot be pinpointed. Anxiety in children tends to manifest along age-appropriate themes, such as fear of going to school (not related to bullying) or not performing well enough at school, fear of social rejection, fear of something happening to loved ones, etc. What separates disordered anxiety from normal childhood anxiety is the duration and intensity of the fears involved. See also List of people with an anxiety disorder Exposure therapy Mixed anxiety–depressive disorder References External links Support Group Providers for Anxiety disorder at Curlie
Seizure
An epileptic seizure, informally known as a seizure, is a period of symptoms due to abnormally excessive or synchronous neuronal activity in the brain. Outward effects vary from uncontrolled shaking movements involving much of the body with loss of consciousness (tonic-clonic seizure), to shaking movements involving only part of the body with variable levels of consciousness (focal seizure), to a subtle momentary loss of awareness (absence seizure). Most of the time these episodes last less than two minutes and it takes some time to return to normal. Loss of bladder control may occur. Seizures may be provoked or unprovoked. Provoked seizures are due to a temporary event such as low blood sugar, alcohol withdrawal, abusing alcohol together with prescription medication, low blood sodium, fever, brain infection, or concussion. Unprovoked seizures occur without a known or fixable cause such that ongoing seizures are likely. Unprovoked seizures may be exacerbated by stress or sleep deprivation. Epilepsy describes a brain disease in which there has been at least one unprovoked seizure and where there is a high risk of additional seizures in the future. Conditions that look like epileptic seizures but are not include fainting, nonepileptic psychogenic seizures, and tremor. A seizure that lasts for more than a brief period is a medical emergency. Any seizure lasting longer than five minutes should be treated as status epilepticus. A first seizure generally does not require long-term treatment with anti-seizure medications unless a specific problem is found on electroencephalogram (EEG) or brain imaging. Typically it is safe to complete the work-up following a single seizure as an outpatient. In many with what appears to be a first seizure, other minor seizures have previously occurred. Up to 10% of people have at least one epileptic seizure. Provoked seizures occur in about 3.5 per 10,000 people a year while unprovoked seizures occur in about 4.2 per 10,000 people a year. After one seizure, the chance of experiencing a second is about 50%. Epilepsy affects about 1% of the population at any given time, with about 4% of the population affected at some point in time. Many places require people to stop driving until they have not had a seizure for a specific period. Signs and symptoms The signs and symptoms of seizures vary depending on the type. The most common and stereotypical type of seizure is convulsive (60%), typically called a tonic-clonic seizure. Two-thirds of these begin as focal seizures prior to developing into tonic-clonic seizures. The remaining 40% of seizures are non-convulsive, an example of which is the absence seizure. When EEG monitoring shows evidence of a seizure, but no symptoms are present, it is referred to as a subclinical seizure. Focal seizures Focal seizures often begin with certain experiences, known as an aura. These may include sensory (including visual, auditory, etc.), cognitive, autonomic, olfactory or motor phenomena. In a complex partial seizure a person may appear confused or dazed and cannot respond to questions or direction. Jerking activity may start in a specific muscle group and spread to surrounding muscle groups—known as a Jacksonian march. Unusual activities that are not consciously created may occur. These are known as automatisms and include simple activities like smacking of the lips or more complex activities such as attempts to pick something up.
Generalized seizures There are six main types of generalized seizures: tonic-clonic, tonic, clonic, myoclonic, absence, and atonic seizures. They all involve a loss of consciousness and typically happen without warning. Tonic-clonic seizures present with a contraction of the limbs followed by their extension, along with arching of the back for 10–30 seconds. A cry may be heard due to contraction of the chest muscles. The limbs then begin to shake in unison. After the shaking has stopped it may take 10–30 minutes for the person to return to normal. Tonic seizures produce constant contractions of the muscles. The person may turn blue if breathing is impaired. Clonic seizures involve shaking of the limbs in unison. Myoclonic seizures involve spasms of muscles in either a few areas or generalized through the body. Absence seizures can be subtle, with only a slight turn of the head or eye blinking. The person often does not fall over and may return to normal right after the seizure ends, though there may also be a period of post-ictal disorientation. Atonic seizures involve the loss of muscle activity for greater than one second. This typically occurs bilaterally (on both sides of the body). Duration A seizure can last from a few seconds to more than five minutes, at which point it is known as status epilepticus. Most tonic-clonic seizures last less than two or three minutes. Absence seizures are usually around 10 seconds in duration. Postictal After the active portion of a seizure, there is typically a period of confusion called the postictal period before a normal level of consciousness returns. This usually lasts 3 to 15 minutes but may last for hours. Other common symptoms include feeling tired, headache, difficulty speaking, and abnormal behavior. Psychosis after a seizure is relatively common, occurring in between 6 and 10% of people. Often people do not remember what occurred during this time. Causes Seizures have a number of causes. Of those who have a seizure, about 25% have epilepsy. A number of conditions are associated with seizures but are not epilepsy, including most febrile seizures and those that occur around an acute infection, stroke, or toxicity. These seizures are known as "acute symptomatic" or "provoked" seizures and are part of the seizure-related disorders. In many cases the cause is unknown. Different causes of seizures are common in certain age groups. Seizures in babies are most commonly caused by hypoxic ischemic encephalopathy, central nervous system (CNS) infections, trauma, congenital CNS abnormalities, and metabolic disorders. The most frequent cause of seizures in children is febrile seizures, which happen in 2–5% of children between the ages of six months and five years. During childhood, well-defined epilepsy syndromes are generally seen. In adolescence and young adulthood, non-compliance with the medication regimen and sleep deprivation are potential triggers. Pregnancy, labor and childbirth, and the post-partum (post-natal) period can be at-risk times, especially if there are certain complications like pre-eclampsia. During adulthood, the likely causes are alcohol-related problems, strokes, trauma, CNS infections, and brain tumors. In older adults, cerebrovascular disease is a very common cause. Other causes are CNS tumors, head trauma, and other degenerative diseases that are common in the older age group, such as dementia. Metabolic Dehydration can trigger epileptic seizures if it is severe enough.
A number of disorders, including low blood sugar, low blood sodium, hyperosmolar nonketotic hyperglycemia, high blood sodium, low blood calcium and high blood urea levels, may cause seizures, as may hepatic encephalopathy and the genetic disorder porphyria. Structural Cavernoma or cavernous malformation is a treatable medical condition that can cause seizures, headaches, and brain hemorrhages. Arteriovenous malformation (AVM) is a treatable medical condition that can cause seizures, headaches, and brain hemorrhages. Space-occupying lesions in the brain (abscesses, tumours) can also cause seizures. In people with brain tumours, the frequency of epilepsy depends on the location of the tumour in the cortical region. Medications Both medication and drug overdoses can result in seizures, as may certain medication and drug withdrawal. Common drugs involved include antidepressants, antipsychotics, cocaine, insulin, and the local anaesthetic lidocaine. Withdrawal seizures commonly occur after prolonged alcohol or sedative use, a condition known as delirium tremens. In people who are at risk of developing epileptic seizures, common herbal medicines such as ephedra, ginkgo biloba and wormwood can provoke seizures. Infections Infection with the pork tapeworm, which can cause neurocysticercosis, is the cause of up to half of epilepsy cases in areas of the world where the parasite is common. Parasitic infections such as cerebral malaria can also be responsible; in Nigeria this is one of the most common causes of seizures among children under five years of age. Infections such as encephalitis or meningitis may also cause seizures. Stress Stress can induce seizures in people with epilepsy, and is a risk factor for developing epilepsy. Severity, duration, and the time at which stress occurs during development all contribute to frequency and susceptibility to developing epilepsy. It is one of the most frequently self-reported triggers in patients with epilepsy. Stress exposure results in hormone release that mediates its effects in the brain. These hormones act on both excitatory and inhibitory neural synapses, resulting in hyper-excitability of neurons in the brain. The hippocampus is known to be a region that is highly sensitive to stress and prone to seizures. This is where mediators of stress interact with their target receptors to produce effects. Other Seizures may occur as a result of high blood pressure, known as hypertensive encephalopathy, or in pregnancy as eclampsia, when accompanied by either seizures or a decreased level of consciousness. Very high body temperatures may also be a cause; typically this requires a temperature greater than 42 °C (107.6 °F). Head injury may cause non-epileptic post-traumatic seizures or post-traumatic epilepsy. About 3.5 to 5.5% of people with celiac disease also have seizures. Seizures in a person with a shunt may indicate shunt failure. Hemorrhagic stroke can occasionally present with seizures, while embolic strokes generally do not (though epilepsy is a common later complication); cerebral venous sinus thrombosis, a rare type of stroke, is more likely to be accompanied by seizures than other types of stroke. Multiple sclerosis may cause seizures. Electroconvulsive therapy (ECT) deliberately sets out to induce a seizure for the treatment of major depression. Reflex seizures are induced by a specific stimulus or trigger (extrinsic or intrinsic stimuli). Mechanism Normally, brain electrical activity is non-synchronous.
In epileptic seizures, due to problems within the brain, a group of neurons begin firing in an abnormal, excessive, and synchronized manner. This results in a wave of depolarization known as a paroxysmal depolarizing shift. Normally, after an excitatory neuron fires it becomes more resistant to firing for a period of time. This is due in part to the effect of inhibitory neurons, electrical changes within the excitatory neuron, and the negative effects of adenosine. In epilepsy the resistance of excitatory neurons to fire during this period is decreased. This may occur due to changes in ion channels or inhibitory neurons not functioning properly. Forty-one ion-channel genes and over 1,600 ion-channel mutations have been implicated in the development of epileptic seizure. These ion channel mutations tend to confer a depolarized resting state to neurons, resulting in pathological hyper-excitability. This long-lasting depolarization in individual neurons is due to an influx of Ca2+ from outside of the cell and leads to extended opening of Na+ channels and repetitive action potentials. The following hyperpolarization is facilitated by γ-aminobutyric acid (GABA) receptors or potassium (K+) channels, depending on the type of cell. Equally important in epileptic neuronal hyper-excitability is the reduction in the activity of inhibitory GABAergic neurons, an effect known as disinhibition. Disinhibition may result from inhibitory neuron loss, dysregulation of axonal sprouting from the inhibitory neurons in regions of neuronal damage, or abnormal GABAergic signaling within the inhibitory neuron. Neuronal hyper-excitability results in a specific area from which seizures may develop, known as a "seizure focus". Following an injury to the brain, another mechanism of epilepsy may be the up-regulation of excitatory circuits or down-regulation of inhibitory circuits. These secondary epilepsies occur through processes known as epileptogenesis. Failure of the blood–brain barrier may also be a causal mechanism. While it is unclear whether blood–brain barrier disruption alone can cause epileptogenesis, it has been correlated with increased seizure activity. Furthermore, it has been implicated in chronic epileptic conditions through experiments inducing barrier permeability with chemical compounds. Disruption may lead to fluid leaking out of the blood vessels into the area between cells and driving epileptic seizures. Preliminary findings of blood proteins in the brain after a seizure support this theory. Focal seizures begin in one hemisphere of the brain while generalized seizures begin in both hemispheres. Some types of seizures may change brain structure, while others appear to have little effect. Gliosis, neuronal loss, and atrophy of specific areas of the brain are linked to epilepsy but it is unclear if epilepsy causes these changes or if these changes result in epilepsy. Seizure activity may be propagated through the brain's endogenous electrical fields. Proposed mechanisms that may cause the spread and recruitment of neurons include an increase in K+ from outside the cell and an increase of Ca2+ in the presynaptic terminals. These mechanisms blunt hyperpolarization and depolarize nearby neurons, as well as increasing neurotransmitter release. Diagnosis Seizures may be divided into provoked and unprovoked. Provoked seizures may also be known as "acute symptomatic seizures" or "reactive seizures". Unprovoked seizures may also be known as "reflex seizures".
Depending on the presumed cause, blood tests and a lumbar puncture may be useful. Hypoglycemia may cause seizures and should be ruled out. An electroencephalogram and brain imaging with CT scan or MRI scan are recommended in the work-up of seizures not associated with a fever. Classification Seizure types are organized by whether the source of the seizure is localized (focal seizures) or distributed (generalized seizures) within the brain. Generalized seizures are divided according to the effect on the body and include tonic-clonic (grand mal), absence (petit mal), myoclonic, clonic, tonic, and atonic seizures. Some seizures such as epileptic spasms are of an unknown type. Focal seizures (previously called partial seizures) were divided into simple partial or complex partial seizures. Current practice no longer recommends this, and instead prefers to describe what occurs during a seizure. The classification of seizures can also be made according to dynamical criteria, observable in electrophysiological measurements. It is a classification according to their type of onset and offset. Physical examination Most people are in a postictal state (drowsy or confused) following a seizure. They may show signs of other injuries. A bite mark on the side of the tongue helps confirm a seizure when present, but only a third of people who have had a seizure have such a bite. When present in people thought to have had a seizure, this physical sign tentatively increases the likelihood that a seizure was the cause. Tests Electroencephalography is only recommended in those who likely had an epileptic seizure and may help determine the type of seizure or syndrome present. In children it is typically only needed after a second seizure. It cannot be used to rule out the diagnosis and may be falsely positive in those without the disease. In certain situations it may be useful to perform the EEG while the person is sleeping or sleep-deprived. Diagnostic imaging by CT scan and MRI is recommended after a first non-febrile seizure to detect structural problems inside the brain. MRI is generally a better imaging test except when intracranial bleeding is suspected. Imaging may be done at a later point in time in those who return to their normal selves while in the emergency room. If a person has a previous diagnosis of epilepsy with prior imaging, repeat imaging is not usually needed for subsequent seizures. In adults, testing electrolytes, blood glucose and calcium levels is important to rule these out as causes, as is an electrocardiogram. A lumbar puncture may be useful to diagnose a central nervous system infection but is not routinely needed. Routine measurement of antiseizure medication levels in the blood is not required in adults or children. In children additional tests may be required. A high blood prolactin level within the first 20 minutes following a seizure may be useful to confirm an epileptic seizure as opposed to a psychogenic non-epileptic seizure. Serum prolactin level is less useful for detecting partial seizures. If it is normal, an epileptic seizure is still possible, and serum prolactin does not separate epileptic seizures from syncope. It is not recommended as a routine part of diagnosing epilepsy. Differential diagnosis Differentiating an epileptic seizure from other conditions such as syncope can be difficult. Other possible conditions that can mimic a seizure include decerebrate posturing, psychogenic seizures, tetanus, dystonia, migraine headaches, and strychnine poisoning.
In addition, 5% of people with a positive tilt table test may have seizure-like activity that seems due to cerebral hypoxia. Convulsions may occur due to psychological reasons and this is known as a psychogenic non-epileptic seizure. Non-epileptic seizures may also occur due to a number of other reasons. Prevention A number of measures have been attempted to prevent seizures in those at risk. Following traumatic brain injury, anticonvulsants decrease the risk of early seizures but not late seizures. In those with a history of febrile seizures, some medications (both antipyretics and anticonvulsants) have been found effective for reducing recurrence; however, due to the frequency of adverse effects and the benign nature of febrile seizures, the decision to use medication should be weighed carefully against potential negative effects. There is no clear evidence that antiepileptic drugs are effective or not effective at preventing seizures following a craniotomy, following subdural hematoma, after a stroke, or after subarachnoid haemorrhage, for both people who have had a previous seizure and those who have not. Management Potentially sharp or dangerous objects should be moved from the area around a person experiencing a seizure so that the individual is not hurt. After the seizure, if the person is not fully conscious and alert, they should be placed in the recovery position. A seizure lasting longer than five minutes, or two or more seizures occurring within a five-minute period, is a medical emergency known as status epilepticus. Contrary to a common misconception, bystanders should not attempt to force objects into the mouth of the person having a seizure, as doing so may cause injury to the teeth and gums. Treatment of a person who is actively seizing follows a progression from initial response through first-line, second-line, and third-line treatments. The initial response involves ensuring the person is protected from potential harms (such as nearby objects) and managing their airway, breathing, and circulation. Airway management should include placing the person on their side, known as the recovery position, to prevent them from choking. If they are unable to breathe because something is blocking their airway, they may require treatments to open their airway. Medication The first-line medication for an actively seizing person is a benzodiazepine, with most guidelines recommending lorazepam. Diazepam and midazolam are alternatives. This may be repeated if there is no effect after 10 minutes. If there is no effect after two doses, barbiturates or propofol may be used. Second-line therapy for adults is phenytoin or fosphenytoin, and phenobarbital for children. Third-line medications include phenytoin for children and phenobarbital for adults. Ongoing anti-epileptic medications are not typically recommended after a first seizure except in those with structural lesions in the brain. They are generally recommended after a second one has occurred. Approximately 70% of people can obtain full control with continuous use of medication. Typically one type of anticonvulsant is preferred. Following a first seizure, while immediate treatment with an anti-seizure drug lowers the probability of seizure recurrence for up to five years, it does not change the risk of death, and there are potential side effects. In seizures related to toxins, up to two doses of benzodiazepines should be used. If this is not effective, pyridoxine is recommended.
Phenytoin should generally not be used. There is a lack of evidence for preventive anti-epileptic medications in the management of seizures related to intracranial venous thrombosis. Other Helmets may be used to provide protection to the head during a seizure. Some claim that seizure response dogs, a form of service dog, can predict seizures. Evidence for this, however, is poor. At present there is not enough evidence to support the use of cannabis for the management of seizures, although this is an ongoing area of research. There is low-quality evidence that a ketogenic diet may help those who have epilepsy, and it is reasonable in those who do not improve following typical treatments. Prognosis Following a first seizure, the risk of more seizures in the next two years is 40–50%. The greatest predictors of more seizures are problems either on the electroencephalogram or on imaging of the brain. In adults, after 6 months of being seizure-free after a first seizure, the risk of a subsequent seizure in the next year is less than 20% regardless of treatment. Up to 7% of seizures that present to the emergency department (ER) are in status epilepticus. In those with status epilepticus, mortality is between 10% and 40%. Those who have a seizure that is provoked (occurring close in time to an acute brain event or toxic exposure) have a low risk of recurrence, but have a higher risk of death compared to those with epilepsy. Epidemiology Approximately 8–10% of people will experience an epileptic seizure during their lifetime. In adults, the risk of seizure recurrence within the five years following a new-onset seizure is 35%; the risk rises to 75% in persons who have had a second seizure. In children, the risk of seizure recurrence within the five years following a single unprovoked seizure is about 50%; the risk rises to about 80% after two unprovoked seizures. In the United States in 2011, seizures resulted in an estimated 1.6 million emergency department visits; approximately 400,000 of these visits were for new-onset seizures. The exact incidence of epileptic seizures in low-income and middle-income countries is unknown; however, it probably exceeds that in high-income countries. This may be due to increased risks of traffic accidents, birth injuries, and malaria and other parasitic infections. History Epileptic seizures were first described in an Akkadian text from 2000 B.C. Early reports of epilepsy often saw seizures and convulsions as the work of "evil spirits". The perception of epilepsy, however, began to change in the time of Ancient Greek medicine. The term "epilepsy" itself is a Greek word, derived from the verb "epilambanein", meaning "to seize, possess, or afflict". Although the Ancient Greeks referred to epilepsy as the "sacred disease", this perception of epilepsy as a "spiritual" disease was challenged by Hippocrates, who proposed in his work On the Sacred Disease that the source of epilepsy was natural causes rather than supernatural ones. Early surgical treatment of epilepsy was primitive in Ancient Greek, Roman and Egyptian medicine. The 19th century saw the rise of targeted surgery for the treatment of epileptic seizures, beginning in 1886 with localized resections performed by Sir Victor Horsley, a neurosurgeon in London.
Another advancement was the development of the Montreal procedure by Canadian neurosurgeon Wilder Penfield, which involved the use of electrical stimulation in conscious patients to more accurately identify and resect the epileptic areas of the brain. Society and culture Economics Seizures result in direct economic costs of about one billion dollars in the United States. Epilepsy resulted in economic costs in Europe of around €15.5 billion in 2004. In India, epilepsy is estimated to result in costs of US$1.7 billion or 0.5% of the GDP. Seizures make up about 1% of emergency department visits (2% for emergency departments for children) in the United States. Driving Many areas of the world require a minimum of six months from the last seizure before people can drive a vehicle. Research Scientific work into the prediction of epileptic seizures began in the 1970s. Several techniques and methods have been proposed, but evidence regarding their usefulness is still lacking. Two promising areas include gene therapy, and seizure detection and seizure prediction. Gene therapy for epilepsy consists of employing vectors to deliver pieces of genetic material to areas of the brain involved in seizure onset. Seizure prediction is a special case of seizure detection in which the developed system is able to issue a warning before the clinical onset of the epileptic seizure. Computational neuroscience has been able to bring a new point of view on seizures by considering their dynamical aspects. References External links Seizure at Curlie
Localized hypertrichosis
Localized hypertrichosis may refer to: Localized acquired hypertrichosis Localized congenital hypertrichosis See also: Hypertrichosis
Bacteriuria
Bacteriuria is the presence of bacteria in urine. Bacteriuria accompanied by symptoms is a urinary tract infection, while bacteriuria without symptoms is known as asymptomatic bacteriuria. Diagnosis is by urinalysis or urine culture. Escherichia coli is the most common bacterium found. People without symptoms should generally not be tested for the condition. The differential diagnosis includes contamination. If symptoms are present, treatment is generally with antibiotics. Bacteriuria without symptoms generally does not require treatment. Exceptions may include pregnant women, those who have had a recent kidney transplant, young children with significant vesicoureteral reflux, and those undergoing surgery of the urinary tract. Bacteriuria without symptoms is present in about 3% of otherwise healthy middle-aged women. In nursing homes, rates are as high as 50% among women and 40% in men. In those with a long-term indwelling urinary catheter, rates are 100%. Up to 10% of women have a urinary tract infection in a given year and half of all women have at least one infection at some point in their lives. There is an increased risk of asymptomatic or symptomatic bacteriuria in pregnancy due to physiological changes that occur in a pregnant woman and promote unwanted pathogen growth in the urinary tract. Signs and symptoms Asymptomatic Asymptomatic bacteriuria is bacteriuria without accompanying symptoms of a urinary tract infection and is commonly caused by the bacterium Escherichia coli. Other potential pathogens are Klebsiella spp. and group B streptococci. It is more common in women, in the elderly, in residents of long-term care facilities, and in people with diabetes, bladder catheters, and spinal cord injuries. People with a long-term Foley catheter always show bacteriuria. Chronic asymptomatic bacteriuria occurs in as many as 50% of the population in long-term care. There is an association between asymptomatic bacteriuria in pregnant women and low birth weight, preterm delivery, cystitis, infection of the newborn, and fetal death. However, most of these studies were graded as poor quality. Bacteriuria in pregnancy also increases the risk of preeclampsia. Symptomatic Symptomatic bacteriuria is bacteriuria with the accompanying symptoms of a urinary tract infection (such as frequent urination, painful urination, fever, back pain, abdominal pain and blood in the urine) and includes pyelonephritis or cystitis. The most common cause of urinary tract infections is Escherichia coli. Diagnosis Testing for bacteriuria is usually performed in people with symptoms of a urinary tract infection. Certain populations that cannot feel or express symptoms of infection are also tested when showing nonspecific symptoms. For example, confusion or other changes in behaviour can be a sign of an infection in the elderly. Screening for asymptomatic bacteriuria in pregnancy is a common routine in many countries, but controversial. The gold standard for detecting bacteriuria is a bacterial culture, which identifies the concentration of bacterial cells in the urine. The culture is usually combined with subsequent testing using biochemical methods or MALDI-TOF, which allows identification of the causal bacterial species, and with antibiotic susceptibility testing.
Urine culture is quantitative and very reliable, but it can take at least one day to obtain a result and it is expensive. Miniaturization of bacterial culture within a dipstick format, the Digital Dipstick, allows bacterial detection, identification and quantification for bacteriuria within 10–12 hours at the point of care. Clinicians will often treat symptomatic bacteriuria based on the results of the urine dipstick test while waiting for the culture results. Bacteriuria can usually be detected using a urine dipstick test. The nitrite test detects nitrate-reducing bacteria if they are growing in high numbers in urine. A negative dipstick test does not exclude bacteriuria, as not all bacteria which can colonise the urinary tract are nitrate-reducing. The leukocyte esterase test indirectly detects the presence of leukocytes (white blood cells) in urine, which can be associated with a urinary tract infection. In the elderly, the leukocyte esterase test is often positive even in the absence of an infection. The urine dipstick test is readily available and provides fast but often unreliable results. Microscopy can also be used to detect bacteriuria. It is rarely used in routine clinical practice since it requires more time and equipment and does not allow reliable identification or quantification of the causal bacterial species. Bacteriuria is assumed if a single bacterial species is isolated in a concentration greater than 100,000 colony forming units per millilitre of urine in clean-catch midstream urine specimens. In urine samples obtained from women, there is a risk of bacterial contamination from the vaginal flora. Therefore, in research, a second specimen is usually analysed to confirm asymptomatic bacteriuria in women. For urine collected via bladder catheterization in men and women, a single urine specimen with greater than 100,000 colony forming units of a single species per millilitre is considered diagnostic. The threshold for women displaying UTI symptoms can be as low as 100 colony forming units of a single species per millilitre. However, bacteria below a threshold of 10,000 colony forming units per millilitre are usually reported as "no growth" by clinical laboratories. Using special techniques, certain non-disease-causing bacteria have also been found in the urine of healthy people. These are part of the resident microbiota. Screening Although controversial, many countries including the United States recommend a one-time screening for bacteriuria during mid-pregnancy. The screening method is by urine culture. Screening non-pregnant adults is recommended against by the United States Preventive Services Task Force. Treatment The decision to treat bacteriuria depends on the presence of accompanying symptoms and comorbidities. Asymptomatic Asymptomatic bacteriuria generally does not require treatment. Exceptions include those undergoing surgery of the urinary tract, children with vesicoureteral reflux, or others with structural abnormalities of the urinary tract. In many countries, regional guidelines recommend treatment of pregnant women. There is no indication to treat asymptomatic bacteriuria in diabetics, renal transplant recipients, or those with spinal cord injuries. The overuse of antibiotics to treat asymptomatic bacteriuria has many adverse effects, such as an increased risk of diarrhea, the spread of antimicrobial resistance, and infection due to Clostridium difficile. Symptomatic Symptomatic bacteriuria is synonymous with urinary tract infection and is typically treated with antibiotics.
Common choices include nitrofurantoin and trimethoprim/sulfamethoxazole. Epidemiology References External links
Justifiable homicide
The concept of justifiable homicide in criminal law is a defense to culpable homicide (criminal or negligent homicide). Generally, there is a burden of production of exculpatory evidence in the legal defense of justification. In most countries, a homicide is justified when there is sufficient evidence to disprove (under the "beyond a reasonable doubt" standard for criminal charges, and the "preponderance of evidence" standard for claims of wrongdoing, i.e. civil liability) the alleged criminal act or wrongdoing. The key to this legal defense is that it was reasonable for the subject to believe that there was an imminent and otherwise unavoidable danger of death or grave bodily harm to the innocent by the deceased when they committed the homicide. A homicide in this instance is blameless. Common excusing conditions Potentially excusing conditions common to multiple jurisdictions include the following. Capital punishment in places where it is legal. Where a state is engaged in a war with a legitimate casus belli, a combatant may lawfully kill an enemy combatant so long as that combatant is not hors de combat. This principle is embedded in public international law and has been respected by most states around the world. In most countries, it is lawful for a citizen to repel violence with violence to protect someone's life or to prevent the destruction of property. The scope of self-defense varies; some jurisdictions have a duty to retreat rule that disallows this defense if it was safe to flee from potential violence. In some jurisdictions, the castle doctrine allows the use of deadly force in self-defense against an intruder in one's home. Other jurisdictions have stand-your-ground laws that allow use of deadly force in self-defense in a vehicle or in public, without a duty to retreat. Where the person's death is inflicted in the course of effecting a lawful arrest, preventing a lawfully detained person's escape, or quelling a riot or insurrection, when the use of force is "no more than absolutely necessary". The doctrine of necessity allows, for example, a surgeon to separate conjoined twins, killing the weaker twin to allow the stronger twin to survive. (This is not recognized, for example, in England and Wales.) In the United States, the 2005 Unborn Victims of Violence Act changed the legal definition of human fetuses to "unborn children", formally defining feticide as murder (under USC §1111). However, the law retains explicit exceptions which prohibit the prosecution "of any person for conduct relating to an abortion," "of any person for any medical treatment," or "of any woman with respect to her unborn child," thereby preserving the right to an abortion stemming from Roe v. Wade. However, Roe v. Wade was overturned in 2022 by Dobbs v. Jackson Women's Health Organization. Several countries, such as the Netherlands, Belgium, Switzerland, Japan, Canada, and the U.S. states of Oregon and Washington, allow both active and passive euthanasia by law, if justified. The "heat of the moment" defense for crimes of passion: death results from a situation where the defendant is deemed to have lost control. This may be considered a part of the defense of provocation against charges of murder. Based on the idea that all individuals may suddenly and unexpectedly lose control when words are spoken or events occur, jurisdictions differ on whether this should be allowed to excuse liability or merely mitigate to a lesser offense such as manslaughter, and under which circumstances this defense can be used.
In many common law jurisdictions, provocation is a partial defense that converts what would have been murder into manslaughter. A few jurisdictions do not prosecute (Iran, Iraq) or have a lesser penalty (Kuwait, Egypt) for honor killings. European Convention on Human Rights Article 2 Paragraph 2 of the European Convention on Human Rights provides that a death resulting from defending oneself or others, arresting a suspect or fugitive, or suppressing riots or insurrections will not contravene the Article when the use of force involved is "no more than absolutely necessary": 2. Deprivation of life shall not be regarded as inflicted in contravention of this Article when it results from the use of force which is no more than absolutely necessary: (a) in defence of any person from unlawful violence; (b) in order to effect a lawful arrest or to prevent the escape of a person lawfully detained; (c) in action lawfully taken for the purpose of quelling a riot or insurrection. Example of Criminal Procedure Act in South Africa In South Africa, §49 of the Criminal Procedure Act used to provide: (2) Where the person concerned is to be arrested for an offense referred to in Schedule 1 or is to be arrested on the ground of having committed such an offense, and the person authorized under this Act to arrest or to assist in arresting him cannot arrest him or prevent him from fleeing by other means than killing him, the killing shall be deemed to be justifiable homicide. This has now been amended by §7 of the Judicial Matters Second Amendment Act 122 of 1998: (2) If any arrestor attempts to arrest a suspect and the suspect resists the attempt, or flees, or resists the attempt and flees, when it is clear that an attempt to arrest him or her is being made, and the suspect cannot be arrested without the use of force, the arrestor may, in order to effect the arrest, use such force as may be reasonably necessary and proportional in the circumstances to overcome resistance or to prevent the suspect from fleeing: Provided that the arrester is justified in terms of this section in using deadly force that is intended or is likely to cause death or grievous bodily harm to a suspect, only if he believes on reasonable grounds- (a) that the force is immediately necessary for the purpose of protecting the arrestor, any person lawfully assisting the arrestor or any other person from imminent or future death or grievous bodily harm; (b) that there is a substantial risk that the suspect will cause imminent or future death or grievous bodily harm if the arrest is delayed; or (c) that the offence for which the arrest is sought is in progress and is of a forcible and serious nature and involves the use of life threatening violence or a strong likelihood that it will cause grievous bodily harm.
In many states, given a case of self-defense, the defendant is expected to obey a duty to retreat if it is possible to do so. In the states of Alabama, Alaska, Arizona, California, Colorado, Connecticut, Florida, Georgia, Hawaii, Indiana, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Mississippi, Missouri, Montana, New Jersey, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, Tennessee, Texas, West Virginia, Washington, Wyoming and other Castle Doctrine states, there is no duty to retreat in certain situations (depending on the state, this may apply to one's home, business, or vehicle, or to any public place where a person is lawfully present). Preemptive self-defense, in which one kills another on suspicion that the victim might eventually become dangerous, is not justifiable. In the U.S. Supreme Court ruling of District of Columbia v. Heller, the majority held that the Constitution protected the right to the possession of firearms for the purpose of self-defense "and to use that arm for traditionally lawful purposes, such as self-defense within the home". Two other forms of justifiable homicide are unique to the prison system: the death penalty and preventing prisoners from escaping. To quote the California State Penal Code (state law) that covers justifiable homicide: 196. Homicide is justifiable when committed by public officers and those acting by their command in their aid and assistance, either--1. In obedience to any judgment of a competent Court; or, 2. When necessarily committed in overcoming actual resistance to the execution of some legal process, or in the discharge of any other legal duty; or 3. When necessarily committed in retaking felons who have been rescued or have escaped, or when necessarily committed in arresting persons charged with felony, and who are fleeing from justice or resisting such arrest. Although the above text is from California law, many other jurisdictions, like Florida, have similar laws to prevent escapes from custody. Examples include self-defense, prevention of a criminal act, trespassers, and defense of another person. Notable cases involving justifiable homicide Killing of MaKhia Bryant. Her shooting, which prevented her from stabbing another girl, was later deemed a justifiable homicide, with prosecutors noting "Under Ohio law the use of deadly force by a police officer is justified when there exists an immediate or imminent threat of death or serious bodily injury to the officer or another." Sam Cooke. After an inquest and investigation, the courts ruled Cooke's death to be a justifiable homicide. John Dillinger. When BOI agents moved to arrest Dillinger as he exited the theater, he tried to flee. He was shot in the back; the deadly shot was ruled justifiable homicide. Fred Hampton. In January 1970, the Cook County Coroner held an inquest; the jury concluded that Hampton's and Clark's deaths were justifiable homicides. George Jackson (activist). Miller had not been charged with any crime, as a grand jury ruled his actions during the prison fight justifiable homicide. Don King. In 1954, King shot a man in the back after spotting him trying to rob one of his gambling houses; this incident was ruled a justifiable homicide. Killing of Sara-Nicole Morales. In 2021 Morales confronted several motorists who had followed her to her Florida home following a road rage incident.
After she brandished a pistol at them from her front lawn, a motorcyclist she had deliberately struck with her vehicle during the road-rage incident drew his own legally carried weapon and shot her several times; she died shortly after arrival at a local hospital. Police declined to file charges several months later. Eadweard Muybridge. In 1874, Muybridge shot and killed Major Harry Larkyns, his wife's lover, but was acquitted in a controversial jury trial on the grounds of justifiable homicide. Johnny Stompanato homicide. After four hours of testimony and approximately 25 minutes of deliberation, the jury deemed Stompanato's killing a justifiable homicide. Further reading Omphemetse S. "Use of Deadly Force by the South African Police Services Re-visited". References
Birth injury
Birth injury refers to damage or injury to the child before, during, or just after the birthing process. "Birth trauma" refers specifically to mechanical damage sustained during delivery (such as nerve damage and broken bones). The term "birth injury" may be used in two different ways: the ICD-10 uses "birth injury" and "birth trauma" interchangeably to refer to mechanical injuries sustained during delivery; the legal community uses "birth injury" to refer to any damage or injury sustained during pregnancy, during delivery, or just after delivery, including injuries caused by trauma. Birth injuries must be distinguished from birth defects. "Birth defect" refers to damage that occurs while the fetus is in the womb, which may be caused by genetic mutations, infections, or exposure to toxins. There are more than 4,000 types of birth defects. Causes Difficult labor (dystocia) Difficult labor, also known as dystocia or obstructed labor, occurs when the child cannot easily pass through the birth canal. This can result in fetal distress or physical trauma to the child, especially broken clavicles and damage to the brachial plexus nerves. It can also deprive the child of oxygen as the umbilical cord is pinched, potentially causing brain damage or death. Difficult labor may occur because the baby is abnormally large (macrosomia), because the mother's pelvis or birth canal is small or deformed, or because the baby is in an abnormal presentation for the birth (such as breech or transverse presentation). External causes Fetal malformations and birth injuries may occur as a result of exposure to environmental toxins such as mercury or lead. Many medications can also affect the development of the fetus, as can alcohol, tobacco, and illicit drugs. See Environmental toxins and fetal development. See Drugs in pregnancy. Genetic mutations Genetic mutations can cause a wide variety of fetal malformations, ranging from relatively mild cleft lips to severe and even fatal deformities. Infection Maternal infection may be transmitted to the fetus; this is called a vertically transmitted infection. The fetus has a weak immune system, so infections that are relatively minor in adults can be very serious in a developing fetus. In addition, some studies suggest that maternal infections increase the risk of neurodevelopmental disorders, including schizophrenia, in the child. Intrauterine hypoxia Intrauterine hypoxia, or oxygen deprivation in the womb, can cause serious brain damage in the fetus. It most commonly occurs because of damage to or malformation of the umbilical cord or placenta. Intrauterine hypoxia can cause brain damage, including cerebral palsy and other neurological and psychiatric disorders. Maternal health issues Certain maternal health issues can cause birth injuries. Gestational diabetes can cause premature birth, macrosomia, or stillbirth. Pregnancy complications Complications such as placenta previa, placental abruption, placenta accreta, retained placenta, placental insufficiency, placental infarcts, anemia, and preeclampsia can limit the supply of oxygen and nutrients to the fetus, increasing the risk of birth defects. Severe cases may be fatal to the fetus. Common types of birth injury Brachial plexus injury The brachial plexus is the plexus of nerves that lies between the neck and axilla and controls the motion of the arm and hand. The brachial plexus may be stretched and damaged during a difficult delivery. In minor cases, the nerves heal and full use of the hand and arm is recovered.
In more severe cases, the child may sustain permanent nerve damage and may not have full use of the shoulder, arm, or hand. Brachial plexus injuries occur in 1–3 children per 1,000 live births. See Erb's palsy and Klumpke's palsy. Brain damage Brain damage may be caused by a number of factors, including fetal malformation due to genetic mutation or exposure to toxins, intrauterine hypoxia, or physical trauma during delivery. Cerebral palsy is one example of brain damage incurred before or during delivery; about 10,000 children are diagnosed with cerebral palsy every year. Bruising A difficult delivery may lead to bruising, especially on the head and face, from pressure against the mother's pelvis or pressure caused by forceps or a vacuum device (see ventouse) used in delivery. Bone fractures Bone fractures can occur during a difficult delivery. Fracture of the clavicle is the most common birth injury. Meconium aspiration syndrome Meconium is a sticky substance that usually makes up the child's first bowel movement. If the fetus is stressed before or during delivery, the meconium may be released and may mix with the amniotic fluid. If it gets into the child's airways or lungs, it can cause meconium aspiration syndrome. Serious cases may result in pneumonia or a collapsed lung. Legal issues Birth injuries may be unavoidable or they may be attributable to medical malpractice. When a legal claim results, birth injury cases are a subset of medical malpractice cases. Legal claims from birth injury cases typically seek compensation for the medical costs associated with the injury, including ongoing therapeutic and medical support for the child. In order to prevail in a birth injury malpractice case, the plaintiff must show (1) that the medical care provider owed a duty to the child, (2) that the medical care provider breached that duty by failing to meet the accepted standard of care, (3) that the child sustained an injury that was caused by the medical care provider's breach of duty to the child, and (4) that the child sustained damages as a result of the injury. All four elements must be present in order for the plaintiff to win. References
Fasciolopsiasis
Fasciolopsiasis results from an infection by the trematode Fasciolopsis buski, the largest intestinal fluke of humans (up to 7.5 cm in length). Signs and symptoms Most infections are light and almost asymptomatic. In heavy infections, symptoms can include abdominal pain, chronic diarrhea, anemia, ascites, toxemia, and allergic responses; sensitization caused by the absorption of the worm's allergenic metabolites can lead to intestinal obstruction and may eventually cause the death of the patient. Cause The parasite infects an amphibious snail (Segmentina nitidella, Segmentina hemisphaerula, Hippeutis schmackerie, Gyraulus, Lymnaea, Pila, Planorbis (Indoplanorbis)) after being released in infected mammalian feces; metacercariae released from this intermediate host encyst on aquatic plants like water spinach, which are eaten raw by pigs and humans. Water itself can also be infective when drunk unboiled ("Encysted cercariae exist not only on aquatic plants, but also on the surface of the water."). Diagnosis Microscopic identification of eggs, or more rarely of the adult flukes, in the stool or vomitus is the basis of specific diagnosis. The eggs are indistinguishable from those of the very closely related Fasciola hepatica liver fluke, but that is largely inconsequential since treatment is essentially identical for both. Prevention Infection can be prevented by immersing vegetables in boiling water for a few seconds to kill the infective metacercariae, avoiding the use of untreated feces ("nightsoil") as a fertilizer, and maintaining proper sanitation and good hygiene. Additionally, snail control should be attempted. Treatment Praziquantel is the drug of choice for treatment. Treatment is effective in early or light infections. Heavy infections are more difficult to treat. Studies of the effectiveness of various drugs for treatment of children with F. buski have shown tetrachloroethylene as capable of reducing faecal egg counts by up to 99%. Other anthelmintics that can be used include thiabendazole, mebendazole, levamisole and pyrantel pamoate. Oxyclozanide, hexachlorophene and nitroxynil are also highly effective. Epidemiology F. buski is endemic in Asia, including China, Taiwan, Southeast Asia, Indonesia, Malaysia, and India. It has a prevalence of up to 60% in the worst-affected communities of southern and eastern India and mainland China, and it accounts for an estimated 10 million human infections. Infections occur most often in school-aged children or in impoverished areas with a lack of proper sanitation systems. A study from the 1950s found that F. buski was endemic in central Thailand, affecting about 2,936 people as a result of infected aquatic plants called water caltrops and the snail hosts associated with them. The infection, or the eggs which hatch in the aquatic environment, was correlated with water pollution in different districts of Thailand, such as Ayuthaya Province. The incidence of infection was highest in females and in children 10–14 years of age. References Further reading Graczyk TK, Gilman RH, Fried B (2001). "Fasciolopsiasis: is it a controllable food-borne disease?". Parasitol. Res. 87 (1): 80–3. doi:10.1007/s004360000299. PMID 11199855. S2CID 19075125. Mas-Coma S, Bargues MD, Valero MA (2005). "Fascioliasis and other plant-borne trematode zoonoses". Int. J. Parasitol. 35 (11–12): 1255–78. doi:10.1016/j.ijpara.2005.07.010. PMID 16150452. http://www.ijmm.org/text.asp?2017/35/4/551/224440 Fasciolopsiasis in children: Clinical, Sociodemographic Profile and outcome.
Indian Journal of Medical Microbiology, 2017, vol. 35, issue 4, pages 551–554. doi:10.4103/ijmm.IJMM_17_7 External links