<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is California encephalitis orthobunyavirus and explain in detail? [/INST] California encephalitis orthobunyavirus, whose type strain is California encephalitis virus, was discovered in Kern County, California, and causes encephalitis in humans. Encephalitis is an acute inflammation of the brain that can cause symptoms ranging from minor, such as headaches, to severe, such as seizures. Mosquitoes serve as its vector, and for this reason the virus is classified as an arbovirus (arthropod-borne virus).
California encephalitis orthobunyavirus belongs to the order Bunyavirales. La Crosse virus, a member of the same genus, is also a common cause of encephalitis in the United States. Other viruses that cause similar disease symptoms but are genetically unrelated include Saint Louis encephalitis virus and West Nile virus.
Signs and symptoms
The incubation period of California encephalitis is usually 3–7 days. An early symptomatic phase of 1–4 days commonly precedes the onset of encephalitis, manifesting as fever, chills, nausea, vomiting, headache, lethargy, and abdominal pain. The encephalitis is characterized by fever, drowsiness, and lack of mental alertness and orientation. Seizures occur in 50% of children. Focal neurologic findings, such as focal motor abnormalities, paralysis, and irregular or abnormal reflexes, develop in 20% of children. 10% of patients develop coma.
The total duration of illness rarely exceeds 10–14 days. Recurrent unprovoked seizures develop in 20% of patients even after the illness has passed, especially in those who had seizures during the acute illness. In adults, infection is asymptomatic, meaning the patient carries the infection but experiences no symptoms, or only a mild febrile illness. The mortality rate is less than 1%, and most patients with clinical symptoms of encephalitis recover completely. Up to 20% of patients develop behavioral problems or recurrent seizures.
Pathophysiology
Initial infection by the virus and its primary spread cause the onset of non-specific symptoms such as headache and fever. Secondary spread and multiplication of the virus in the central nervous system (CNS) cause symptoms such as stiff neck, lethargy, and seizures. This can progress to encephalitis, in which inflammation of the brain, produced by infection with the virus, damages nerve cells and disrupts signaling between the brain and the body.
After the virus enters the body via a mosquito bite, it undergoes local replication at the skin site where it entered the body. A primary spread of virus occurs, with seeding of the reticuloendothelial system, mainly in the liver, spleen, and lymph nodes. With ongoing replication of the virus, a secondary spread occurs, with seeding of the CNS. Not all cases reach this stage, depending on the efficiency of viral replication at the different stages and the degree of virus spread. The California encephalitis virus invades the CNS through either the cerebral capillary endothelial cells or the choroid plexus.
Treatment
Treatments are given to manage the symptoms the patient is having. In patients who are very sick, supportive treatment, such as mechanical ventilation, is equally important. Corticosteroids are used to reduce brain swelling and inflammation. Sedatives may be needed for irritability or restlessness. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Delayed ejaculation and explain in detail? [/INST] Delayed ejaculation (DE) describes a man's inability or persistent difficulty in achieving orgasm, despite typical sexual desire and sexual stimulation. Generally, a man can reach orgasm within a few minutes of active thrusting during sexual intercourse, whereas a man with delayed ejaculation either does not have orgasms at all or cannot have an orgasm until after prolonged intercourse which might last for 30–45 minutes or more. Delayed ejaculation is closely related to anorgasmia.
In the Diagnostic and Statistical Manual of Mental Disorders (DSM), fifth edition, the definition of DE requires one of two symptoms: either a marked delay in, or a marked infrequency or absence of, ejaculation on 75% to 100% of occasions of partnered sexual activity for at least 6 months, without the individual desiring delay, and causing significant distress to the individual. DE is meant to describe any and all of the ejaculatory disorders that result in a delay or absence of ejaculation. The Third International Consultation on Sexual Medicine (ICSM) defined DE as an intravaginal ejaculation latency time (IELT) threshold beyond 20 to 25 minutes of sexual activity, together with negative personal consequences such as bother or distress. Of note, most men's IELT ranges from approximately 4 to 10 minutes. While ejaculatory latency and control were significant criteria differentiating men with DE from those without ejaculatory disorders, bother/distress did not emerge as a significant factor. Delayed ejaculation is the least common of the male sexual dysfunctions, and can result as a side effect of some medications. In one survey, 8% of men reported being unable to achieve orgasm over a two-month period or longer in the previous year. DEs are either primary and lifelong or acquired; acquired DEs may be situational. While most men do experience occasional or short-term delayed ejaculation issues, the prevalence of lifelong DE and acquired long-term DE is estimated at around 1% and 4%, respectively.
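Read together, the DSM-5 and ICSM-3 definitions above reduce to a handful of numeric thresholds (symptom frequency, duration, and IELT). The sketch below encodes those thresholds exactly as quoted in this section, purely as an illustration; it is not a clinical tool, and the function names and parameter structure are hypothetical.

```python
# Illustrative sketch only: the DSM-5 and ICSM-3 thresholds quoted above,
# expressed as screening heuristics. Not a diagnostic instrument.

def meets_dsm5_de_criteria(delay_fraction: float,
                           duration_months: float,
                           desires_delay: bool,
                           significant_distress: bool) -> bool:
    """DSM-5: marked delay/infrequency/absence of ejaculation on 75-100% of
    partnered occasions for at least 6 months, not desired, causing distress."""
    return (delay_fraction >= 0.75
            and duration_months >= 6
            and not desires_delay
            and significant_distress)

def meets_icsm3_de_criteria(ielt_minutes: float, bother_or_distress: bool) -> bool:
    """ICSM-3: IELT beyond roughly 20-25 minutes plus negative personal
    consequences; the 20-minute lower bound is used here. Typical IELT
    is about 4-10 minutes."""
    return ielt_minutes > 20 and bother_or_distress

print(meets_dsm5_de_criteria(0.8, 7, False, True))  # True
print(meets_icsm3_de_criteria(30, True))            # True
```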
Signs and symptoms
Delayed ejaculation can be mild (men who still experience orgasm during intercourse, but only under certain conditions), moderate (cannot ejaculate during intercourse, but can during fellatio or manual stimulation), severe (can ejaculate only when alone), or most severe (cannot ejaculate at all). All forms may result in a sense of sexual frustration. In most cases, delayed ejaculation presents as a condition in which the man can climax and ejaculate only during masturbation, but not during sexual intercourse.
Causes
The etiologies of delayed ejaculation can be age-related, organic, psychological, or pharmacological. Primary, lifelong DEs are poorly understood and only rarely explained by congenital anatomic causes (viz., Müllerian duct cyst, Wolffian duct abnormalities, prune belly syndrome, imperforate anus, congenital ejaculatory duct obstruction, genetic abnormalities including cystic fibrosis, etc.)
Anatomic causes (acquired ejaculatory duct obstruction)
Infective/Inflammation (residual damage from acute infections, chronic infections, urethritis, prostatitis, orchitis, genitourinary tuberculosis, schistosomiasis, lichen sclerosis, etc.)
Neurogenic causes (diabetic autonomic neuropathy, spinal cord or nerve root injury from trauma or disc prolapse, multiple sclerosis, etc.). DE can be due to the injury to pelvic nerves responsible for orgasm from trauma as a result of pelvic surgery (viz., prostate surgery including transurethral resection of prostate and bladder neck incision, proctocolectomy, bilateral sympathectomy, abdominal aortic aneurysmectomy, para-aortic lymphadenectomy etc.). Some men report a lack of sensation in the nerves of the glans penis, which may or may not be related to external factors, including a history of circumcision.
Endocrine (hypogonadism, pituitary disorders such as hyperprolactinaemia and Cushing's disease, thyroid disorders, etc.). Although low testosterone levels had been considered a risk factor in the past, more recent studies have not confirmed any association between ejaculation times and serum testosterone levels.
Delayed ejaculation is a possible side effect of alcohol and certain medications, including antipsychotics, antidepressants including selective serotonin reuptake inhibitors (SSRIs), opiates such as morphine or oxycodone, many benzodiazepines such as Valium or Xanax, and certain antihypertensives including thiazide diuretics, alpha-adrenergic blockers and ganglion blockers. Although they may increase sexual desire, stimulants such as amphetamines and cocaine have an inhibitory effect on ejaculation, and can cause erectile dysfunction and reduced penile sensitivity via their vasoconstrictive effects.
Psychological (acute psychological distress, relationship distress, psychosexual skill deficit, disconnect between arousal and sexual situations, masturbation style and frequency, etc.)
Psychological and lifestyle factors have been discussed as potential contributors, including insufficient sleep, distraction due to worry, distraction from the environment, anxiety about pleasing their partner and anxiety about relationship problems.
Idiosyncratic masturbation and lack of desire for stimuli are also proposed risk factors for DE. The word "idiosyncratic" means something that pertains to an individual; in this case, it refers to a person whose way of masturbating is unique (or out of the ordinary). Most men stroke their penis with their hand in a way that vaguely imitates the stimulation from intercourse. An idiosyncratic style is defined as a technique not easily duplicated by a partner utilizing their hand, mouth, anus, or vagina. In these patients delayed ejaculation is an adaptation to a certain masturbatory technique. Lawrence Sank (1998) wrote about "traumatic masturbatory syndrome," in which the sensations a man feels when masturbating may bear little resemblance to the sensations he experiences during intercourse. Factors such as pressure, angle, and grip during masturbation can make for an experience so different from sex with a partner that the ability to ejaculate is reduced or eliminated. One in three men with DE report idiosyncratic masturbation. High-frequency masturbation is also associated with prolonged DE in penetrative sex with the partner, accounting for another one-third of cases. Fantasy/partner disparity – that is to say, variant sexual fantasy during masturbation that was not incorporated into sex with their partner – accounted for one in five DEs. According to DSM-5-TR, "Delayed ejaculation is associated with highly frequent masturbation, use of masturbation techniques not easily duplicated by a partner, and marked disparities between sexual fantasies during masturbation and the reality of sex with a partner."
Intravaginal ejaculation disorder is a condition described mainly in Japan; it is very similar to "traumatic masturbatory syndrome," being likewise mainly caused by intense masturbation.
Diagnosis
Diagnosis and management of DE warrant one of the most comprehensive medical evaluations in sexual health assessment, comprising a full medical and sexual history along with a detailed physical examination. Understanding the quality of the sexual response cycle (desire, arousal, ejaculation, orgasm, and refractory period); details of the ejaculatory response, sensation, frequency, and sexual activity/techniques; the cultural context and history of the disorder; the partner's assessment of the disorder and whether the partner has any sexual dysfunction themselves; and the overall satisfaction of the sexual relationship are all important to garner during history-taking. A relatively normal latency to orgasm with self-masturbation, as compared with insertive or intravaginal ejaculation latency time, reasonably rules out most of the organic causes of DE.
Treatment
Primary, lifelong DEs are poorly understood and hence less well studied. Organic causes in acquired DEs should be addressed promptly. Retraining masturbatory practices and re-calibrating the mismatch of sexual fantasies with arousal are essential when these are contributing to DE. Techniques geared towards reduction of anxiety are important skills that can help overcome performance anxiety, as this can often interrupt the natural progression from erection through orgasm.
Sex therapy
Therapy usually involves homework assignments and exercises intended to help a man get used to having orgasms through insertional intercourse, whether vaginal, anal, or oral, that is, in a way to which he is not accustomed. Commonly, the couple is advised to go through three stages. At the first stage, a man masturbates in the presence of his partner </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Kashin–Beck disease and explain in detail? [/INST] Kashin–Beck disease (KBD) is a chronic, endemic type of osteochondropathy (disease of the bone) that is mainly distributed from northeastern to southwestern China, including 15 provinces. As of 2011, Tibet has the highest incidence rate of KBD in China. Southeast Siberia and North Korea are other affected areas. KBD usually involves children ages 5–15. To date, more than a million individuals have had KBD. The symptoms of KBD include joint pain, morning stiffness in the joints, disturbances of flexion and extension in the elbows, enlarged inter-phalangeal joints, and limited motion in many joints of the body. Death of cartilage cells in the growth plate and articular surface is the basic pathologic feature; this can result in growth retardation and secondary osteoarthrosis. Histological diagnosis of KBD is particularly difficult; clinical and radiological examinations have proved to be the best means for identifying KBD. Little is known about the early stages of KBD before the visible appearance of the disease becomes evident in the destruction of the joints. This disease has been recognized for over 150 years but its cause has not yet been completely defined. Currently, the accepted potential causes of KBD include mycotoxins present in grain, trace mineral deficiency in nutrition, and high levels of fulvic acid in drinking water. Selenium and iodine have been considered the major deficiencies associated with KBD. Mycotoxins produced by fungi can contaminate grain, which may cause KBD because mycotoxins cause the production of free radicals. T-2 is the mycotoxin implicated in KBD, produced by members of several fungal genera. T-2 toxin can cause lesions in hematopoietic, lymphoid, gastrointestinal, and cartilage tissues, especially in physeal cartilage. Fulvic acid present in drinking water damages cartilage cells. Selenium supplementation in selenium-deficient areas has been shown to prevent this disease. However, selenium supplementation in some areas showed no significant effect, meaning that deficiency of selenium may not be the dominant cause in KBD. Recently, a significant association between SNP rs6910140 of COL9A1 and Kashin–Beck disease was discovered genetically, suggesting a role of COL9A1 in the development of Kashin–Beck disease.
Cause
The cause of KBD remains controversial. Studies of the pathogenesis and risk factors of KBD have proposed selenium deficiency, inorganic (e.g. manganese, phosphate) and organic matter (humic and fulvic acids) in drinking water, and fungi on self-produced storage grain (Alternaria sp., Fusarium sp.) producing trichothecene (T-2) mycotoxins.
Most authors accept that the cause of KBD is multifactorial, selenium deficiency being the underlying factor that predisposes the target cells (chondrocytes) to oxidative stress from free-radical carriers, such as mycotoxins in storage grain and fulvic acid in drinking water.
In Tibet, epidemiological studies carried out in 1995–1996 by MSF and colleagues showed that KBD was associated with iodine deficiency and with fungal contamination of barley grains by Alternaria sp., Trichothecium sp., Cladosporium sp. and Drechslera sp. Indications existed as well with respect to the role of organic matter in drinking water.
A severe selenium deficiency was documented as well, but selenium status was not associated with the disease, suggesting that selenium deficiency alone could not explain the occurrence of KBD in the villages under study. An association with the gene Peroxisome Proliferator-Activated Receptor Gamma Coactivator 1 Beta (PPARGC1B) has been reported. This gene is a transcription factor and mutations in this gene would be expected to affect several other genes.
Prevention
Prevention of Kashin–Beck disease has a long history. Intervention strategies were mostly based on one of the three major theories of its cause.
Selenium supplementation, with or without additional antioxidant therapy (vitamin E and vitamin C) has been reported to be successful, but in other studies no significant decrease could be shown compared to a control group. Major drawbacks of selenium supplementation are logistic difficulties (including daily or weekly intake and drug supply), potential toxicity (in case of less well-controlled supplementation strategies), associated iodine deficiency (that should be corrected before selenium supplementation to prevent further deterioration of thyroid status) and low compliance. The latter was certainly the case in Tibet, where selenium supplementation has been implemented from 1987 to 1994 in areas of high endemicity.
With the mycotoxin theory in mind, baking of grains before storage was proposed in Guangxi province, but results have not been reported in the international literature. Changing the grain source has been reported to be effective in Heilongjiang Province and North Korea.
With respect to the role of drinking water, changing of water sources to deep well water has been reported to decrease the X-ray metaphyseal detection rate in different settings.
In general, however, the effect of preventive measures remains controversial, due to methodological problems (no randomised controlled trials), lack of documentation, or, as discussed above, inconsistency of results.
Treatment
Treatment of KBD is palliative. Surgical corrections have been made with success by Chinese and Russian orthopedists. By the end of 1992, Médecins Sans Frontières—Belgium started a physical therapy programme aiming at alleviating the symptoms of KBD patients with advanced joint impairment and pain (mainly adults), in Nyemo county, Lhasa prefecture </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Galactorrhea and explain in detail? [/INST] Galactorrhea (also spelled galactorrhoea) (galacto- + -rrhea) or lactorrhea (lacto- + -rrhea) is the spontaneous flow of milk from the breast, unassociated with childbirth or nursing.
Galactorrhea is reported to occur in 5–32% of women. Much of the difference in reported incidence can be attributed to different definitions of galactorrhea. Although frequently benign, it may be caused by serious underlying conditions and should be properly investigated. Galactorrhea also occurs in males, newborn infants and adolescents of both sexes.
Causes
Galactorrhea can take place as a result of dysregulation of certain hormones </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Stepsibling and explain in detail? [/INST] Step-siblings are children of two different families who have been joined by marriage. A male step-sibling is a stepbrother and a female one is a stepsister. The step-sibling relationship is created through law and is not a blood relation.
Step-siblings are sometimes abbreviated informally as stepsibs.
Culture
In many fairy tales, the central character has a stepmother and the step-siblings serve as an extension of their mother </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Stillbirth and explain in detail? [/INST] Stillbirth is typically defined as fetal death at or after 20 or 28 weeks of pregnancy, depending on the source. It results in a baby born without signs of life. A stillbirth can result in feelings of guilt or grief in the mother. The term is in contrast to miscarriage, which is an early pregnancy loss, and Sudden Infant Death Syndrome, where the baby dies a short time after being born alive. Often the cause is unknown. Causes may include pregnancy complications such as pre-eclampsia and birth complications, problems with the placenta or umbilical cord, birth defects, infections such as malaria and syphilis, and poor health in the mother. Risk factors include a mother's age over 35, smoking, drug use, use of assisted reproductive technology, and first pregnancy. Stillbirth may be suspected when no fetal movement is felt. Confirmation is by ultrasound. Worldwide prevention of most stillbirths is possible with improved health systems. Around half of stillbirths occur during childbirth, this being more common in the developing than the developed world. Otherwise, depending on how far along the pregnancy is, medications may be used to start labor or a type of surgery known as dilation and evacuation may be carried out. Following a stillbirth, people are at higher risk of another one; however, most subsequent pregnancies do not have similar problems. Depression, financial loss, and family breakdown are known complications. Worldwide in 2019, there were an estimated 2.0 million stillbirths that occurred after 28 weeks of pregnancy (about 1 for every 72 births). They occur most commonly in low-income settings, particularly the Middle East and Sub-Saharan Africa. In the United States, for every 167 births there is one stillbirth. Stillbirth rates have declined, though more slowly since the 2000s.
Causes
As of 2016, there is no international classification system for stillbirth causes. The causes of a large percentage of stillbirths are unknown, even in cases where extensive testing and an autopsy have been performed. A rarely used term to describe these is "sudden antenatal death syndrome" (SADS), a phrase coined in 2000. Many stillbirths occur at full term to apparently healthy mothers, and a postmortem evaluation reveals a cause of death in about 40% of autopsied cases. About 10% of cases are believed to be due to obesity, high blood pressure, or diabetes. Other risk factors include:
bacterial infection, like syphilis
malaria
birth defects, especially pulmonary hypoplasia
chromosomal aberrations
growth restriction
intrahepatic cholestasis of pregnancy
maternal diabetes
maternal consumption of recreational drugs (such as alcohol, nicotine, etc.) or pharmaceutical drugs contraindicated in pregnancy
postdate pregnancy
placental abruptions
physical trauma
radiation poisoning
Rh disease
celiac disease
female genital mutilation
umbilical cord accidents
Prolapsed umbilical cord – Prolapse of the umbilical cord happens when the fetus is not in a correct position in the pelvis. Membranes rupture and the cord is pushed out through the cervix. When the fetus pushes on the cervix, the cord is compressed and blocks blood and oxygen flow to the fetus. The mother has approximately 10 minutes to get to a doctor before there is any harm done to the fetus.
Monoamniotic twins – These twins share the same placenta and the same amniotic sac and therefore can interfere with each other's umbilical cords. When entanglement of the cords is detected, it is highly recommended to deliver the fetuses as early as 31 weeks.
Umbilical cord length – A short umbilical cord (<30 cm) can affect the fetus in that fetal movements can cause cord compression, constriction, and rupture. A long umbilical cord (>72 cm) can affect the fetus depending on the way the fetus interacts with the cord. Some fetuses grasp the umbilical cord, but it is not yet known whether a fetus is strong enough to compress and stop blood flow through the cord. Also, an active fetus, one that frequently repositions itself in the uterus, can accidentally entangle itself with the cord. A hyperactive fetus should be evaluated with ultrasound to rule out cord entanglement.
Cord entanglement - The umbilical cord can wrap around an extremity, the body or the neck of the fetus. When the cord is wrapped around the neck of the fetus, it is called a nuchal cord. These entanglements can cause constriction of blood flow to the fetus. These entanglements can be visualized with ultrasound.
Torsion – This term refers to the twisting of the umbilical cord around itself. Torsion of the umbilical cord is very common (especially in equine stillbirths) but it is not a natural state of the umbilical cord. The umbilical cord can be untwisted at delivery. The average cord has three twists.
Smoke inhalation – If a pregnant woman is trapped in a building fire, the smoke and fumes can kill the unborn child.
A pregnant woman sleeping on her back after 28 weeks of pregnancy may be a risk factor for stillbirth. After a stillbirth there is a 2.5% risk of another stillbirth in the next pregnancy (an increase from 0.4%). In the United States, the highest rates of stillbirths happen in pregnant women who:
are of low socioeconomic status
are aged 35 years or older
have chronic medical conditions such as diabetes, high blood pressure, high cholesterol, etc.
are African-American
have previously lost a child during pregnancy
have multiple children at a time (twins, triplets, etc.)
Diagnosis
It is unknown how much time is needed for a fetus to die. Fetal behavior is consistent, and a change in the fetus's movements or sleep-wake cycles can indicate fetal distress. A decrease or cessation in sensations of fetal activity may be an indication of fetal distress or death, though it is not entirely uncommon for a healthy fetus to exhibit such changes, particularly near the end of a pregnancy, when there is considerably less space in the uterus than earlier in pregnancy for the fetus to move about. Still, medical examination, including a nonstress test, is recommended in the event of any change in the strength or frequency of fetal movement, especially a complete cessation; most midwives and obstetricians recommend the use of a kick chart to assist in detecting any changes. Fetal distress or death can be confirmed or ruled out via fetoscopy/doptone, ultrasound, and/or electronic fetal monitoring. If the fetus is alive but inactive, extra attention will be given to the placenta and umbilical cord during ultrasound examination to ensure that there is no compromise of oxygen and nutrient delivery. Some researchers have tried to develop models to identify, early on, pregnant women who may be at high risk of having a stillbirth.
Definition
There are a number of definitions for stillbirth. To allow comparison, the World Health Organization uses the ICD-10 definitions and recommends that any baby born without signs of life at greater than or equal to 28 completed weeks of gestation be classified as a stillbirth. The WHO uses the ICD-10 definition of "late fetal deaths" as its definition of stillbirth. Other organisations recommend that any combination of greater than 16, 20, 22, 24 or 28 weeks gestational age or 350 g, 400 g, 500 g or 1000 g birth weight may be considered a stillbirth. The term is often used in distinction to live birth (the baby was born alive, even if it died shortly thereafter) or miscarriage (early pregnancy loss). The word miscarriage is often used incorrectly to describe stillbirths. The term is mostly used in a human context; however, the same phenomenon can occur in all species of placental mammals.
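Since every definition above is a pair of cut-offs (gestational age and/or birth weight), classification can be pictured as a simple threshold check. The following sketch uses the WHO/ICD-10 cut-off of 28 completed weeks as the default and lets the caller substitute any of the other combinations listed; the function name and structure are hypothetical, and this is an illustration of the definitions quoted here, not an official coding rule.

```python
from typing import Optional

# Illustrative sketch: stillbirth classification as a threshold check.
# Cut-offs differ by organisation; WHO/ICD-10 (>= 28 completed weeks)
# is the default here. Not an official coding rule.

def classify_fetal_death(gestation_weeks: float,
                         birth_weight_g: Optional[float] = None,
                         week_threshold: float = 28.0,
                         weight_threshold_g: Optional[float] = None) -> str:
    """Return 'stillbirth' if the gestational-age or birth-weight
    threshold in use is met; otherwise 'miscarriage (early pregnancy loss)'."""
    if gestation_weeks >= week_threshold:
        return "stillbirth"
    if (weight_threshold_g is not None and birth_weight_g is not None
            and birth_weight_g >= weight_threshold_g):
        return "stillbirth"
    return "miscarriage (early pregnancy loss)"

print(classify_fetal_death(29))                # WHO rule: stillbirth
print(classify_fetal_death(22, 550, 24, 500))  # 24-week/500 g registry: stillbirth (by weight)
```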
Constricted umbilical cord
When the umbilical cord is constricted (q.v. "accidents" above), the fetus experiences periods of hypoxia and may respond by unusually high periods of kicking or struggling, to free the umbilical cord. These are sporadic if constriction is due to a change in the fetus's or mother's position, and may become worse or more frequent as the fetus grows. Extra attention should be given if mothers experience large increases in kicking compared with previous pregnancies, especially when increases correspond to position changes. Regulating high blood pressure, diabetes and drug use may reduce the risk of a stillbirth. Umbilical cord constriction may be identified and observed by ultrasound, if requested. Some maternal factors are associated with stillbirth, including being age 35 or older, having diabetes, having a history of addiction to illegal drugs, being overweight or obese, and smoking cigarettes in the three months before getting pregnant.
Treatment
Fetal death in utero does not present an immediate health risk to the pregnant woman, and labour will usually begin spontaneously after two weeks, so the pregnant woman may choose to wait and bear the fetal remains vaginally. After two weeks, the pregnant woman is at risk of developing blood clotting problems, and labor induction is recommended at this point. In many cases, the pregnant woman will find the idea of carrying the dead fetus traumatizing and will elect to have labor induced. Caesarean birth is not recommended unless complications develop during vaginal birth. How the diagnosis of stillbirth is communicated by healthcare workers may have a long-lasting and deep impact on parents. People need to heal physically after a stillbirth just as they do emotionally. In Ireland, for example, people are offered a cuddle cot, a cooled cot which allows them to spend a number of days with their child before burial or cremation.
Delivery
In single stillbirths, common practice is to induce labor for the health of the mother due to possible complications such as exsanguination. Induction and labor can take 48 hours. In the case of various complications such as pre-eclampsia, infections, or multiples (twins), an emergency Caesarean section may be performed.
Epidemiology
The average stillbirth rate in the United States is approximately 1 in 160 births, which is roughly 26,000 stillbirths each year. In Australia, England, Wales, and Northern Ireland, the rate is approximately 1 in every 200 births; in Scotland, 1 in 167. Rates of stillbirth in the United States have decreased by about two-thirds since the 1950s. The vast majority of stillbirths worldwide (98%) occur in low- and middle-income countries, where medical care can be of low quality or unavailable. Reliable estimates calculate that, yearly, about 2.6 million stillbirths occur worldwide during the third trimester. Stillbirths were not included in the Global Burden of Disease Study, which records worldwide deaths from various causes, until 2015.
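Each rate quoted above pairs a ratio with an absolute count, so the implied total number of births can be checked by simple arithmetic. The derived totals below are back-of-the-envelope inferences from the figures in this section, not sourced statistics.

```python
# Back-of-the-envelope check of the rates quoted in this section.
# Inputs come from the text; derived totals are inferences, not sourced data.

us_rate = 1 / 160            # approximate US stillbirth rate (per birth)
us_stillbirths = 26_000      # approximate US stillbirths per year
implied_us_births = us_stillbirths / us_rate
print(f"Implied US births/year: {implied_us_births:,.0f}")        # ~4,160,000

world_stillbirths = 2_000_000   # 2019 estimate, after 28 weeks (see above)
world_rate = 1 / 72             # about 1 per 72 births
implied_world_births = world_stillbirths / world_rate
print(f"Implied worldwide births/year: {implied_world_births:,.0f}")  # ~144,000,000
```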
Society and culture
The way people view stillbirths has changed dramatically over time; however, its economic and psychosocial impact is often underestimated. In the early 20th century, when a stillbirth occurred, the baby was taken and discarded and the parents were expected to immediately let go of the attachment and try for another baby. In many countries, parents are expected by friends and family members to recover from the loss of an unborn baby very soon after it happens. Societally-mediated complications such as financial hardship and depression are among the more common results. A stillbirth can have significant psychological effects on the parents, notably causing feelings of guilt in the mother. Further psycho-social effects on parents include apprehension, anger, feelings of worthlessness and not wanting to interact with other people, with these reactions sometimes carried over into pregnancies that occur after the stillbirth. Men also suffer psychologically after stillbirth, although they are more likely to hide their grief and feelings and try to act strong, with the focus on supporting their partner.
Legal definitions
Australia
In Australia, stillbirth is defined as a baby born with no signs of life that weighs more than 400 grams, or more than 20 weeks in gestation. They legally must have their birth registered.
Austria
In Austria, a stillbirth is defined as a birth of a child of at least 500g weight without vital signs, e.g. blood circulation, breath or muscle movements.
Canada
Beginning in 1959, "the definition of a stillbirth was revised to conform, in substance, to the definition of fetal death recommended by the World Health Organization". The definition of "fetal death" promulgated by the World Health Organization in 1950 is as follows:
"Fetal death" means death prior to the complete expulsion or extraction from its mother of a product of human conception, irrespective of the duration of pregnancy and which is not an induced termination of pregnancy. The death is indicated by the fact that after such expulsion or extraction, the fetus does not breathe or show any other evidence of life, such as beating of the heart, pulsation of the umbilical cord, or definite movement of voluntary muscles. Heartbeats are to be distinguished from transient cardiac contractions; respirations are to be distinguished from fleeting respiratory efforts or gasps.
Germany
In Germany, a stillbirth is defined as the birth of a child of at least 500g weight without blood circulation or breath. Details for burial vary amongst the federal states.
Republic of Ireland
Since 1 January 1995, stillbirths occurring in the Republic of Ireland must be registered; stillbirths that occurred before that date can also be registered but evidence is required. For the purposes of civil registration, s.1 of the Stillbirths Registration Act 1994 refers to "...a child weighing at least 500 grammes, or having reached a gestational age of at least 24 weeks who shows no signs of life."
Netherlands
In the Netherlands, stillbirth is defined differently by the Central Bureau of Statistics (CBS) and the Dutch Perinatal Registry (Stichting PRN). The birth and mortality numbers from the CBS include all livebirths, regardless of gestational duration, and all stillbirths from 24 weeks of gestation and onwards. In the Perinatal Registry, gestational duration of both liveborn and stillborn children is available. They register all liveborn and stillborn children from 22, 24 or 28 weeks of gestation and onwards (dependent on the report: fetal, neonatal or perinatal mortality). Therefore, data from these institutions on (still)births cannot simply be compared one-to-one.
United Kingdom
The registration of stillbirths has been required in England and Wales from 1927 and in Scotland from 1939 but is not required in Northern Ireland. Sometimes a pregnancy is terminated deliberately during a late phase, for example due to congenital anomaly. UK law requires these procedures to be registered as "stillbirths".
England and Wales
For the purposes of the Births and Deaths Registration Act 1926 (as amended), section 12 contains the definition: "still-born" and "still-birth" shall apply to any child which has issued forth from its mother after the twenty-fourth week of pregnancy and which did not at any time after being completely expelled from its mother, breathe or show any other signs of life. A similar definition is applied within the Births and Deaths Registration Act 1953 (as amended), contained in s.41.
The above definitions apply within those Acts thus other legislation will not necessarily be in identical terms.
s.2 of the 1953 Act requires that registration of a birth takes place within 42 days of the birth except where an inquest takes place or the child has been "found exposed" in which latter case the time limit runs from the time of finding.
Extracts from the register of stillbirths are restricted to those who have obtained consent from the Registrar General for England and Wales.
Scotland
Section 56(1) of the Registration of Births, Deaths and Marriages (Scotland) Act 1965 (as amended) contains the definition: "still-born child" means a child which has issued forth from its mother after the twenty-fourth week of pregnancy and which did not at any time after being completely expelled from its mother breathe or show any other signs of life, and the expression "still-birth" shall be construed accordingly. Section 21(1) of the same Act requires that: Except so far as otherwise provided by this section or as may be prescribed, the provisions of this Part of this Act shall, so far as applicable, apply to still-births in like manner as they apply to births of children born alive. In the general case, s.14 of the Act requires that a birth has to be registered within 21 days of the birth or of the child being found.
Unlike the registers for births, marriages, civil partnerships and deaths, the register of still-births is not open to public access and issue of extracts requires the permission of the Registrar General for Scotland.
Northern Ireland
In Northern Ireland, the Births and Deaths Registration (Northern Ireland) Order 1976, as amended contains the definition:"still-birth" means the complete expulsion or extraction from its mother after the twenty-fourth week of pregnancy of a child which did not at any time after being completely expelled or extracted breathe or show any other evidence of life.Registration of stillbirths can be made by a relative or certain other persons involved with the stillbirth but it is not compulsory to do so. Registration takes place with the District Registrar for the Registration District where the still-birth occurred or for the District in which the mother is resident. A stillbirth certificate will be issued to the registrant with further copies only available to those obtaining official consent for their issue. Registration may be made within three months of the still-birth.
United States
In the United States, there is no standard definition of the term stillbirth. In the U.S., the Born-Alive Infants Protection Act of 2002 specifies that any breathing, heartbeat, pulsating umbilical cord or confirmed voluntary muscle movement indicates live birth rather than stillbirth. The Centers for Disease Control and Prevention collects statistical information on "live births, fetal deaths, and induced termination of pregnancy" from 57 reporting areas in the United States. Each reporting area has different guidelines and definitions for what is being reported; many do not use the term "stillbirth" at all. The federal guidelines suggest (at page 1) that fetal death and stillbirth can be interchangeable terms. The CDC definition of "fetal death" is based on the definition promulgated by the World Health Organization in 1950 (see section above on Canada). Researchers are learning more about the long-term psychiatric sequelae of traumatic birth and believe the effects may be intergenerational. The federal guidelines recommend reporting those fetal deaths whose birth weight is over 12.5 oz (350 g), or those more than 20 weeks of gestation. Forty-one areas use a definition very similar to the federal definition, thirteen areas use a shortened definition of fetal death, and three areas have no formal definition of fetal death </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Aura (symptom) and explain in detail? [/INST] An aura is a perceptual disturbance experienced by some with epilepsy or migraine. An epileptic aura is a seizure. Epileptic and migraine auras are due to the involvement of specific areas of the brain, which are those that determine the symptoms of the aura. Therefore, if the visual area is affected, the aura will consist of visual symptoms, while if a sensory area is affected, sensory symptoms will occur.
Epileptic auras are subjective sensory or psychic phenomena due to a focal seizure, i.e. a seizure that originates from that area of the brain responsible for the function which then expresses itself with the symptoms of the aura. It is important because it makes it clear where the alteration causing the seizure is located. An epileptic aura is in most cases followed by other manifestations of a seizure, for example a convulsion, since the epileptic discharge spreads to other parts of the brain. Rarely it remains isolated. Auras, when they occur, allow some people who have epilepsy time to prevent injury to themselves and/or others when they lose consciousness.
Migraine
The aura of migraine is visual in the vast majority of cases, because dysfunction starts from the visual cortex. The aura is usually followed, after a time varying from minutes to an hour, by the migraine headache. However, the migraine aura can manifest itself in isolation, that is, without being followed by headache. The aura can stay for the duration of the migraine; depending on the type of aura, it can leave the person disoriented and confused. It is common for people with migraines to experience more than one type of aura during the migraine. Most people who have auras have the same type of aura every time. Auras can also be confused with sudden onset of panic, panic attacks or anxiety attacks, which creates difficulties in diagnosis </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Occupational burnout and explain in detail? [/INST] According to the World Health Organization (WHO), occupational burnout is a syndrome resulting from chronic work-related stress, with symptoms characterized by "feelings of energy depletion or exhaustion; increased mental distance from one's job, or feelings of negativism or cynicism related to one's job; and reduced professional efficacy". While burnout may influence health and can be a reason for people contacting health services, it is not itself classified by the WHO as a medical condition or mental disorder. The WHO additionally states that "Burn-out refers specifically to phenomena in the occupational context and should not be applied to describe experiences in other areas of life."
History
According to Wolfgang Kaschka, "Burnout as a phenomenon has probably existed at all times and in all cultures." He notes that the condition is described in the Book of Exodus.
Gordon Parker believes the ancient European concept of acedia refers to burnout, and not to depression as many others believe. It has been suggested that the first printed use of the term "burnout" in English was in one of Shakespeare's sonnets in 1599, referring to a woman's love burning out. The sonnet in question is one of Shakespeare's most famous.
In 1869, New York neurologist George Beard used the term "neurasthenia" to describe a very broad condition caused by the exhaustion of the nervous system, which was thought to be particularly found in "civilized, intellectual communities." The concept soon became popular, and many in the United States believed themselves to have it. Some came to call it "Americanitis". The rest cure was a commonly prescribed treatment (though there were many others). Beard yet further broadened the potential symptoms of neurasthenia over time, so that almost any symptom or behaviour could be deemed to be caused by it. Don R. Lipsitt would later wonder if the term "burnout" was similarly too broadly defined to be useful. In 1961, Graham Greene published the novel A Burnt-Out Case, the story of an architect who became greatly fatigued by his work, and who took much time to recover. In 1969, HB Bradley used the term "burnout" in a criminology paper to describe the fatigued staff at a centre for treating young adult offenders. This has been cited as the first known academic work to use the term for this concept. In 1974, Herbert Freudenberger, an American psychologist, used the term in his academic paper "Staff Burn-Out." The paper was based on his qualitative observations of the volunteer staff (including himself) at a free clinic for drug addicts. He characterized burnout by a set of symptoms that includes exhaustion resulting from work's excessive demands as well as physical symptoms such as headaches and sleeplessness, "quickness to anger", and closed thinking. He observed that the burned-out worker "looks, acts, and seems depressed." After the publication of Freudenberger's paper, interest in the concept grew.
In 1976, Christina Maslach noted the term being used by California lawyers working with the poor, and began to study the concept. In 1981, Maslach and Susan E. Jackson published an instrument for assessing burnout, the Maslach Burnout Inventory (MBI). It was the first such instrument of its kind and the most widely used burnout instrument. The two researchers described burnout in terms of emotional exhaustion, depersonalization (feeling low-empathy towards other people in an occupational setting), and reduced feelings of work-related personal accomplishment. In 2010, the Swedish Board of Health and Welfare adopted a refined conceptualisation of severe burnout it described as "exhaustion disorder." In 2015, the World Health Organization adopted a conceptualisation of burnout that is consistent with Maslach's. It adopted a modified version of this in 2022. However, burnout "is not itself classified by the WHO as a medical condition or mental disorder." As of 2017, nine European countries considered burnout as being a recognised medical disorder.
Diagnosis
Classification
The two main classification systems of psychological disorders are the Diagnostic and Statistical Manual of Mental Disorders (DSM, used in North America and elsewhere) from the American Psychiatric Association (APA), and the International Classification of Diseases (ICD, used in Europe and elsewhere) from the World Health Organisation (WHO).
Burnout is not recognized as a distinct mental disorder in the DSM-5 (published in 2013). Its definitions for Adjustment Disorders and Unspecified Trauma- and Stressor-Related Disorder in some cases reflect the condition. 2022's update, the DSM-5-TR, did not add a definition of burnout. As of 2017, nine European countries considered burnout to be a recognised medical disorder. These countries classify it under one or more codes of the ICD.
The ICD-10 (current 1994–2021) classified "burn-out" as a type of non-medical life-management difficulty under code Z73.0. It was considered to be one of the "factors influencing health status and contact with health services" and "should not be used" for "primary mortality coding". It was also considered one of the "problems related to life-management difficulty". The condition is further defined as being a "state of vital exhaustion," which historically had been called neurasthenia. The ICD-10 also contained a medical condition category of "F43.8 Other reactions to severe stress." In 2010 the Swedish Board of Health and Welfare added "exhaustion disorder" (ED; F43.8A) to the Swedish version of the ICD-10, the ICD-10-SE, representing what is typically called "burnout" in English. (See also sv:Utmattningssyndrom). Swedish sufferers of severe burnout had earlier been treated as having neurasthenia. The ICD-10-SE defined exhaustion disorder as consisting of:
A. Physical and mental symptoms of exhaustion during at least two weeks. The symptoms have developed in response to one or more identifiable stressors present for at least six months.
B. The clinical picture is dominated by markedly reduced mental energy, as manifested by reduced initiative, lack of endurance, or increased time needed for recovery after mental effort.
C. At least four of the following symptoms have been present, nearly every day, during the same 2-week period:
Concentration difficulties or impaired memory
Markedly reduced capacity to tolerate demands or to work under time pressure
Emotional instability or irritability
Sleep disturbance
Marked fatigability or physical weakness
Physical symptoms such as aches and pains, palpitations, gastrointestinal problems, vertigo or increased sensitivity to sound
D. The symptoms cause clinically significant distress or impairment in occupational, social or other important respects.
E. The symptoms are not due to the direct physiological effects of a substance (e.g., a drug of abuse, a medication) or a physical illness/injury (e.g., hypothyroidism, diabetes, infectious disease).
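Read as a rule set, criteria A–E above combine duration thresholds, a symptom count, and exclusion conditions. The following sketch encodes that logic schematically; it is an illustration of the ICD-10-SE criteria as quoted here, not a clinical instrument, and all names are hypothetical.

```python
# Schematic encoding of the ICD-10-SE exhaustion disorder criteria (A-E)
# as quoted above. Illustrative only; not a diagnostic tool.

from dataclasses import dataclass

@dataclass
class Presentation:
    exhaustion_weeks: float          # A: duration of exhaustion symptoms
    stressor_months: float           # A: duration of identifiable stressor(s)
    reduced_mental_energy: bool      # B: reduced initiative/endurance/recovery
    symptom_count: int               # C: how many of the six listed symptoms
    significant_impairment: bool     # D: distress or functional impairment
    explained_by_substance_or_illness: bool  # E: exclusion criterion

def meets_exhaustion_disorder_criteria(p: Presentation) -> bool:
    return (
        p.exhaustion_weeks >= 2                      # A: at least two weeks
        and p.stressor_months >= 6                   # A: stressor(s) >= six months
        and p.reduced_mental_energy                  # B
        and p.symptom_count >= 4                     # C: at least four symptoms
        and p.significant_impairment                 # D
        and not p.explained_by_substance_or_illness  # E
    )

# Example: a presentation satisfying all five criteria.
print(meets_exhaustion_disorder_criteria(
    Presentation(3, 8, True, 4, True, False)))  # True
```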
According to Lindsäter et al., "The diagnosis has become almost as prevalent as major depression in Swedish health care settings, and currently accounts for more instances of long-term sick-leave reimbursement than any other single diagnosis in the country." The Royal Dutch Medical Association defined "burnout" as a subtype of adjustment disorder as part of the ICD-10 system. In the Netherlands burnout is included in handbooks and medical staff are trained in its diagnosis and treatment. A reform of Dutch health insurance resulted in adjustment disorder treatment being removed from the compulsory basic package in 2012. Practitioners were told that more serious cases of the condition may qualify for classification as depression or anxiety disorder. A new version of the ICD, ICD-11, was released in June 2018, for first use in January 2022. The new version has an entry coded and titled "QD85 Burn-out". The ICD-11 describes the condition this way:
Burn-out is a syndrome conceptualized as resulting from chronic workplace stress that has not been successfully managed. It is characterized by three dimensions: 1) feelings of energy depletion or exhaustion; 2) increased mental distance from one's job, or feelings of negativism or cynicism related to one's job; and 3) reduced professional efficacy. Burn-out refers specifically to phenomena in the occupational context and should not be applied to describe experiences in other areas of life.
This condition is classified under "Problems associated with employment or unemployment" in the section on "Factors influencing health status or contact with health services." The section is devoted to reasons other than recognized diseases or health conditions for which people contact health services. In a statement made in May 2019, the WHO said "Burn-out is included in the 11th Revision of the International Classification of Diseases (ICD-11) as an occupational phenomenon. It is not classified as a medical condition." The ICD's browser and coding tool both attach the term "caregiver burnout" to category "QF27 Difficulty or need for assistance at home and no other household member able to render care." QF27 thus acknowledges that burnout can occur outside the work context.
The ICD-11 also has the medical condition "6B4Y Other specified disorders specifically associated with stress", which is the equivalent of the ICD-10's F43.8.
If, after treatment, a person with burnout continues to have persistent physical symptoms triggered by the condition, in Iceland they may be considered to have "somatic symptom disorder" (DSM-5) and "bodily distress disorder" (ICD-11). Further detail about the varied ways clinicians and others use ICD and DSM classification with burnout was published by Arno Van Dam in 2021. The US government's National Institutes of Health includes the condition as "psychological burnout" in its index of the National Library of Medicine, and provides a number of synonyms. It defines the condition as "An excessive reaction to stress caused by one's environment that may be characterized by feelings of emotional and physical exhaustion, coupled with a sense of frustration and failure." SNOMED CT includes the term "burnout" as a synonym for its defined condition of "Physical AND emotional exhaustion state," which is a subtype of anxiety disorder. The Diseases Database defines the condition as "professional burnout."
Instruments
In 1981, Maslach and Jackson developed the first widely used instrument for assessing burnout, the Maslach Burnout Inventory (MBI). It remains by far the most commonly used instrument to assess the condition. Consistent with Maslach's conceptualization, the MBI operationalizes burnout as a three-dimensional syndrome consisting of emotional exhaustion, depersonalization (an unfeeling and impersonal response toward recipients of one's service, care, treatment, or instruction), and reduced personal accomplishment. The MBI originally focused on human service professionals (e.g., teachers, social workers). Since that time, the MBI has been used for a wider variety of workers (e.g., healthcare workers). The instrument or its variants are now employed with job incumbents working in many other occupations. There are other conceptualizations of burnout that differ from that suggested by Maslach and adopted by the WHO.
In 1999, Demerouti and Bakker, with their Oldenburg Burnout Inventory (OLBI), conceptualized burnout in terms of exhaustion and disengagement, linking it to the job demands-resources model. This instrument is used mainly in the United States.
Also that year, Wilmar Schaufeli and Arnold Bakker released the Utrecht Work Engagement Scale (UWES). It uses a similar conceptualisation to the MBI. However, the UWES measures vigour, dedication and absorption; positive counterparts to the values measured by the MBI. It is used mainly in Germany.
In 2005, TS Kristensen et al. released the public domain Copenhagen Burnout Inventory (CBI). They argued that the definition of burnout should be limited to fatigue and exhaustion. The CBI has had some use in Germany.
In 2006, Shirom and Melamed with their Shirom-Melamed Burnout Measure (SMBM) conceptualized burnout in terms of physical exhaustion, cognitive weariness, and emotional exhaustion. An examination of the SMBM's emotional exhaustion subscale, however, indicates that the subscale more clearly embodies Maslach's concept of depersonalization than her concept of emotional exhaustion. This measure has seen little use.
In 2010, researchers from Mayo Clinic used portions of the MBI, along with other comprehensive assessments, to develop the Well-Being Index, a nine-item self-assessment tool designed to measure burnout and other dimensions of distress in healthcare workers specifically. It has been mainly used in the United States.
In 2014, Aniella Besèr et al. developed the Karolinska Exhaustion Disorder Scale (KEDS), which is used mainly in Sweden. It was designed to measure the symptoms defined by the ICD-10-SE's category for exhaustion disorder. The authors believed that those with the disorder were often initially depressed, but that this soon passed. The core symptoms of the disorder were deemed to be "exhaustion, cognitive problems, sleep disturbance". The authors also believed that the condition was clearly differentiated from both depression and anxiety.
In 2020, the Occupational Depression Inventory (ODI) was developed to quantify the severity of work-attributed depressive symptoms and establish provisional diagnoses of job-ascribed depression. The ODI covers nine symptoms, including exhaustion (burnout's putative core). The instrument exhibits particularly robust psychometric properties. The ODI is the only instrument that assesses work-related suicidal thoughts, a particularly important symptom calling for immediate attention.
In 2021, Gordon Parker et al. released the Sydney Burnout Measure (SBM), which "captures domains of exhaustion, cognitive impairment, loss of empathy, withdrawal and insularity, and impaired work performance, as well as several anxiety, depression and irritability symptoms."
There are still other conceptualizations as well that are embodied in other instruments, including the Hamburg Burnout Inventory and Malach-Pines's Burnout Measure. Kristensen et al. and Malach-Pines (who also published as Pines) advanced the view that burnout can also occur in connection with life outside of work. For example, Malach-Pines developed a burnout measure keyed to the role of spouse.
The core of all of these conceptualizations, including that of Freudenberger, is exhaustion.
Maslach advanced the idea that burnout should not be viewed as a depressive condition. Recent evidence, based on factor-analytic and meta-analytic findings, calls this supposition into question. Burnout is also now often seen as involving the full array of depressive symptoms (e.g., low mood, cognitive alterations, sleep disturbance).
Marked differences among researchers' conceptualizations of what constitutes burnout have underlined the need for a consensus definition.
Subtypes
In 1991, Barry A. Farber in his research on teachers proposed that there are three types of burnout:
"wearout" and "brown-out," where someone gives up having had too much stress and/or too little reward
"classic/frenetic burnout," where someone works harder and harder, trying to resolve the stressful situation and/or seek suitable reward for their work
"underchallenged burnout," where someone has low stress, but the work is unrewarding.Farber found evidence that the most idealistic teachers who enter the profession are the most likely to suffer burnout.
Caregiver burnout
Burnout affects caregivers.
Relationship with other conditions
A growing body of evidence suggests that burnout is etiologically, clinically, and nosologically similar to depression. In a study that directly compared depressive symptoms in burned out workers and clinically depressed patients, no diagnostically significant differences were found between the two groups; burned out workers reported as many depressive symptoms as clinically depressed patients. Moreover, a study by Bianchi, Schonfeld, and Laurent (2014) showed that about 90% of workers with very high scores on the MBI meet diagnostic criteria for depression. The view that burnout is a form of depression has found support in several recent studies. Some authors have recommended that the nosological concept of burnout be revised or even abandoned entirely, given that it is not a distinct disorder and that there is no agreement on burnout's diagnostic criteria. A newer generation of studies indicates that burnout, particularly its exhaustion dimension, problematically overlaps with depression; these studies have relied on statistical techniques more sophisticated than those of earlier studies of the topic, for example, exploratory structural equation modeling (ESEM) bifactor analysis. The advantage of ESEM bifactor analysis, which combines the best features of exploratory and confirmatory factor analysis, is that it provides a granular look at item-construct relationships without falling into traps earlier burnout researchers fell into.
Liu and van Liew wrote that "the term burnout is used so frequently that it has lost much of its original meaning. As originally used, burnout meant a mild degree of stress-induced unhappiness. The solutions ranged from a vacation to a sabbatical. Ultimately, it was used to describe everything from fatigue to a major depression and now seems to have become an alternative word for depression, but with a less serious significance" (p. 434). The authors equate burnout with adjustment disorder with depressed mood.
Kakiashvili et al., however, argued that although burnout and depression have overlapping symptoms, endocrine evidence suggests that the disorders' biological bases are different. They argued that antidepressants should not be used by people with burnout because the medications can make the underlying hypothalamic–pituitary–adrenal axis dysfunction worse.
Despite its name, depression with atypical features is not a rare form of depression. The cortisol profile in atypical depression, in contrast to that of melancholic depression, is similar to the cortisol profile found in burnout. Commentators advanced the view that burnout differs from depression because the cortisol profile of burnout differs from that of melancholic depression; however, burnout's cortisol profile is similar to that of atypical depression.
It has previously been hypothesized that Chronic Fatigue Syndrome (CFS), also known as Myalgic Encephalomyelitis (ME), is caused by burnout. It is suggested that the "burning out" of the body's stress system (by any of a wide range of causes) can lead to chronic fatigue. However, the very nomenclature of "Chronic Fatigue Syndrome" is viewed as problematic by both patients and healthcare professionals specializing in the condition, as it fails to encompass the wide range of non-fatigue related symptoms present in ME/CFS, or to differentiate between the so-named condition and the symptom of chronic fatigue which is present in many unrelated conditions or diseases. Furthermore, more up-to-date research has shown that ME/CFS is a multi-systemic condition that is not psychological in etiology, so comparison to occupational burnout is now less prudent.
Overtraining syndrome is also known as "chronic fatigue," "burnout" and "overstress" in athletes. It has been conceptualized as adjustment disorder, a common diagnosis for those burnt out.
Recovering alcoholics often have a blunted cortisol response to stress, as do some cancer patients and survivors. This symptom is a particular feature of people suffering burnout. Altered cortisol response is experienced by up to 80% of people with depression.
Risk factors
Evidence suggests that the etiology of burnout is multifactorial, with personality factors playing an important, long-overlooked role. Cognitive dispositional factors implicated in depression have also been found to be implicated in burnout. One cause of burnout is stressors that a person is unable to cope with fully.
Burnout is thought to occur when a mismatch is present between the nature of the job and the job the person is actually doing. A common indication of this mismatch is work overload, which sometimes involves a worker who survives a round of layoffs but afterwards finds that he or she is doing too much with too few resources. Overload may occur in the context of downsizing, which often does not narrow an organization's goals, but requires fewer employees to meet those goals. The research on downsizing, however, indicates that downsizing has more destructive effects on the health of the workers who survive the layoffs than mere burnout; these health effects include increased levels of sickness and greater risk of mortality.
The job demands-resources model has implications for burnout, as measured by the Oldenburg Burnout Inventory (OLBI). Physical and psychological job demands were concurrently associated with exhaustion, as measured by the OLBI. Lack of job resources was associated with the disengagement component of the OLBI.
Maslach, Schaufeli and Leiter identified six risk factors for burnout in 2001: mismatch in workload, mismatch in control, lack of appropriate rewards, loss of a sense of positive connection with others in the workplace, perceived lack of fairness, and conflict between values.
Although job stress has long been viewed as the main determinant of burnout, recent meta-analytic findings indicate that job stress is a weak predictor of burnout. These findings question one of the most central assumptions of burnout research.
In a systematic literature review in 2014, the Swedish Agency for Health Technology Assessment and Assessment of Social Services (SBU) found that a number of work environment factors could affect the risk of developing exhaustion disorder or depressive symptoms:
People who experience a work situation with little opportunity to exert influence, in combination with excessively high demands, develop more depressive symptoms.
People who experience a lack of compassionate support in the work environment develop more symptoms of depression and exhaustion disorder than others. Those who experience bullying or conflict in their work develop more depressive symptoms than others, but it is not possible to determine whether there is a corresponding connection for symptoms of exhaustion disorder.
People who feel that they have urgent work, or a work situation where the reward is perceived as small in relation to the effort, develop more symptoms of depression and exhaustion disorder than others. This also applies to those who experience insecurity in their employment, for example concern that the workplace will be closed down.
In some work environments, people have fewer problems. People who experience good opportunities for control in their own work, and those who feel that they are treated fairly, develop fewer symptoms of depression and exhaustion disorder than others.
Women and men with similar working conditions develop symptoms of depression and exhaustion disorder to the same extent.
The negative consequences of burnout for both the employee and the organization call for preventive measures to reduce the impact of the risk factors. Burnout prevention strategies, whether addressed to the general working population (primary prevention) or to the occupational groups that are more vulnerable (secondary prevention), focus on reducing the impact of risk factors. Reviews of healthcare professionals' burnout focused on identifying risk factors have been conducted previously.
Effects
Some research indicates that burnout is associated with reduced job performance, coronary heart disease, and mental health problems. Examples of emotional symptoms of occupational burnout include a lack of interest in the work being done, a decrease in work performance levels, feelings of helplessness, and trouble sleeping. With regard to mental health problems, research on dentists and physicians suggests that what is meant by burnout is a depressive syndrome. Thus reduced job performance and cardiovascular risk could be related to burnout because of burnout's tie to depression. Behavioral signs of occupational burnout are demonstrated through cynicism within workplace relationships with coworkers, clients, and the organization itself.
Other effects of burnout can manifest as lower energy and productivity levels, with workers observed to be consistently late for work and feeling a sense of dread upon arriving. They can suffer concentration problems, forgetfulness, increased frustration, and/or feelings of being overwhelmed. They may complain and feel negative, or feel apathetic and believe they have little impact on their coworkers and environment. Occupational burnout is also associated with absenteeism, other time missed from work, and thoughts of quitting. Chronic burnout is also associated with cognitive impairments in memory and attention. (See also Effects of stress on memory.)
Research suggests that burnout can manifest differently between genders, with higher levels of depersonalization among men and increased emotional exhaustion among women. Other research suggests that people revealing a history of occupational burnout face future hiring discrimination.
When it happens in the context of volunteering, burnout can often lead to volunteers significantly reducing their activities or stopping volunteering altogether. Likewise, so-called academic stress, or academic burnout, is a process originating in exposure to events that, from the student's perspective, can be considered stressors.
Burnout might result in learned helplessness. Burnout has also been found to be associated with spiritual health.
Stages
Drozdstoj Stoyanov et al. believe burnout has three stages:
Flame out - trying to deal with excessive stress, causing depression and anxiety.
Genuine burn out - a process of increasing emotional exhaustion.
Rust out - being completely alienated from other people, cynical and ineffective.
Treatment and prevention
Methods of treating and preventing health conditions are often classified as "primary prevention" (stopping the condition from occurring), "secondary prevention" (removing a condition that has occurred) and "tertiary prevention" (helping people live with the condition).
In addition to interventions that can address and improve conditions on the work side of work-life balance, the ways in which people spend their non-work time can help to prevent burnout and improve health and well-being.
Primary prevention
Maslach believes that the only way to truly prevent burnout is through a combination of organizational change and education for the individual.
Maslach and Leiter postulated that burnout occurs when there is a disconnection between the organization and the individual with regard to what they called the six areas of worklife: workload, control, reward, community, fairness, and values. Resolving these discrepancies requires integrated action on the part of both the individual and the organization. With regard to workload, assuring that a worker has adequate resources to meet demands, as well as ensuring a satisfactory work–life balance, could help revitalize employees' energy. With regard to values, clearly stated ethical organizational values are important for ensuring employee commitment. Supportive leadership and relationships with colleagues are also helpful.
One approach for addressing these discrepancies focuses specifically on the fairness area. In one study, employees met weekly to discuss and attempt to resolve perceived inequities in their jobs. The intervention was associated with decreases in exhaustion over time but not in cynicism or inefficacy, suggesting that a broader approach is required.
Hätinen et al. suggest that "improving job-person fit by focusing attention on the relationship between the person and the job situation, rather than either of these in isolation, seems to be the most promising way of dealing with burnout." They also note that "at the individual level, cognitive-behavioural strategies have the best potential for success."
Burnout prevention programs have traditionally focused on cognitive-behavioral therapy (CBT), cognitive restructuring, didactic stress management, and relaxation. CBT, relaxation techniques (including physical and mental techniques), and schedule changes are the best-supported techniques for reducing or preventing burnout in a healthcare setting. Mindfulness therapy has been shown to be an effective preventive for occupational burnout in medical practitioners. Combining both organizational- and individual-level activities may be the most beneficial approach to reducing symptoms. A Cochrane review, however, reported that evidence for the efficacy of CBT in healthcare workers is of low quality, indicating that it is no better than alternative interventions.
For the purpose of preventing occupational burnout, various stress management interventions have been shown to help improve employee health and well-being in the workplace and lower stress levels. Training employees in ways to manage stress in the workplace has also been shown to be effective in preventing burnout. One study suggests that social-cognitive processes such as commitment to work, self-efficacy, learned resourcefulness, and hope may insulate individuals from experiencing occupational burnout. Increasing a worker's control over his or her job is another intervention that has been shown to help counteract exhaustion and cynicism in the workplace.
Additional prevention methods include: starting the day with a relaxing ritual; yoga; adopting healthy eating, exercising, and sleeping habits; setting boundaries; taking breaks from technology; nourishing one's creative side; and learning how to manage stress.
Barry A. Farber suggests that strategies like setting more achievable goals, focusing on the value of the work, and finding better ways of doing the job can all help the stressed. People who don't mind the stress but want more reward can benefit from reassessing their work–life balance and implementing stress reduction techniques like meditation and exercise. Others who have low stress but are underwhelmed and bored with work can benefit from seeking greater challenge.
In one trial, workers taking a high-dose vitamin B complex "reported significantly lower personal strain and a reduction in confusion and depressed/dejected mood after 12 weeks." In another trial, doctors undertaking a program involving "mindfulness, reflection, shared experience, and small-group learning" for 9 months had a much lessened propensity to burn out. Another trial with medical interns found a ten-week mindfulness program reduced the incidence of burnout.
Secondary and tertiary prevention (aka treatment and management)
Hätinen et al. list a number of common treatments, including treatment of any outstanding medical conditions, stress management, time management, depression treatment, psychotherapies, ergonomic improvement and other physiological and occupational therapy, physical exercise, and relaxation. They found it more effective to have a greater focus on "group discussions on work related issues", and discussion about the "work and private life interface" and other personal needs, with psychologists and workplace representatives.
Jac JL van der Klink and Frank JH van Dijk suggest stress inoculation training, cognitive restructuring, graded activity and "time contingency" (progressing based on a timeline rather than the patient's comfort) are effective methods of treatment.
Kakiashvili et al. said that "medical treatment of burnout is mostly symptomatic: it involves measures to prevent and treat the symptoms." They say the use of anxiolytics and sedatives to treat burnout-related stress is effective, but does nothing to change the sources of stress. They say the poor sleep often caused by burnout (and the subsequent fatigue) is best treated with hypnotics and CBT (within which they include "sleep hygiene, education, relaxation training, stimulus control, and cognitive therapy"). They advise against the use of antidepressants, as these worsen the hypothalamic–pituitary–adrenal axis dysfunction at the core of burnout. They also believe "vitamins and minerals are crucial in addressing adrenal and HPA axis dysfunction," noting the importance of specific nutrients. Omega-3 fatty acids may be helpful. DHA supplementation may also be useful for moderating norepinephrine. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Adoption and explain in detail? [/INST] Adoption is a process whereby a person assumes the parenting of another, usually a child, from that person's biological or legal parent or parents. Legal adoptions permanently transfer all rights and responsibilities, along with filiation, from the biological parents to the adoptive parents.
Unlike guardianship or other systems designed for the care of the young, adoption is intended to effect a permanent change in status and as such requires societal recognition, either through legal or religious sanction. Historically, some societies have enacted specific laws governing adoption, while others used less formal means (notably contracts that specified inheritance rights and parental responsibilities without an accompanying transfer of filiation). Modern systems of adoption, arising in the 20th century, tend to be governed by comprehensive statutes and regulations.
History
Antiquity
Adoption for the well-born
While the modern form of adoption emerged in the United States, forms of the practice appeared throughout history. The Code of Hammurabi, for example, details the rights of adopters and the responsibilities of adopted individuals at length. The practice of adoption in ancient Rome is well-documented in the Codex Justinianus.
Markedly different from the modern period, ancient adoption practices put emphasis on the political and economic interests of the adopter, providing a legal tool that strengthened political ties between wealthy families and created male heirs to manage estates. The use of adoption by the aristocracy is well-documented: many of Rome's emperors were adopted sons. Adrogation was a kind of Roman adoption in which the person adopted consented to be adopted by another.
Infant adoption during Antiquity appears rare. Abandoned children were often picked up for slavery and composed a significant percentage of the Empire's slave supply. Roman legal records indicate that foundlings were occasionally taken in by families and raised as a son or daughter. Although not normally adopted under Roman law, the children, called alumni, were reared in an arrangement similar to guardianship, being considered the property of the father who abandoned them.
Other ancient civilizations, notably India and China, used some form of adoption as well. Evidence suggests the goal of this practice was to ensure the continuity of cultural and religious practices, in contrast to the Western idea of extending family lines. In ancient India, secondary sonship, clearly denounced by the Rigveda, continued, in a limited and highly ritualistic form, so that an adopter might have the necessary funerary rites performed by a son. China had a similar idea of adoption, with males adopted solely to perform the duties of ancestor worship.
The practice of adopting the children of family members and close friends was common among the cultures of Polynesia, including Hawaii, where the custom was referred to as hānai.
Middle ages to modern period
Adoption and commoners
The nobility of the Germanic, Celtic, and Slavic cultures that dominated Europe after the decline of the Roman Empire denounced the practice of adoption. In medieval society, bloodlines were paramount; a ruling dynasty lacking a "natural-born" heir apparent was replaced, a stark contrast to Roman traditions. The evolution of European law reflects this aversion to adoption. English common law, for instance, did not permit adoption since it contradicted the customary rules of inheritance. In the same vein, France's Napoleonic Code made adoption difficult, requiring adopters to be over the age of 50, sterile, older than the adopted person by at least 15 years, and to have fostered the adoptee for at least six years. Some adoptions continued to occur, however, but became informal, based on ad hoc contracts. For example, in the year 737, in a charter from the town of Lucca, three adoptees were made heirs to an estate. Like other contemporary arrangements, the agreement stressed the responsibility of the adopted rather than the adopter, focusing on the fact that, under the contract, the adoptive father was meant to be cared for in his old age; an idea that is similar to the conceptions of adoption under Roman law.
Europe's cultural makeover marked a period of significant innovation for adoption. Without support from the nobility, the practice gradually shifted toward abandoned children. Abandonment levels rose with the fall of the empire, and many of the foundlings were left on the doorstep of the Church. Initially, the clergy reacted by drafting rules to govern the exposing, selling, and rearing of abandoned children. The Church's innovation, however, was the practice of oblation, whereby children were dedicated to lay life within monastic institutions and reared within a monastery. This created the first system in European history in which abandoned children did not have legal, social, or moral disadvantages. As a result, many of Europe's abandoned and orphaned children became alumni of the Church, which in turn took the role of adopter. Oblation marks the beginning of a shift toward institutionalization, eventually bringing about the establishment of the foundling hospital and orphanage.
As the idea of institutional care gained acceptance, formal rules appeared about how to place children into families: boys could become apprenticed to an artisan and girls might be married off under the institution's authority. Institutions informally adopted out children as well, a mechanism treated as a way to obtain cheap labor, demonstrated by the fact that when the adopted died their bodies were returned by the family to the institution for burial.
This system of apprenticeship and informal adoption extended into the 19th century, today seen as a transitional phase for adoption history. Under the direction of social welfare activists, orphan asylums began to promote adoptions based on sentiment rather than work; children were placed out under agreements to provide care for them as family members instead of under contracts for apprenticeship. The growth of this model is believed to have contributed to the enactment of the first modern adoption law in 1851 by the Commonwealth of Massachusetts, unique in that it codified the ideal of the "best interests of the child". Despite its intent, though, in practice the system operated much the same as earlier incarnations. The experience of the Boston Female Asylum (BFA) is a good example; it had up to 30% of its charges adopted out by 1888.
Officials of the BFA noted that, although the asylum promoted otherwise, adoptive parents did not distinguish between indenture and adoption: "We believe," the asylum officials said, "that often, when children of a younger age are taken to be adopted, the adoption is only another name for service."
Modern period
Adopting to create a family
The next stage of adoption's evolution fell to the emerging nation of the United States. Rapid immigration and the American Civil War resulted in unprecedented overcrowding of orphanages and foundling homes in the mid-nineteenth century. Charles Loring Brace, a Protestant minister, became appalled by the legions of homeless waifs roaming the streets of New York City. Brace considered the abandoned youth, particularly Catholics, to be the most dangerous element challenging the city's order.
His solution was outlined in The Best Method of Disposing of Our Pauper and Vagrant Children (1859), which started the Orphan Train movement. The orphan trains eventually shipped an estimated 200,000 children from the urban centers of the East to the nation's rural regions. The children were generally indentured, rather than adopted, to families who took them in. As in times past, some children were raised as members of the family while others were used as farm laborers and household servants. The sheer size of the displacement—the largest migration of children in history—and the degree of exploitation that occurred gave rise to new agencies and a series of laws that promoted adoption arrangements rather than indenture. The hallmark of the period is Minnesota's adoption law of 1917, which mandated investigation of all placements and limited record access to those involved in the adoption.
During the same period, the Progressive movement swept the United States with a critical goal of ending the prevailing orphanage system. The culmination of such efforts came with the First White House Conference on the Care of Dependent Children called by President Theodore Roosevelt in 1909, where it was declared that the nuclear family represented "the highest and finest product of civilization" and was best able to serve as primary caretaker for the abandoned and orphaned. As late as 1923, only two percent of children without parental care were in adoptive homes, with the balance in foster arrangements and orphanages. Less than forty years later, nearly one-third were in adoptive homes.
Nevertheless, the popularity of eugenic ideas in America put up obstacles to the growth of adoption. There were grave concerns about the genetic quality of illegitimate and indigent children, perhaps best exemplified by the influential writings of Henry H. Goddard, who protested against adopting children of unknown origin, saying,
Now it happens that some people are interested in the welfare and high development of the human race; but leaving aside those exceptional people, all fathers and mothers are interested in the welfare of their own families. The dearest thing to the parental heart is to have the children marry well and rear a noble family. How short-sighted it is then for such a family to take into its midst a child whose pedigree is absolutely unknown; or, where, if it were partially known, the probabilities are strong that it would show poor and diseased stock, and that if a marriage should take place between that individual and any member of the family the offspring would be degenerates.
The period 1945 to 1974, the baby scoop era, saw rapid growth and acceptance of adoption as a means to build a family. Illegitimate births rose three-fold after World War II, as sexual mores changed. Simultaneously, the scientific community began to stress the dominance of nurture over genetics, chipping away at eugenic stigmas. In this environment, adoption became the obvious solution for both unwed people and infertile couples.
Taken together, these trends resulted in a new American model for adoption. Following its Roman predecessor, Americans severed the rights of the original parents while making adopters the new parents in the eyes of the law. Two innovations were added: 1) adoption was meant to ensure the "best interests of the child" (the seeds of this idea can be traced to the first American adoption law in Massachusetts), and 2) adoption became infused with secrecy, eventually resulting in the sealing of adoption and original birth records by 1945. The origin of the move toward secrecy began with Charles Loring Brace, who introduced it to prevent children from the Orphan Trains from returning to or being reclaimed by their parents. Brace feared the impact of the parents' poverty, in general, and Catholic religion, in particular, on the youth. This tradition of secrecy was carried on by the later Progressive reformers when drafting American laws.
The number of adoptions in the United States peaked in 1970. It is uncertain what caused the subsequent decline. Likely contributing factors in the 1960s and 1970s include a decline in the fertility rate, associated with the introduction of the pill, the completion of legalization of artificial birth control methods, the introduction of federal funding to make family planning services available to the young and low-income, and the legalization of abortion. In addition, the years of the late 1960s and early 1970s saw a dramatic change in society's view of illegitimacy and in the legal rights of those born outside of wedlock. In response, family preservation efforts grew, so that few children born out of wedlock today are adopted. Ironically, adoption is far more visible and discussed in society today, yet it is less common.
The American model of adoption eventually proliferated globally. England and Wales established their first formal adoption law in 1926. The Netherlands passed its law in 1956. Sweden made adoptees full members of the family in 1959. West Germany enacted its first laws in 1977. Additionally, the Asian powers opened their orphanage systems to adoption, influenced as they were by Western ideas following colonial rule and military occupation. In France, local public institutions accredit candidates for adoption, who can then contact orphanages abroad or ask for the support of NGOs.
The system does not involve fees, but gives considerable power to social workers whose decisions may restrict adoption to "standard" families (middle-aged, medium to high income, heterosexual, Caucasian).
Adoption is today practiced globally. Adoption in the United States still occurs at rates nearly three times those of its peers, even though the number of children awaiting adoption has held steady in recent years, between 100,000 and 125,000 during the period 2009 to 2018.
Contemporary adoption
Forms of adoption
Contemporary adoption practices can be open or closed.
Open adoption allows identifying information to be communicated between adoptive and biological parents and, perhaps, interaction between kin and the adopted person. Open adoption can be an informal arrangement subject to termination by adoptive parents who have sole custody over the child. In some jurisdictions, the biological and adoptive parents may enter into a legally enforceable and binding agreement concerning visitation, exchange of information, or other interaction regarding the child. As of February 2009, 24 U.S. states allowed legally enforceable open adoption contract agreements to be included in the adoption finalization.
The practice of closed adoption (also called confidential or secret adoption), which has not been the norm for most of modern history, seals all identifying information, maintaining it as secret and preventing disclosure of the identities of the adoptive parents, biological kin, and adoptee. Nevertheless, closed adoption may allow the transmittal of non-identifying information such as medical history and religious and ethnic background. Today, as a result of safe haven laws passed by some U.S. states, secret adoption is seeing renewed influence. In so-called "safe-haven" states, infants can be left anonymously at hospitals, fire departments, or police stations within a few days of birth, a practice criticized by some adoption advocacy organizations as being retrograde and dangerous.
How adoptions originate
Adoptions can occur between related or unrelated individuals. Historically, most adoptions occurred within a family. The most recent data from the U.S. indicates that about half of adoptions are currently between related individuals. A common example of this is a "step-parent adoption", where the new partner of a parent legally adopts a child from the parent's previous relationship. Intra-family adoption can also occur through surrender, as a result of parental death, or when the child cannot otherwise be cared for and a family member agrees to take over.
Adoption is not always a voluntary process. In some countries, for example in the U.K., one of the main origins of children being placed for adoption is that they have been removed from the birth home, often by a government body such as the local authority. There are a number of reasons why children are removed, including abuse and neglect, which can have a lasting impact on the adoptee. Social workers in many cases will be notified of a safeguarding concern in relation to a child and will make enquiries into the child's well-being. Social workers will often seek means of keeping a child together with the birth family, for example by providing additional support to the family, before considering removal of a child. A court of law will often then make decisions regarding the child's future, for example whether they can return to the birth family, enter into foster care or be adopted.
Infertility is the main reason parents seek to adopt children they are not related to. One study shows this accounted for 80% of unrelated infant adoptions and half of adoptions through foster care. Estimates suggest that 11–24% of Americans who cannot conceive or carry to term attempt to build a family through adoption, and that the overall rate of ever-married American women who adopt is about 1.4%. Other reasons people adopt are numerous although not well documented. These may include wanting to cement a new family following divorce or death of one parent, compassion motivated by religious or philosophical conviction, to avoid contributing to overpopulation out of the belief that it is more responsible to care for otherwise parent-less children than to reproduce, to ensure that inheritable diseases (e.g., Tay–Sachs disease) are not passed on, and health concerns relating to pregnancy and childbirth. Although there are a range of reasons, the most recent study of experiences of women who adopt suggests they are most likely to be 40–44 years of age, to be currently married, to have impaired fertility, and to be childless.
Unrelated adoptions may occur through the following mechanisms:
Private domestic adoptions: under this arrangement, charities and for-profit organizations act as intermediaries, bringing together prospective adoptive parents with families who want to place a child, all parties being residents of the same country. Alternatively, prospective adoptive parents sometimes avoid intermediaries and connect with women directly, often with a written contract; this is not permitted in some jurisdictions. Private domestic adoption accounts for a significant portion of all adoptions; in the United States, for example, nearly 45% of adoptions are estimated to have been arranged privately.
Foster care adoption: this is a type of domestic adoption where a child is initially placed in public care. Many times the foster parents take on the adoption when the children become legally free. Its importance as an avenue for adoption varies by country. Of the 127,500 adoptions in the U.S. in 2000, about 51,000 or 40% were through the foster care system.
International adoption: this involves the placing of a child for adoption outside that child's country of birth. This can occur through public or private agencies. In some countries, such as Sweden, these adoptions account for the majority of cases. The U.S. example, however, indicates there is wide variation by country, since adoptions from abroad account for less than 15% of its cases. More than 60,000 Russian children have been adopted in the United States since 1992, and a similar number of Chinese children were adopted from 1995 to 2005. The laws of different countries vary in their willingness to allow international adoptions. Recognizing the difficulties and challenges associated with international adoption, and in an effort to protect those involved from the corruption and exploitation which sometimes accompanies it, the Hague Conference on Private International Law developed the Hague Adoption Convention, which came into force on 1 May 1995 and has been ratified by 85 countries as of November 2011.
Embryo adoption: based on the donation of embryos remaining after one couple's in vitro fertilization treatments have been completed; embryos are given to another individual or couple, followed by the placement of those embryos into the recipient woman's uterus, to facilitate pregnancy and childbirth. In the United States, embryo adoption is governed by property law rather than by the court systems, in contrast to traditional adoption.
Common law adoption: this is an adoption that has not been recognized beforehand by the courts, but where a parent, without resorting to any formal legal process, leaves his or her children with a friend or relative for an extended period of time. At the end of a designated term of (voluntary) co-habitation, as witnessed by the public, the adoption is then considered binding, in some courts of law, even though not initially sanctioned by the court. The particular terms of a common-law adoption are defined by each legal jurisdiction. For example, the U.S. state of California recognizes common law relationships after co-habitation of 2 years. The practice is called "private fostering" in Britain.
Disruption and dissolution
Although adoption is often described as forming a "forever" family, the relationship can be ended at any time. The legal termination of an adoption is called disruption. In U.S. terminology, adoptions are disrupted if they are ended before being finalized, and they are dissolved if the relationship is ended afterwards. It may also be called a failed adoption. After legal finalization, the disruption process is usually initiated by adoptive parents via a court petition and is analogous to divorce proceedings. It is a legal avenue unique to adoptive parents, as disruption/dissolution does not apply to biological kin, although biological family members are sometimes disowned or abandoned.
Ad hoc studies performed in the U.S., however, suggest that between 10 and 25 percent of adoptions through the child welfare system (e.g., excluding babies adopted from other countries or step-parents adopting their stepchildren) disrupt before they are legally finalized, and from 1 to 10 percent are dissolved after legal finalization. The wide range of values reflects the paucity of information on the subject and demographic factors such as age; it is known that teenagers are more prone to having their adoptions disrupted than young children.
Adoption by same-sex couples
Joint adoption by same-sex couples is legal in 34 countries as of March 2022, and additionally in various sub-national territories. Adoption may also be in the form of step-child adoption (6 additional countries), wherein one partner in a same-sex couple adopts the child of the other. Most countries that have same-sex marriage allow joint adoption by those couples, the exceptions being Ecuador (no adoption by same-sex couples), Taiwan (step-child adoption only) and Mexico (in one third of states with same-sex marriage). A few countries with civil unions or lesser marriage rights nonetheless allow step- or joint adoption.
Parenting of adoptees
Parenting
The biological relationship between a parent and child is important, and the separation of the two has led to concerns about adoption. The traditional view of adoptive parenting received empirical support from a Princeton University study of 6,000 adoptive, step, and foster families in the United States and South Africa from 1968 to 1985; the study indicated that food expenditures in households with mothers of non-biological children (when controlled for income, household size, hours worked, age, etc.) were significantly less for adoptees, step-children, and foster children, causing the researchers to speculate that people are less interested in sustaining the genetic lines of others. This theory is supported by another, more qualitative study wherein adoptive relationships marked by sameness in likes, personality, and appearance were associated with both adult adoptees and adoptive parents reporting being happier with the adoption.
Other studies provide evidence that adoptive relationships can form along other lines. A study evaluating the level of parental investment indicates strength in adoptive families, suggesting that parents who adopt invest more time in their children than other parents, and concludes "...adoptive parents enrich their children's lives to compensate for the lack of biological ties and the extra challenges of adoption." Another recent study found that adoptive families invested more heavily in their adopted children, for example by providing further education and financial support. Noting that adoptees seemed to be more likely to experience problems such as drug addiction, the study speculated that adoptive parents might invest more in adoptees not because they favor them, but because they are more likely than genetic children to need the help.
Psychologists' findings regarding the importance of early mother-infant bonding created some concern about whether parents who adopt older infants or toddlers after birth have missed some crucial period for the child's development. However, research on The Mental and Social Life of Babies suggested that the "parent-infant system", rather than a bond between biologically related individuals, is an evolved fit between innate behavior patterns of all human infants and equally evolved responses of human adults to those infant behaviors. Thus nature "ensures some initial flexibility with respect to the particular adults who take on the parental role."
Beyond the foundational issues, the unique questions posed for adoptive parents are varied. They include how to respond to stereotypes, answering questions about heritage, and how best to maintain connections with biological kin when in an open adoption. One author suggests a common question adoptive parents have is: "Will we love the child even though he/she is not our biological child?" A specific concern for many parents is accommodating an adoptee in the classroom. Familiar lessons like "draw your family tree" or "trace your eye color back through your parents and grandparents to see where your genes come from" could be hurtful to children who were adopted and do not know this biological information. Numerous suggestions have been made to substitute new lessons, e.g., focusing on "family orchards".
Adopting older children presents other parenting issues. Some children from foster care have histories of maltreatment, such as physical and psychological neglect, physical abuse, and sexual abuse, and are at risk of developing psychiatric problems.
Such children are at risk of developing a disorganized attachment. Studies by Cicchetti et al. (1990, 1995) found that 80% of abused and maltreated infants in their sample exhibited disorganized attachment styles. Disorganized attachment is associated with a number of developmental problems, including dissociative symptoms, as well as depressive, anxious, and acting-out symptoms. "Attachment is an active process—it can be secure or insecure, maladaptive or productive." In the U.K., some adoptions fail because the adoptive parents do not get sufficient support to deal with difficult, traumatized children. This is a false economy, as local authority care for these children is extremely expensive.
Concerning developmental milestones, studies from the Colorado Adoption Project examined genetic influences on adoptee maturation, concluding that cognitive abilities of adoptees reflect those of their adoptive parents in early childhood but show little similarity by adolescence, resembling instead those of their biological parents and to the same extent as peers in non-adoptive families.
Similar mechanisms appear to be at work in the physical development of adoptees. Danish and American researchers conducting studies on the genetic contribution to body mass index found correlations between an adoptee's weight class and his biological parents' BMI, while finding no relationship with the adoptive family environment. Moreover, about one-half of inter-individual differences were due to individual non-shared influences.
These differences in development appear to play out in the way young adoptees deal with major life events. In the case of parental divorce, adoptees have been found to respond differently from children who have not been adopted. While the general population experienced more behavioral problems, substance use, lower school achievement, and impaired social competence after parental divorce, the adoptee population appeared to be unaffected in terms of their outside relationships, specifically in their school or social abilities.
Effects on the original parents
Several factors affect the decision to release or raise the child. White adolescents tend to give up their babies to non-relatives, whereas black adolescents are more likely to receive support from their own community in raising the child, and also in the form of informal adoption by relatives. Studies by Leynes and by Festinger and Young, Berkman, and Rehr found that, for pregnant adolescents, the decision to release the child for adoption depended on the attitude toward adoption held by the adolescent's mother. Another study found that pregnant adolescents whose mothers had a higher level of education were more likely to release their babies for adoption. Research suggests that women who choose to release their babies for adoption are more likely to be younger, enrolled in school, and have lived in a two-parent household at age 10 than those who kept and raised their babies.
There is limited research on the consequences of adoption for the original parents, and the findings have been mixed. One study found that those who released their babies for adoption were less comfortable with their decision than those who kept their babies. However, levels of comfort in both groups were high, and those who released their child were similar to those who kept their child in ratings of life satisfaction, relationship satisfaction, and positive future outlook for schooling, employment, finances, and marriage. Subsequent research found that adolescent mothers who chose to release their babies for adoption were more likely to experience feelings of sorrow and regret over their decision than those who kept their babies. However, these feelings decreased significantly from one year after birth to the end of the second year.
More recent research found that, in a sample of mothers who had released their children for adoption four to 12 years prior, every participant had frequent thoughts of their lost child. For most, thoughts were both negative and positive in that they produced both feelings of sadness and joy. Those who experienced the greatest portion of positive thoughts were those who had open, rather than closed or time-limited mediated, adoptions.
In another study that compared mothers who released their children to those who raised them, mothers who released their children were more likely to delay their next pregnancy, to delay marriage, and to complete job training. However, both groups reached lower levels of education than their peers who were never pregnant. Another study found similar consequences for choosing to release a child for adoption. Adolescent mothers who released their children were more likely to reach a higher level of education and to be employed than those who kept their children. They also waited longer before having their next child. Most of the research that exists on adoption effects on the birth parents was conducted with samples of adolescents, or with women who were adolescents when carrying their babies—little data exists for birth parents from other populations. Furthermore, there is a lack of longitudinal data that may elucidate long-term social and psychological consequences for birth parents who choose to place their children for adoption.
Development of adoptees
Previous research on adoption led to assumptions that adoptees face a heightened risk in terms of psychological development and social relationships. Such assumptions have since been shown to be flawed due to methodological failures, and more recent studies provide more accurate information about the similarities, differences and overall lifestyles of adoptees.
Newer studies indicate that adoptees tend, in some respects, to develop differently from the general population. This can be seen in many aspects of life, but the risk is usually greatest around the time of adolescence. For example, it has been found that many adoptees experience difficulty in establishing a sense of identity.
Identity
There are many ways in which the concept of identity can be defined. In all cases, identity construction is an ongoing process of development, change and maintenance of identifying with the self. Research has shown that adolescence is a time of identity progression rather than regression. One's identity tends to lack stability in the beginning years of life but gains a more stable sense in later periods of childhood and adolescence. Typically associated with a time of experimentation, there are endless factors that go into the construction of one's identity. As well as many factors, there are many types of identity one can associate with; categories of identity include gender, sexuality, class, race and religion, among others. For transracial and international adoptees, tension is generally found in the categories of racial, ethnic and national identification. Because of this, the strength and functionality of family relationships play a huge role in the development and outcome of identity construction. Transracial and transnational adoptees tend to develop feelings of a lack of acceptance because of such racial, ethnic, and cultural differences. Therefore, exposing transracial and transnational adoptees to their "cultures of origin" is important in order to better develop a sense of identity and appreciation for cultural diversity. Identity construction and reconstruction for transnational adoptees begins the instant they are adopted. For example, based upon specific laws and regulations of the United States, the Child Citizenship Act of 2000 grants immediate U.S. citizenship to adoptees.
Identity is defined both by what one is and what one is not. Adoptees born into one family lose an identity and then borrow one from the adopting family.
The formation of identity is a complicated process and there are many factors that affect its outcome. From the perspective of issues in adoption circumstances, the people involved in and affected by adoption (the biological parent, the adoptive parent and the adoptee) can be known as the "triad members and state".
Adoption may threaten triad members' sense of identity. Triad members often express feelings related to confused identity and identity crises because of differences between the triad relationships.
Adoption, for some, precludes a complete or integrated sense of self. Triad members may experience themselves as incomplete, deficient, or unfinished. They state that they lack feelings of well-being, integration, or solidity associated with a fully developed identity.
Influences
Family plays a vital role in identity formation. This is not only true in childhood but also in adolescence. Identity (gender/sexual/ethnic/religious/family) is still forming during adolescence and family holds a vital key to this.
The research seems to be unanimous: a stable, secure, loving, honest and supportive family in which all members feel safe to explore their identity is necessary for the formation of a sound identity. Transracial and international adoptions are some factors that play a significant role in the identity construction of adoptees. Many tensions arise from relationships built between the adoptee(s) and their family. These include being "different" from the parent(s), developing a positive racial identity, and dealing with racial/ethnic discrimination. It has been found that multicultural and transnational youth tend to identify with their parents' origin of culture and ethnicity rather than their residing location, yet it is sometimes hard to balance an identity between the two because school environments tend to lack diversity and acknowledgment regarding such topics. These tensions also tend to create questions for the adoptee, as well as the family, to contemplate. Some common questions include: what will happen if the family is more naïve to the ways of socially constructed life? Will tensions arise if this is the case? What if the very people who are supposed to be modeling a sound identity are in fact riddled with insecurities? Ginni Snodgrass answers these questions in the following way.
The secrecy in an adoptive family, and the denial that the adoptive family is different, builds dysfunction into it. "... social workers and insecure adoptive parents have structured a family relationship that is based on dishonesty, evasions and exploitation. To believe that good relationships will develop on such a foundation is psychologically unsound" (Lawrence). Secrecy erects barriers to forming a healthy identity. The research says that the dysfunction, untruths and evasiveness that can be present in adoptive families not only makes identity formation impossible, but also directly works against it. What effect on identity formation is present if the adoptee knows they are adopted but has no information about their biological parents? Silverstein and Kaplan's research states that adoptees lacking medical, genetic, religious, and historical information are plagued by questions such as "Who am I?" "Why was I born?" "What is my purpose?" This lack of identity may lead adoptees, particularly in adolescent years, to seek out ways to belong in a more extreme fashion than many of their non-adopted peers. Adolescent adoptees are overrepresented among those who join sub-cultures, run away, become pregnant, or totally reject their families. Concerning developmental milestones, studies from the Colorado Adoption Project examined genetic influences on adoptee maturation, concluding that the cognitive abilities of adoptees reflect those of their adoptive parents in early childhood but show little similarity by adolescence, resembling instead those of their biological parents, and to the same extent as peers in non-adoptive families. Similar mechanisms appear to be at work in the physical development of adoptees. Danish and American researchers conducting studies on the genetic contribution to body mass index found correlations between an adoptee's weight class and their biological parents' BMI, while finding no relationship with the adoptive family environment. Moreover, about one-half of inter-individual differences were due to individual non-shared influences. These differences in development appear to play out in the way young adoptees deal with major life events. In the case of parental divorce, adoptees have been found to respond differently from children who have not been adopted. While the general population experienced more behavioral problems, substance use, lower school achievement, and impaired social competence after parental divorce, the adoptee population appeared to be unaffected in terms of their outside relationships, specifically in their school or social abilities. The adoptee population does, however, seem to be more at risk for certain behavioral issues. Researchers from the University of Minnesota studied adolescents who had been adopted and found that adoptees were twice as likely as non-adopted people to develop oppositional defiant disorder (ODD) and attention deficit hyperactivity disorder (ADHD), which occur at an 8% rate in the general population. Suicide risks were also significantly greater than in the general population. Swedish researchers found that both international and domestic adoptees committed suicide at much higher rates than non-adopted peers, with international adoptees, and female international adoptees in particular, at highest risk. Nevertheless, work on adult adoptees has found that the additional risks faced by adoptees are largely confined to adolescence.
Young adult adoptees were shown to be similar to adults from biological families and scored better than adults raised in alternative family types, including single-parent and step-families. Moreover, while adult adoptees showed more variability than their non-adopted peers on a range of psychosocial measures, adult adoptees exhibited more similarities than differences with adults who had not been adopted. There have been many cases of remediation, or the reversibility of early trauma. For example, in one of the earliest studies conducted, Professor Goldfarb in England concluded that some children adjust well socially and emotionally despite their negative experiences of institutional deprivation in early childhood. Other researchers also found that prolonged institutionalization does not necessarily lead to emotional problems or character defects in all children. This suggests that there will always be some children who fare well, who are resilient, regardless of their experiences in early childhood. Furthermore, much of the research on psychological outcomes for adoptees draws from clinical populations. This suggests that conclusions such as "adoptees are more likely to have behavioral problems like ODD and ADHD" may be biased. Since the proportion of adoptees that seek mental health treatment is small, psychological outcomes for adoptees compared to those for the general population are more similar than some researchers propose.
Mental health
Adopted children are more likely to experience psychological and behavioral problems than non-adopted peers. Children who were older than four at the time of their adoption experience more psychological problems than those who were younger. According to a study in the UK, adopted children can have mental health problems that do not improve even four years after their adoption. Children with multiple adverse childhood experiences are more likely to have mental health problems. The study suggests that, to identify and treat mental health problems early, care professionals and the adopting parents need detailed biographical information about the child's life. Another study in the UK suggests that adopted children are more likely to suffer from post-traumatic stress (PTS) than the general population. Their PTS symptoms depend on the type of adverse experiences they went through, and knowledge of their history offers an option for tailored support.
Adoptees of LGBT parents
There is evidence that adoptees of LGBT families and those in heterosexual families show no significant differences in development. One of the main arguments used against same-sex adoption is that a child needs a mother and a father in the home to develop properly. However, a 2013 study of predictors for psychological outcomes of adoptees showed that family type (heterosexual, gay, lesbian) does not affect the child's adjustment; rather, the preparedness of the adoptive parent(s), the health of the relationship with their partner, and other contextual factors predicted later adjustment in early-placed adoptees. Along with this, a 2009 study showed again that the sexual orientation of parents does not affect externalizing and internalizing problems, but that family functioning and income can affect adjustment, especially for older adoptees.
Public perception of adoption
In Western culture, the common image of a family is that of a heterosexual couple with biological children. This idea places alternative family forms outside the norm. As a consequence – research indicates – disparaging views of adoptive families exist, along with doubts concerning the strength of their family bonds. The most recent adoption attitudes survey completed by the Evan B. Donaldson Adoption Institute provides further evidence of this stigma. Nearly one-third of the surveyed population believed adoptees are less well adjusted, more prone to medical issues, and predisposed to drug and alcohol problems. Additionally, 40–45% thought adoptees were more likely to have behavior problems and trouble at school. In contrast, the same study indicated adoptive parents were viewed favorably, with nearly 90% describing them as "lucky, advantaged, and unselfish". The majority of people state that their primary sources of information about adoption are friends and family and the news media. Nevertheless, most people report the media provides them a favorable view of adoption; 72% indicated receiving positive impressions. There is, however, still substantial criticism of the media's adoption coverage. Some adoption blogs, for example, criticized Meet the Robinsons for using outdated orphanage imagery, as did the advocacy non-profit Evan B. Donaldson Adoption Institute. The stigmas associated with adoption are amplified for children in foster care. Negative perceptions result in the belief that such children are so troubled it would be impossible to adopt them and create "normal" families. A 2004 report from the Pew Commission on Children in Foster Care showed that the number of children waiting in foster care had doubled since the 1980s and now remains steady at about a half-million a year. The Attitude toward Adoption Questionnaire (ATAQ) was first developed by Abdollahzadeh, Chaloyi and Mahmoudi (2019). The preliminary edition has 23 items rated on a Likert scale from 1 (totally disagree) to 5 (totally agree), obtained after refining the items designed to construct the present tool in a preliminary study. Item analysis and initial psychometric analyses indicate that there are two factors in it. Items 3, 10, 11, 12, 14, 15, 16, 17, 19, 20 and 21 are reverse-scored and the rest are graded positively. The results of an exploratory factor analysis by principal components with varimax rotation indicated two components of attitude toward adoption, named respectively the cognitive aspects of attitude toward adoption and the behavioral-emotional aspects of attitude toward adoption. These two components explained 43.25% of the variance of the total sample. Cronbach's alpha coefficient was used to measure the reliability of the questionnaire: 0.709 for the whole questionnaire, 0.71 for the first component, and 0.713 for the second. In addition, there was a significant positive relationship between desired social tendencies and the cognitive aspect of attitude toward adoption as well as the behavioral-emotional aspects of attitude toward adoption (P ≤ 0.01).
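To make the scoring procedure above concrete, here is a minimal sketch of how reverse-scored Likert items and Cronbach's alpha are typically computed. Only the reversed item numbers and the 1–5 scale come from the description above; the function names and sample responses are illustrative assumptions, not part of the published instrument.

```python
# Minimal sketch: reverse-scoring ATAQ items and computing Cronbach's alpha.
# The reversed item numbers and 1-5 Likert range come from the text above;
# everything else (names, sample data) is illustrative.

REVERSED = {3, 10, 11, 12, 14, 15, 16, 17, 19, 20, 21}  # 1-based item numbers

def score_response(responses):
    """Reverse-code negatively worded items on a 1-5 scale (1<->5, 2<->4)."""
    return [6 - r if i + 1 in REVERSED else r for i, r in enumerate(responses)]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_var = sum(var(col) for col in zip(*rows))
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Three made-up respondents answering a shortened 5-item version (items 1-5),
# so item 3 is reverse-coded before the reliability calculation.
raw = [[4, 5, 2, 4, 5], [3, 4, 1, 5, 4], [2, 3, 4, 2, 3]]
scored = [score_response(r) for r in raw]
print(round(cronbach_alpha(scored), 3))
```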
Reform and reunion trends
Adoption practices have changed significantly over the course of the 20th century, with each new movement labeled, in some way, as reform. Beginning in the 1970s, efforts to improve adoption became associated with opening records and encouraging family preservation. These ideas arose from suggestions that the secrecy inherent in modern adoption may influence the process of forming an identity, create confusion regarding genealogy, and provide little in the way of medical history.
Family preservation: As concerns over illegitimacy began to decline in the early 1970s, social-welfare agencies began to emphasize that, if possible, mothers and children should be kept together. In the U.S., this was clearly illustrated by the shift in policy of the New York Foundling Home, an adoption institution that is among the country's oldest and one that had pioneered sealed records. It established three new principles, including "to prevent placements of children...", reflecting the belief that children would be better served by staying with their biological families, a striking shift in policy that remains in force today. In addition, groups such as Origins USA (founded in 1997) started to actively speak about family preservation and the rights of mothers. The intellectual tone of these reform movements was influenced by the publishing of The Primal Wound by Nancy Verrier. "Primal wound" is described as the "devastation which the infant feels because of separation from its birth mother. It is the deep and consequential feeling of abandonment which the baby adoptee feels after the adoption and which may continue for the rest of his life." Open records: After a legal adoption in the United States, an adopted person's original birth certificate is usually amended and replaced with a new post-adoption birth certificate. The names of any birth parents listed on the original birth certificate are replaced on an amended certificate with the names of the adoptive parents, making it appear that the child was born to the adoptive parents. Beginning in the late 1930s and continuing through the 1970s, state laws allowed for the sealing of original birth certificates after an adoption and, except in some states, made the original birth certificate unavailable to the adopted person even at the age of majority. Adopted people have long sought to undo these laws so that they can obtain their own original birth certificates. Movements to unseal original birth certificates and other adoption records for adopted people proliferated in the 1970s, along with increased acceptance of illegitimacy. In the United States, Jean Paton founded Orphan Voyage in 1954, and Florence Fisher founded the Adoptees' Liberty Movement Association (ALMA) in 1971, calling sealed records "an affront to human dignity". In 1975, Emma May Vilardi created the first mutual-consent registry, the International Soundex Reunion Registry (ISRR), allowing those separated by adoption to locate one another, and Lee Campbell and other birthmothers established Concerned United Birthparents (CUB). Similar ideas were taking hold globally, with grass-roots organizations like Parent Finders in Canada and Jigsaw in Australia. In 1975, England and Wales opened records on moral grounds. By 1979, representatives of 32 organizations from 33 states, Canada and Mexico gathered in Washington, DC, to establish the American Adoption Congress (AAC), passing a unanimous resolution: "Open Records complete with all identifying information for all members of the adoption triad, birthparents, adoptive parents and adoptee at the adoptee's age of majority (18 or 19, depending on state) or earlier if all members of the triad agree." Later years saw the evolution of more militant organizations such as Bastard Nation (founded in 1996), groups that helped overturn sealed records in Alabama, Delaware, New Hampshire, Oregon, Tennessee, and Maine.
A coalition of New York and national adoptee rights activists successfully worked to overturn a restrictive 83-year-old law in 2019, and adult adopted people born in New York, as well as their descendants, today have the right to request and obtain their own original birth certificates. As of 2021, ten states in the United States recognize the right of adult adopted people to obtain their own original birth certificates: Alabama, Alaska, Colorado, Connecticut, Kansas, Maine, New Hampshire, New York, Oregon and Rhode Island. Connecticut in 2021 became the tenth state to restore an adopted person's right to request and obtain their original birth certificate.
Reunion
Estimates for the extent of search behavior by adoptees have proven elusive; studies show significant variation. In part, the problem stems from the small adoptee population which makes random surveying difficult, if not impossible.
Nevertheless, some indication of the level of search interest by adoptees can be gleaned from the case of England and Wales, which opened adoptees' birth records in 1975. The U.K. Office for National Statistics has projected that 33% of all adoptees would eventually request a copy of their original birth records, exceeding original forecasts made in 1975, when it was believed that only a small fraction of the adoptee population would request their records. The projection is known to underestimate the true search rate, however, since many adoptees of the era obtained their birth records by other means. The research literature states that adoptees give four reasons for desiring reunion: 1) they wish for a more complete genealogy, 2) they are curious about events leading to their conception, birth, and relinquishment, 3) they hope to pass on information to their children, and 4) they have a need for a detailed biological background, including medical information. It is speculated by adoption researchers, however, that the reasons given are incomplete: although such information could be communicated by a third party, interviews with adoptees who sought reunion found they expressed a need to actually meet biological relations. It appears the desire for reunion is linked to the adoptee's interaction with and acceptance within the community. Internally focused theories suggest some adoptees possess ambiguities in their sense of self, impairing their ability to present a consistent identity. Reunion helps resolve the lack of self-knowledge. Externally focused theories, in contrast, suggest that reunion is a way for adoptees to overcome social stigma. First proposed by Goffman, the theory has four parts: 1) adoptees perceive the absence of biological ties as distinguishing their adoptive family from others, 2) this understanding is strengthened by experiences where non-adoptees suggest adoptive ties are weaker than blood ties, 3) together, these factors engender, in some adoptees, a sense of social exclusion, and 4) these adoptees react by searching for a blood tie that reinforces their membership in the community. The externally focused rationale for reunion suggests adoptees may be well adjusted and happy within their adoptive families, but will search as an attempt to resolve experiences of social stigma. Some adoptees reject the idea of reunion. It is unclear, though, what differentiates adoptees who search from those who do not. One paper summarizes the research, stating, "...attempts to draw distinctions between the searcher and non-searcher are no more conclusive or generalizable than attempts to substantiate ... differences between adoptees and nonadoptees." In sum, reunions can bring a variety of issues for adoptees and parents. Nevertheless, most reunion results appear to be positive. In the largest study to date (based on the responses of 1,007 adoptees and relinquishing parents), 90% responded that reunion was a beneficial experience. This does not, however, imply that ongoing relationships were formed between adoptee and parent, nor that this was the goal.
Controversial adoption practices
Reform and family preservation efforts have also been strongly associated with the perceived misuse of adoption. In some cases, parents' rights have been terminated when their ethnic or socio-economic group has been deemed unfit by society. Some of these practices were generally accepted but have later been considered abusive; others were uncontroversially reprehensible.
Forced adoption based on ethnicity occurred during World War II. In German-occupied Poland, it is estimated that 200,000 Polish children with purportedly Aryan traits were removed from their families and given to German or Austrian couples, and only 25,000 returned to their families after the war. The Stolen Generations of Aboriginal people in Australia were affected by similar policies, as were Native Americans in the United States and First Nations of Canada. These practices have become significant social and political issues in recent years, and in many cases the policies have changed. The United States, for example, now has the 1978 Indian Child Welfare Act, which allows the tribe and family of a Native American child to be involved in adoption decisions, with preference being given to adoption within the child's tribe. From the 1950s through the 1970s, a period called the baby scoop era, adoption practices that involved coercion were directed against unwed mothers, as described for the U.S. in The Girls Who Went Away.
More recently, the military dictatorship in Argentina from 1976 to 1983 is known to have taken hundreds of babies born to women held captive, who were then murdered, and given them to military families to bring up. In Spain, under Francisco Franco's 1939–75 dictatorship, the newborns of some left-wing opponents of the regime, or of unmarried or poor couples, were removed from their mothers and adopted. New mothers were frequently told their babies had died suddenly after birth and that the hospital had taken care of their burials, when in fact the babies were given or sold to another family. It is believed that up to 300,000 babies were involved. This system, which allegedly involved doctors, nurses, nuns and priests, outlived Franco's death in 1975 and carried on as an illegal baby-trafficking network until 1987, when a new law regulating adoption was introduced. On January 29, 2010, a group of ten American Baptist missionaries from Idaho attempted to cross the Haiti–Dominican Republic border with 33 Haitian children. The group, known as the New Life Children's Refuge, did not have proper authorization for transporting the children and were arrested on kidnapping charges. After the process for the adoption of 400 children by families in the US and the Netherlands was expedited, Unicef and SOS Children urged an immediate halt to adoptions from Haiti. Jasmine Whitbread, chief executive of Save the Children, said: "The vast majority of the children currently on their own still have family members alive who will be desperate to be reunited with them and will be able to care for them with the right support. Taking children out of the country would permanently separate thousands of children from their families—a separation that would compound the acute trauma they are already suffering and inflict long-term damage on their chances of recovery."
Rehoming in the United States
With the increase in adoption rates over many decades, the United States has been faced with a new, morally questionable practice: rehoming. This is the act of caregivers posting an advertisement for a child when they do not feel the child should be in their care any longer. Investigation of the child's new housing situation is not required in this practice, and this has created an underground market, one where child traffickers can thrive. There is a lack of regulation surrounding the practice, and current laws contradict one another, making it harder to combat.
When parents adopt a child, they may not have been aware that the child has special needs and thus are not equipped to help this child. The child may act out or not fit in with the family, so the family turns to rehoming. Rehoming is not adoption, and because of that the government does not have to be notified and adoption agencies are not involved. Thus, rehoming is a prime target for child and sex traffickers. There are laws in place to protect children through adoption processes and against sex trafficking, but there are barely any laws regarding rehoming. The courts authorize the practice because U.S. state law may allow a parent, legal guardian or relative within the second degree to place out or board out a child. However, while the U.S. federal bill Preventing Sex Trafficking and Strengthening Families Act would require the family to make rational decisions and prioritize the health of the child, the Interstate Compact on the Placement of Children contradicts this: it states that the family has to make sure children are placed in adequate care only when the rehoming process is done across state lines. There is no mention of maintaining the children's safety when rehoming within the same state.
The laws surrounding rehoming are basically non-existent, which puts adopted children at risk of unequipped parents and all other types of dysfunctional homes. This second-chance adoption, as some parents see it, exposes children to the negative effects that failed adoptions have on them as they go through the process of readapting to a new home environment again. With the statute that allows legal guardians and relatives within the second degree to put an adopted child in the care of someone else, and the rise of rehoming websites and ads on social media, the rehoming process highly exposes children to underground markets and other trafficking prospects. In that regard, laws and statutes concerning adoption and rehoming should be re-evaluated to ensure the full protection of adopted children.
In 2018, the Uniform Law Commission formed a committee to study the rehoming issue, which is also known more formally as the "unregulated transfer of adopted children". The committee indicated that it will draft a uniform or model law to prohibit the unregulated transfer of all adopted children.
Adoption terminology
The language of adoption is changing and evolving, and since the 1970s has been a controversial issue tied closely to adoption reform efforts. The controversy arises over the use of terms which, while designed to be more appealing or less offensive to some persons affected by adoption, may simultaneously cause offense or insult to others. This controversy illustrates the problems in adoption, as well as the fact that coining new words and phrases to describe ancient social practices will not necessarily alter the feelings and experiences of those affected by them. Two of the contrasting sets of terms are commonly referred to as "positive adoption language" (PAL) (sometimes called "respectful adoption language" (RAL)), and "honest adoption language" (HAL).
Positive adoption language (PAL)
In the 1970s, as adoption search and support organizations developed, there were challenges to the language in common use at the time. As books like The Adoption Triangle by Sorosky, Pannor and Baran were published, and support groups formed like Concerned United Birthparents (CUB), a major shift from "natural parent" to "birthparent" occurred. Along with the change in times and social attitudes came additional examination of the language used in adoption.
Social workers and other professionals in the field of adoption began changing terms of use to reflect what was being expressed by the parties involved. In 1979, Marietta Spencer wrote "The Terminology of Adoption" for the Child Welfare League of America (CWLA), which was the basis for her later work "Constructive Adoption Terminology". This influenced Pat Johnston's "Positive Adoption Language" (PAL) and "Respectful Adoption Language" (RAL). The terms contained in "Positive Adoption Language" include "birth mother" (to replace the terms "natural mother" and "real mother") and "placing" (to replace the term "surrender"). These kinds of recommendations encouraged people to be more aware of their use of adoption terminology.
Honest adoption language (HAL)
"Honest Adoption Language" refers to a set of terms that proponents say reflect the point of view that: (1) family relationships (social, emotional, psychological or physical) that existed prior to the legal adoption often continue past this point or endure in some form despite long periods of separation, and that (2) mothers who have "voluntarily surrendered" children to adoption (as opposed to involuntary terminations through court-authorized child-welfare proceedings) seldom view it as a choice that was freely made, but instead describe scenarios of powerlessness, lack of resources, and overall lack of choice. It also reflects the point of view that the term "birth mother" is derogatory in implying that the woman has ceased being a mother after the physical act of giving birth. Proponents of HAL liken this to the mother being treated as a "breeder" or "incubator". Terms included in HAL include terms that were used before PAL, including "natural mother", "first mother", and "surrendered for adoption".
Inclusive adoption language
There are supporters of various lists, developed over many decades, and there are persons who find them lacking, created to support an agenda, or furthering division. All terminology can be used to demean or diminish, uplift or embrace. In addressing the linguistic problem of naming, Edna Andrews says that using "inclusive" and "neutral" language is based upon the concept that "language represents thought, and may even control thought." Advocates of inclusive language defend it as inoffensive language usage whose goal is multi-fold:
The rights, opportunities, and freedoms of certain people are restricted because they are reduced to stereotypes.
Stereotyping is mostly implicit, unconscious, and facilitated by the availability of pejorative labels and terms.
Rendering the labels and terms socially unacceptable, people then must consciously think about how they describe someone unlike themselves.
When labeling is a conscious activity, the described person's individual merits become apparent, rather than his or her stereotype. A common problem is that terms chosen by an identity group as acceptable descriptors of themselves can be used in negative ways by detractors. This compromises the integrity of the language and turns what was intended to be positive into negative, or vice versa, thus often devaluing acceptability, meaning and use.
Language at its best honors the self-referencing choices of the persons involved, uses inclusive terms and phrases, and is sensitive to the feelings of the primary parties. Language evolves with social attitudes and experiences.
Cultural variations
Attitudes and laws regarding adoption vary greatly. Whereas all cultures make arrangements whereby children whose birth parents are unavailable to rear them can be brought up by others, not all cultures have the concept of adoption, that is, treating unrelated children as equivalent to biological children of the adoptive parents. Under Islamic law, for example, adopted children must keep their original surname to be identified with blood relations, and, traditionally, women wear a hijab in the presence of males in their adoptive households. In Egypt, these cultural distinctions have led to adoption being made illegal, with a system of foster care used instead.
Homecoming Day
In some countries, such as the United States, "Homecoming Day" is the day when an adoptee is officially united with their new adoptive family.
See also
References
Further reading
Argent, Hedi. Related by Adoption: a handbook for grandparents and other relatives (2014)
Askeland, Lori. Children and Youth in Adoption, Orphanages, and Foster Care: A Historical Handbook and Guide (2005)
Carp, E. Wayne, ed. Adoption in America: Historical Perspectives (2002)
Carp, E. Wayne. Family Matters: Secrecy and Disclosure in the History of Adoption (2000)
Carp, E. Wayne. Jean Paton and the Struggle to Reform American Adoption (University of Michigan Press; 2014) 422 pages; Scholarly biography of an activist (1908–2002) who led the struggle for open adoption records
Conn, Peter. Adoption: A Brief Social and Cultural History (2013)
Eskin, Michael. The Wisdom of Parenthood: An Essay (New York: Upper West Side Philosophers, Inc. 2013)
Fessler, Ann. The Girls Who Went Away: The Hidden History of Women Who Surrendered Children for Adoption in the Decades Before Roe v. Wade </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Myositis ossificans and explain in detail? [/INST] Myositis ossificans comprises two syndromes characterized by heterotopic ossification (calcification) of muscle. The World Health Organization, 2020, has grouped myositis ossificans together with fibro-osseous pseudotumor of digits as a single specific entity in the category of fibroblastic and myofibroblastic tumors.
Classification
In the first, and by far most common type, nonhereditary myositis ossificans (commonly referred to simply as "myositis ossificans", as in the remainder of this article), calcifications occur at the site of injured muscle, most commonly in the arms or in the quadriceps of the thighs.
The term myositis ossificans traumatica is sometimes used when the condition is due to trauma. Myositis ossificans circumscripta, another synonym of myositis ossificans traumatica, refers to the new extraosseous bone that appears after trauma.
The second condition, myositis ossificans progressiva (also referred to as fibrodysplasia ossificans progressiva), is an inherited affliction with an autosomal dominant pattern, in which the ossification can occur without injury and typically grows in a predictable pattern. Although this disorder can be passed to offspring by those afflicted with FOP, it is also classified as nonhereditary, as it is most often attributed to a spontaneous genetic mutation upon conception. Most (about 80%) ossifications arise in the thigh or arm, and are caused by a premature return to activity after an injury. Other sites include intercostal spaces, erector spinae, pectoralis muscles, glutei, and the chest. On planar x-ray, hazy densities are sometimes noted approximately one month after injury, while the denser opacities eventually seen may not be apparent until two months have passed.
Pathophysiology
The exact mechanism of myositis ossificans is not clear. An inappropriate response of stem cells in the bone to injury or inflammation causes inappropriate differentiation of fibroblasts into osteogenic cells. When a skeletal muscle is injured, inflammatory cytokines (bone morphogenetic protein 2, bone morphogenetic protein 4, and transforming growth factor) are released. These cytokines stimulate the endothelial cells of the blood vessels to transform into mesenchymal stem cells. These mesenchymal stem cells then differentiate into chondrocytes and osteoblasts, resulting in bone formation in soft tissues. The process of myositis ossificans can be divided into three stages: early, intermediate, and mature. The early phase occurs in the first four weeks after injury, with an inflammatory phase of bone formation. This is followed by an intermediate phase of bone formation (four to eight weeks following injury), during which calcification starts to occur and is visible on X-rays. During the maturation phase, mature bone starts to form. As maturation continues, the bone consolidates over the coming months and subsequently bone regression occurs.
Diagnosis
Sonographic diagnosis
Calcification is typically depicted 2 weeks earlier by ultrasound (US) when compared to radiographs. The lesion develops in two distinct stages with different presentations at US. In the early stage, termed immature, it depicts a non-specific soft tissue mass that ranges from a hypoechoic area with an outer sheet-like hyperechoic peripheral rim to a highly echogenic area with variable shadowing. In the late stage, termed mature, myositis ossificans is depicted as an elongated calcific deposit that is aligned with the long axis of the muscle, exhibits acoustic shadowing, and has no associated soft tissue mass. US may suggest the diagnosis at an early stage, but imaging features need to evolve with successive maturation of the lesion and formation of the characteristic late-stage changes before they become pathognomonic. The differential diagnosis includes many tumoral and nontumoral pathologies. A main concern is to differentiate early myositis ossificans from malignant soft-tissue tumors, the latter being suggested by a fast-growing process. If clinical or sonographic findings are dubious and extraosseous sarcoma is suspected, biopsy should be performed. At histology, detection of the typical zonal phenomenon is diagnostic of myositis ossificans, though microscopic findings may be misleading during the early stage.
Radiologic diagnosis
The radiological features of myositis ossificans are ‘faint soft tissue calcification within 2–6 weeks (may have well-defined bony margins by 8 weeks), separated from periosteum by a lucent zone; on CT, the characteristic feature is peripheral ossification’.
Management
Since myositis ossificans is more common in those with bleeding disorders, the formation of bone in soft tissue is thought to be associated with haematoma formation </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Sadness and explain in detail? [/INST] Sadness is an emotional pain associated with, or characterized by, feelings of disadvantage, loss, despair, grief, helplessness, disappointment and sorrow. An individual experiencing sadness may become quiet or lethargic, and withdraw themselves from others. An example of severe sadness is depression, a mood which can be brought on by major depressive disorder or persistent depressive disorder. Crying can be an indication of sadness. Sadness is one of the six basic emotions described by Paul Ekman, along with happiness, anger, surprise, fear, and disgust.
Childhood
Sadness is a common experience in childhood. Sometimes, sadness can lead to depression. Some families may have a (conscious or unconscious) rule that sadness is "not allowed", but Robin Skynner has suggested that this may cause problems, arguing that with sadness "screened off", people can become shallow and manic. Pediatrician T. Berry Brazelton suggests that acknowledging sadness can make it easier for families to address more serious emotional problems. Sadness is part of the normal process of the child separating from an early symbiosis with the mother and becoming more independent. Every time a child separates a little more, he or she will have to cope with a small loss. If the mother cannot allow the minor distress involved, the child may never learn how to deal with sadness by themselves. Brazelton argues that too much cheering a child up devalues the emotion of sadness for them, and Selma Fraiberg suggests that it is important to respect a child's right to experience a loss fully and deeply. Margaret Mahler also saw the ability to feel sadness as an emotional achievement, as opposed, for example, to warding it off through restless hyperactivity. D. W. Winnicott similarly saw in sad crying the psychological root of valuable musical experiences in later life.
Neuroanatomy
A large amount of research has been conducted on the neuroscience of sadness. According to the American Journal of Psychiatry, sadness has been found to be associated with "increases in bilateral activity within the vicinity of the middle and posterior temporal cortex, lateral cerebellum, cerebellar vermis, midbrain, putamen, and caudate." Jose V. Pardo, who holds an M.D. and a Ph.D., leads a research program in cognitive neuroscience. Using positron emission tomography (PET), Pardo and his colleagues were able to provoke sadness among seven normal men and women by asking them to think about sad things. They observed increased brain activity in the bilateral inferior and orbitofrontal cortex. In a study that induced sadness in subjects by showing emotional film clips, the feeling was correlated with significant increases in regional brain activity, especially in the prefrontal cortex, in the region called Brodmann's area 9, and the thalamus. A significant increase in activity was also observed in the bilateral anterior temporal structures.
Coping mechanisms
People deal with sadness in different ways, and it is an important emotion because it helps to motivate people to deal with their situation. Some coping mechanisms include getting social support and/or spending time with a pet, creating a list, or engaging in some activity to express sadness. Some individuals, when feeling sad, may exclude themselves from a social setting, so as to take the time to recover from the feeling. While it is one of the moods people most want to shake, sadness can sometimes be perpetuated by the very coping strategies chosen, such as ruminating, "drowning one's sorrows", or permanently isolating oneself. As alternative ways of coping with sadness, cognitive behavioral therapy suggests instead either challenging one's negative thoughts or scheduling some positive event as a distraction. Being attentive to, and patient with, one's sadness may also be a way for people to learn through solitude, while emotional support to help people stay with their sadness can be further helpful. Such an approach is fueled by the underlying belief that loss (when felt wholeheartedly) can lead to a new sense of aliveness, and to a re-engagement with the outside world.
Pupil empathy
Pupil size may be an indicator of sadness. A sad facial expression with small pupils is judged to be more intensely sad as the pupil size decreases. A person's own pupil size also mirrors this and becomes smaller when viewing sad faces with small pupils </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Presbyopia and explain in detail? [/INST] Presbyopia is physiological insufficiency of accommodation associated with the aging of the eye that results in progressively worsening ability to focus clearly on close objects. Also known as age-related farsightedness (or age-related long sight in the UK), it affects many adults over the age of 40. A common sign of presbyopia is difficulty reading small print, which results in having to hold reading material farther away. Associated symptoms can include headaches and eyestrain. Different people will have different degrees of problems. Other types of refractive errors may exist at the same time as presbyopia. This condition is similar to hypermetropia, or far-sightedness, which starts in childhood and exhibits similar symptoms of blurred vision for close objects.
Presbyopia is a typical part of the aging process. It occurs due to age-related changes in the lens (decreased elasticity and increased hardness) and the ciliary muscle (decreased strength and ability to move the lens), causing the eye to focus right behind rather than on the retina when looking at close objects. It is a type of refractive error, along with nearsightedness, farsightedness, and astigmatism. Diagnosis is by an eye examination. Presbyopia can be corrected using glasses, contact lenses, multifocal intraocular lenses, or LASIK (presbyLASIK) surgery. The most common treatment is glass correction using an appropriate convex lens. Glasses used to correct presbyopia may be simple reading glasses, bifocals, trifocals, or progressive lenses. People over 40 are at risk for developing presbyopia and all people become affected to some degree. Around 25% of people (1.8 billion globally) are currently affected.
Signs and symptoms
The first symptoms most people notice are difficulty reading fine print, particularly in low light conditions, eyestrain when reading for long periods, blurring of near objects or temporarily blurred vision when changing the viewing distance. Many extreme presbyopes complain that their arms have become "too short" to hold reading material at a comfortable distance. Presbyopia, like other focal imperfections, becomes less noticeable in bright sunlight when the pupil becomes smaller. As with any lens, increasing the focal ratio of the lens increases depth of field by reducing the level of blur of out-of-focus objects (compare the effect of aperture on depth of field in photography).
The onset of presbyopia varies among those with certain professions and those with miotic pupils. In particular, farmers and homemakers seek correction later, whereas service workers and construction workers seek correction earlier. Scuba divers with interest in underwater photography may notice presbyopic changes while diving before they recognize the symptoms in their normal routines due to the near focus in low light conditions.
Interaction with myopia
People with low near-sightedness can read comfortably without eyeglasses or contact lenses even after age forty, but higher myopes might require two pairs of glasses (one for distance, one for near), bifocal, or progressive lenses. However, their myopia does not disappear and the long-distance visual challenges remain. Myopes considering refractive surgery are advised that surgically correcting their nearsightedness may be a disadvantage after age forty, when the eyes become presbyopic and lose their ability to accommodate or change focus, because they will then need to use glasses for reading. Myopes with low astigmatism find near vision better, though not perfect, without glasses or contact lenses when presbyopia sets in, but the more astigmatism, the poorer the uncorrected near vision. A surgical technique offered is to create a "reading eye" and a "distance vision eye", a technique commonly used in contact lens practice, known as monovision. Monovision can be created with contact lenses, so candidates for this procedure can determine if they are prepared to have their corneas reshaped by surgery to cause this effect permanently.
Mechanism
The cause of presbyopia is lens hardening by decreasing levels of α-crystallin, a process which may be sped up by higher temperatures. It results in a near point greater than 25 cm, or equivalently less than 4 diopters of accommodation, since the near point in metres is the reciprocal of the accommodation in diopters (1/0.25 m = 4 D).
In optics, the closest point at which an object can be brought into focus by the eye is called the eye's near point. A standard near point distance of 25 cm is typically assumed in the design of optical instruments, and in characterizing optical devices such as magnifying glasses. There is some confusion over how the focusing mechanism of the eye works. In the 1977 book Eye and Brain, for example, the lens is said to be suspended by a membrane, the zonula, which holds it under tension. The tension is released, by contraction of the ciliary muscle, to allow the lens to become more round, for close vision. This implies the ciliary muscle, which is outside the zonula, must be circumferential, contracting like a sphincter, to slacken the tension of the zonula pulling outwards on the lens. This is consistent with the fact that our eyes seem to be in the relaxed state when focusing at infinity, and also explains why no amount of effort seems to enable a myopic person to see farther away.
The ability to focus on near objects declines throughout life, from an accommodation of about 20 dioptres (ability to focus at 50 mm away) in a child, to 10 dioptres at age 25 (100 mm), and levels off at 0.5 to 1 dioptre at age 60 (ability to focus down to 1–2 metres only). The expected, maximum, and minimum amplitudes of accommodation in diopters (D) for a corrected patient of a given age can be estimated using Hofstetter's formulas: expected amplitude (D) = 18.5 − 0.3 × (age in years), maximum amplitude (D) = 25 − 0.4 × (age in years), minimum amplitude (D) = 15 − 0.25 × (age in years).
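As a quick illustration of the formulas just given, the sketch below evaluates Hofstetter's estimates and converts an amplitude into the corresponding near point (near point in metres = 1 / amplitude in diopters). Only the three formulas come from the text; the function names are illustrative.

```python
# Minimal sketch of Hofstetter's formulas as quoted above, plus the
# diopter/near-point relationship (near point in metres = 1 / diopters).

def hofstetter_amplitudes(age_years):
    """Expected, maximum, and minimum accommodation amplitude in diopters."""
    return {
        "expected": 18.5 - 0.3 * age_years,
        "maximum": 25.0 - 0.4 * age_years,
        "minimum": 15.0 - 0.25 * age_years,
    }

def near_point_cm(amplitude_d):
    """Closest focusing distance implied by an accommodation amplitude."""
    return 100.0 / amplitude_d

amps = hofstetter_amplitudes(45)
print(amps["expected"])                 # 18.5 - 0.3*45 = 5.0 D
print(near_point_cm(amps["expected"]))  # 1/5 D = 0.20 m = 20 cm
```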
Diagnosis
Treatment
In the visual system, images captured by the eye are translated into electric signals that are transmitted to the brain where they are interpreted </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Childhood arthritis and explain in detail? [/INST] Childhood arthritis (also known as juvenile arthritis, JRA, Juvenile Idiopathic Disease or Arthritis, or Still's Disease) is any form of chronic arthritis or chronic arthritis-related conditions which affects individuals under the age of 16. Most types are autoimmune disorders.
Signs and symptoms
Several types of childhood arthritis exist, the most common being juvenile idiopathic arthritis (JIA), also known as juvenile rheumatoid arthritis (JRA) or juvenile chronic arthritis (JCA). Other types of childhood arthritis include juvenile myositis, juvenile lupus, juvenile scleroderma, vasculitis, and fibromyalgia. JRA in turn can be divided into three main forms; the classification is based upon symptoms, the number of joints involved and the presence of certain antibodies in the blood.
Polyarticular arthritis is the first type of arthritis, which affects about 30–40% of children with arthritis and is more common in girls than boys. Typically five or more joints are affected (usually smaller joints such as the hands and feet, but it may also affect the hips, neck, shoulders and jaw).
Oligoarticular (aka pauciarticular) arthritis can be early or late onset and is the second type of arthritis, affecting about 50% of children with juvenile arthritis. This type affects fewer than four joints (usually the large joints such as knees, ankles or wrists) and may cause eye inflammation in girls with positive anti-nuclear antibodies (ANA). Girls younger than eight are more likely to develop this type of arthritis.
Systemic disease is the least common form, with 10–20% of children (boys and girls equally) being affected with limited movement, swelling and pain in at least one joint </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Hypoxemia and explain in detail? [/INST] Hypoxemia is an abnormally low level of oxygen in the blood. More specifically, it is oxygen deficiency in arterial blood. Hypoxemia has many causes, and often causes hypoxia as the blood is not supplying enough oxygen to the tissues of the body.
Definition
Hypoxemia refers to a low level of oxygen in the blood, while the more general term hypoxia is an abnormally low oxygen content in any tissue or organ, or the body as a whole. Hypoxemia can cause hypoxia (hypoxemic hypoxia), but hypoxia can also occur via other mechanisms, such as anemia. Hypoxemia is usually defined in terms of reduced partial pressure of oxygen (mm Hg) in arterial blood, but also in terms of reduced content of oxygen (ml oxygen per dl blood) or percentage saturation of hemoglobin (the oxygen-binding protein within red blood cells) with oxygen, found either singly or in combination. While there is general agreement that an arterial blood gas measurement which shows that the partial pressure of oxygen is lower than normal constitutes hypoxemia, there is less agreement concerning whether the oxygen content of blood is relevant in determining hypoxemia. This definition would include oxygen carried by hemoglobin. The oxygen content of blood is thus sometimes viewed as a measure of tissue delivery rather than hypoxemia. Just as extreme hypoxia can be called anoxia, extreme hypoxemia can be called anoxemia.
Signs and symptoms
In an acute context, hypoxemia can cause symptoms such as those in respiratory distress. These include breathlessness, an increased rate of breathing, use of the chest and abdominal muscles to breathe, and lip pursing. Chronic hypoxemia may be compensated or uncompensated. The compensation may cause symptoms to be overlooked initially; however, further disease or a stress such as any increase in oxygen demand may finally unmask the existing hypoxemia. In a compensated state, blood vessels supplying less-ventilated areas of the lung may selectively contract, to redirect the blood to areas of the lungs which are better ventilated. However, in a chronic context, and if the lungs are not well ventilated generally, this mechanism can result in pulmonary hypertension, overloading the right ventricle of the heart and causing cor pulmonale and right-sided heart failure. Polycythemia can also occur. In children, chronic hypoxemia may manifest as delayed growth, delayed neurological and motor development, and decreased sleep quality with frequent sleep arousals. Other symptoms of hypoxemia may include cyanosis, digital clubbing, and symptoms that may relate to the cause of the hypoxemia, including cough and hemoptysis. Serious hypoxemia typically occurs when the partial pressure of oxygen in blood is less than 60 mmHg (8.0 kPa), the beginning of the steep portion of the oxygen–hemoglobin dissociation curve, where a small decrease in the partial pressure of oxygen results in a large decrease in the oxygen content of the blood. Severe hypoxia can lead to respiratory failure.
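The steepness of the dissociation curve below 60 mmHg can be illustrated with the Hill equation, a standard approximation of hemoglobin-oxygen binding. The P50 (~26.8 mmHg) and Hill coefficient (~2.7) used below are commonly cited textbook values for normal adult hemoglobin, assumed here for illustration rather than taken from the text.

```python
# Minimal sketch: Hill approximation of the oxygen-hemoglobin dissociation
# curve, showing why saturation falls off steeply below ~60 mmHg.
# P50 and the Hill coefficient are assumed textbook values, not from the text.

P50 = 26.8  # PO2 (mmHg) at 50% hemoglobin saturation
N = 2.7     # Hill coefficient for normal adult hemoglobin

def sao2(po2_mmhg):
    """Approximate fractional hemoglobin O2 saturation at a given PO2."""
    x = (po2_mmhg / P50) ** N
    return x / (1 + x)

for po2 in (100, 80, 60, 40):
    print(f"PO2 {po2:>3} mmHg -> SaO2 ~ {sao2(po2):.0%}")
# Roughly 97%, 95%, 90%, 75%: the same 20 mmHg drop costs far more
# saturation below 60 mmHg, which is why that threshold is used.
```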
Causes
Hypoxemia refers to insufficient oxygen in the blood. Thus any cause that influences the rate or volume of air entering the lungs (ventilation) or any cause that influences the transfer of air from the lungs to the blood may cause hypoxemia. As well as these respiratory causes, cardiovascular causes such as shunts may also result in hypoxemia.
Hypoxemia is caused by five categories of etiologies: hypoventilation, ventilation/perfusion mismatch, right-to-left shunt, diffusion impairment, and low PO2. Low PO2 and hypoventilation are associated with a normal alveolar–arterial gradient (A-a gradient), whereas the other categories are associated with an increased A-a gradient.
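A worked example of the A-a gradient logic: under standard textbook assumptions (sea-level barometric pressure 760 mmHg, water vapor pressure 47 mmHg, respiratory quotient 0.8), the simplified alveolar gas equation gives the alveolar PO2 against which the arterial value is compared. The constants and helper names below are illustrative assumptions, not from the text.

```python
# Minimal sketch: simplified alveolar gas equation and the A-a gradient.
# Textbook constants are assumed: Pb 760 mmHg, PH2O 47 mmHg, RQ 0.8.

def alveolar_po2(fio2=0.21, pb=760.0, ph2o=47.0, paco2=40.0, rq=0.8):
    """PAO2 = FiO2 * (Pb - PH2O) - PaCO2 / RQ."""
    return fio2 * (pb - ph2o) - paco2 / rq

def a_a_gradient(arterial_po2, **kwargs):
    """Alveolar-arterial O2 difference: raised by V/Q mismatch, shunt and
    diffusion impairment; normal in pure hypoventilation and low inspired PO2."""
    return alveolar_po2(**kwargs) - arterial_po2

# Sea level: PAO2 ~ 0.21*(760-47) - 40/0.8 ~ 99.7 mmHg.
print(a_a_gradient(90))              # ~9.7 mmHg, a normal gradient
# Pure hypoventilation (PaCO2 80): PAO2 falls to ~49.7 mmHg, so an arterial
# PO2 of 40 still yields a normal gradient despite marked hypoxemia.
print(a_a_gradient(40, paco2=80.0))
```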
Ventilation
If the alveolar ventilation is low, there will not be enough oxygen delivered to the alveoli for the body's use. This can cause hypoxemia even if the lungs are normal, as the cause is in the brainstem's control of ventilation or in the body's inability to breathe effectively.
Respiratory drive
Respiration is controlled by centers in the medulla, which influence the rate of breathing and the depth of each breath. This is influenced by the blood level of carbon dioxide, as determined by central and peripheral chemoreceptors located in the central nervous system and the carotid and aortic bodies, respectively. Hypoxia occurs when the breathing center doesn't function correctly or when the signal is not appropriate:
Strokes, epilepsy and cervical neck fractures can all damage the medullary respiratory centres that generate rhythmic impulses and transmit them along the phrenic nerve to the diaphragm, the muscle that is responsible for breathing.
A decreased respiratory drive can also be the result of metabolic alkalosis, a state of decreased carbon dioxide in the blood
Central sleep apnea. During sleep, the breathing centers of the brain can pause their activity, leading to prolonged periods of apnea with potentially serious consequences.
Hyperventilation followed by prolonged breath-holding. This hyperventilation, attempted by some swimmers, reduces the amount of carbon dioxide in the lungs. This reduces the urge to breathe. However, it also means that falling blood oxygen levels are not sensed, and can result in hypoxemia.
Physical states
A variety of conditions that physically limit airflow can lead to hypoxemia.
Suffocation, including temporary interruption or cessation of breathing as in obstructive sleep apnea, or bedclothes that may interfere with breathing in infants, a putative cause of SIDS.
Structural deformities of the chest, such as scoliosis and kyphosis, which can restrict breathing and lead to hypoxia.
Muscle weakness, which may limit the ability of the diaphragm, the primary muscle for drawing new air into lungs, to function. This may be a result of a congenital disease, such as motor neuron disease, or an acquired condition, such as fatigue in severe cases of COPD.
Environmental oxygen
In conditions where the proportion of oxygen in the air is low, or when the partial pressure of oxygen has decreased, less oxygen is present in the alveoli of the lungs. The alveolar oxygen is transferred to hemoglobin, a carrier protein inside red blood cells, with an efficiency that decreases with the partial pressure of oxygen in the air.
Altitude. The external partial pressure of oxygen decreases with altitude, for example in areas of high altitude or when flying. This decrease results in decreased carriage of oxygen by hemoglobin. This is particularly seen as a cause of cerebral hypoxia and mountain sickness in climbers of Mount Everest and other peaks of extreme altitude. For example, at the peak of Mount Everest, the partial pressure of oxygen is just 43 mmHg, whereas at sea level the partial pressure is 150 mmHg. For this reason, cabin pressure in aircraft is maintained at an equivalent altitude of 5,000 to 6,000 feet (1500 to 1800 m).
Diving. Hypoxia in diving can result from sudden surfacing. Ambient pressure increases by about one atmosphere for every ten metres of depth, and the partial pressure of each breathing gas rises in proportion. This means that a partial pressure of oxygen sufficient to maintain good carriage by hemoglobin is possible at depth, even if it is insufficient at the surface. A diver who remains underwater will slowly consume their oxygen, and when surfacing, the partial pressure of oxygen may be insufficient (shallow water blackout). This may manifest at depth as deep water blackout; the sketch after this list also illustrates the surfacing case.
Suffocation. Decreased concentration of oxygen in inspired air caused by reduced replacement of oxygen in the breathing mix.
Anaesthetics. Low partial pressure of oxygen in the lungs when switching from inhaled anesthesia to atmospheric air, due to the Fink effect, or diffusion hypoxia.
Air depleted of oxygen has also proven fatal. In the past, anesthesia machines have malfunctioned, delivering low-oxygen gas mixtures to patients. Additionally, oxygen in a confined space can be consumed if carbon dioxide scrubbers are used without sufficient attention to supplementing the oxygen which has been consumed.
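To make the pressure figures above concrete, here is a minimal sketch of the inspired-oxygen arithmetic. The barometric pressures (760 mmHg at sea level, roughly 253 mmHg on the Everest summit), the 47 mmHg water-vapor correction, and the diving oxygen fraction are assumed illustrative values, not data from this article.

```python
def inspired_po2(barometric_mmHg, fio2=0.21, ph2o=47.0):
    # Inspired PO2 = FiO2 * (barometric pressure - water vapor pressure)
    return fio2 * (barometric_mmHg - ph2o)

print(round(inspired_po2(760.0)))   # ~150 mmHg at sea level
print(round(inspired_po2(253.0)))   # ~43 mmHg on the Everest summit

def diving_po2(depth_m, fio2, surface_atm=1.0):
    # Ambient pressure rises ~1 atm per 10 m of depth; PO2 scales with it.
    return fio2 * (surface_atm + depth_m / 10.0)

# A gas with only 5% oxygen gives a breathable 0.10 atm PO2 at 10 m,
# but only 0.05 atm at the surface -- the shallow water blackout scenario.
print(diving_po2(10, fio2=0.05))    # 0.1
print(diving_po2(0, fio2=0.05))     # 0.05
```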
Perfusion
Ventilation-perfusion mismatch
This refers to a disruption in the ventilation/perfusion equilibrium. Oxygen entering the lungs typically diffuses across the alveolar-capillary membrane into blood. However, this equilibration does not occur when the alveolus is insufficiently ventilated, and as a consequence the blood exiting that alveolus is relatively hypoxemic. When such blood is added to blood from well-ventilated alveoli, the mix has a lower oxygen partial pressure than the alveolar air, and so the A-a difference develops. Examples of states that can cause a ventilation-perfusion mismatch include:
Exercise. Whilst modest activity and exercise improve ventilation-perfusion matching, hypoxemia may develop during intense exercise as a result of preexisting lung diseases. During such exercise, almost half of the hypoxemia is due to diffusion limitations (on average).
Aging. An increasingly poor match between ventilation and perfusion is seen with age, as well as a decreased ability to compensate for hypoxic states.
Diseases that affect the pulmonary interstitium can also result in hypoxia, by affecting the ability of oxygen to diffuse into arterial blood. An example of these diseases is pulmonary fibrosis, where even at rest a fifth of the hypoxemia is due to diffusion limitations (on average).
Diseases that result in acute or chronic respiratory distress can result in hypoxia. These diseases can be acute in onset (such as airway obstruction from inhaling something, or a pulmonary embolus) or chronic (such as chronic obstructive pulmonary disease).
Cirrhosis can be complicated by refractory hypoxemia due to high rates of blood flow through the lung, resulting in ventilation-perfusion mismatch.
Fat embolism syndrome, in which fat droplets are deposited in the pulmonary capillary bed.
Shunting
Shunting refers to blood that bypasses the pulmonary circulation, meaning that the blood does not receive oxygen from the alveoli. In general, a shunt may be within the heart or lungs, and cannot be corrected by administering oxygen alone. Shunting may occur in normal states:
Anatomic shunting, occurring via the bronchial circulation, which provides blood to the tissues of the lung. Shunting also occurs via the smallest cardiac veins, which empty directly into the left ventricle.
Physiological shunts occur due to the effect of gravity: blood flow in the pulmonary circulation is greatest at the bases of the pulmonary tree, whereas gas pressure is highest at the apexes of the lungs, so ventilation and perfusion are imperfectly matched. Alveoli may also go unventilated in shallow breathing.
Shunting may also occur in disease states:
Acute respiratory distress syndrome (ARDS). Alveolar collapse, which can increase the amount of physiological shunting, may be managed through a combination of increasing oxygenation as well as positive pressure to recruit collapsed alveoli.
Pathological shunts such as patent ductus arteriosus, patent foramen ovale, and atrial or ventricular septal defects. In these states, blood from the right side of the heart moves straight to the left side without first passing through the lungs. This is known as a right-to-left shunt, which is often congenital in origin.
Exercise
Exercise-induced arterial hypoxemia occurs during exercise when a trained individual exhibits an arterial oxygen saturation below 93%. It occurs in fit, healthy individuals of varying ages and genders. Adaptations due to training include an increased cardiac output from cardiac hypertrophy, improved venous return, metabolic vasodilation of muscles, and an increased VO2 max </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Fever and explain in detail? [/INST] Fever, also referred to as pyrexia, is defined as having a temperature above the normal range due to an increase in the body's temperature set point. There is not a single agreed-upon upper limit for normal temperature, with sources using values between 37.2 and 38.3 °C (99.0 and 100.9 °F) in humans. The increase in set point triggers increased muscle contractions and causes a feeling of cold or chills. This results in greater heat production and efforts to conserve heat. When the set point temperature returns to normal, a person feels hot, becomes flushed, and may begin to sweat. Rarely a fever may trigger a febrile seizure, with this being more common in young children. Fevers do not typically go higher than 41 to 42 °C (106 to 108 °F).
A fever can be caused by many medical conditions ranging from non-serious to life-threatening. This includes viral, bacterial, and parasitic infections—such as influenza, the common cold, meningitis, urinary tract infections, appendicitis, Lassa, COVID-19, and malaria. Non-infectious causes include vasculitis, deep vein thrombosis, connective tissue disease, side effects of medication or vaccination, and cancer. It differs from hyperthermia, in that hyperthermia is an increase in body temperature over the temperature set point, due to either too much heat production or not enough heat loss.
Treatment to reduce fever is generally not required. Treatment of associated pain and inflammation, however, may be useful and help a person rest. Medications such as ibuprofen or paracetamol (acetaminophen) may help with this as well as lower temperature. Children younger than three months require medical attention, as might people with serious medical problems such as a compromised immune system or people with other symptoms. Hyperthermia requires treatment.
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children and occurs in up to 75% of adults who are seriously sick. While fever evolved as a defense mechanism, treating a fever does not appear to improve or worsen outcomes. Fever is often viewed with greater concern by parents and healthcare professionals than is usually deserved, a phenomenon known as fever phobia.
Associated symptoms
A fever is usually accompanied by sickness behavior, which consists of lethargy, depression, loss of appetite, sleepiness, hyperalgesia, dehydration, and the inability to concentrate. Sleeping with a fever can often cause intense or confusing nightmares, commonly called "fever dreams". Mild to severe delirium (which can also cause hallucinations) may also present itself during high fevers.
Diagnosis
A range for normal temperatures has been found. Central temperatures, such as rectal temperatures, are more accurate than peripheral temperatures.
Fever is generally agreed to be present if the elevated temperature is caused by a raised set point and:
Temperature in the anus (rectum/rectal) is at or over 37.5–38.3 °C (99.5–100.9 °F). An ear (tympanic) or forehead (temporal) temperature may also be used.
Temperature in the mouth (oral) is at or over 37.2 °C (99.0 °F) in the morning or over 37.7 °C (99.9 °F) in the afternoon
Temperature under the arm (axillary) is usually about 0.6 °C (1.1 °F) below core body temperature.
In adults, the normal range of oral temperatures in healthy individuals is 35.7–37.7 °C (96.3–99.9 °F) among men and 33.2–38.1 °C (91.8–100.6 °F) among women, while when taken rectally it is 36.7–37.5 °C (98.1–99.5 °F) among men and 36.8–37.1 °C (98.2–98.8 °F) among women, and for ear measurement it is 35.5–37.5 °C (95.9–99.5 °F) among men and 35.7–37.5 °C (96.3–99.5 °F) among women.
Normal body temperatures vary depending on many factors, including age, sex, time of day, ambient temperature, activity level, and more. Normal daily temperature variation has been described as 0.5 °C (0.9 °F). A raised temperature is not always a fever. For example, the temperature rises in healthy people when they exercise, but this is not considered a fever, as the set point is normal. On the other hand, a "normal" temperature may be a fever, if it is unusually high for that person; for example, medically frail elderly people have a decreased ability to generate body heat, so a "normal" temperature of 37.3 °C (99.1 °F) may represent a clinically significant fever.
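As a compact restatement of the measurement-site thresholds above, here is a minimal sketch of a fever check. The function name and structure are hypothetical conveniences, and the logic is a simplification: it uses the lower rectal bound, the quoted oral bounds, and the stated ~0.6 °C axillary offset.

```python
def is_fever(temp_c, site="oral", morning=True):
    # Thresholds quoted above: rectal (and ear/forehead) >= 37.5 C (lower
    # bound of the 37.5-38.3 range), oral >= 37.2 C in the morning or
    # > 37.7 C in the afternoon; axillary readings run ~0.6 C below core,
    # so adjust upward before comparing.
    if site in ("rectal", "ear", "forehead"):
        return temp_c >= 37.5
    if site == "oral":
        return temp_c >= 37.2 if morning else temp_c > 37.7
    if site == "axillary":
        return temp_c + 0.6 >= 37.5
    raise ValueError(f"unknown site: {site}")

print(is_fever(37.6, site="rectal"))               # True
print(is_fever(37.3, site="oral", morning=False))  # False
```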
Hyperthermia
Hyperthermia is an elevation of body temperature over the temperature set point, due to either too much heat production or not enough heat loss. Hyperthermia is thus not considered fever. Hyperthermia is not to be confused with hyperpyrexia (which is a very high fever). Clinically, it is important to distinguish between fever and hyperthermia, as hyperthermia may quickly lead to death and does not respond to antipyretic medications. The distinction may however be difficult to make in an emergency setting, and is often established by identifying possible causes.
Types
Various patterns of measured patient temperatures have been observed, some of which may be indicative of a particular medical diagnosis:
Continuous fever, where temperature remains above normal and does not fluctuate more than 1 °C in 24 hours (e.g., in bacterial pneumonia, typhoid, infective endocarditis, tuberculosis, or typhus).
Intermittent fever is present only for a certain period, later cycling back to normal (e.g., in malaria, leishmaniasis, pyemia, sepsis, or African trypanosomiasis).
Remittent fever, where the temperature remains above normal throughout the day and fluctuates more than 1 °C in 24 hours (e.g., in infective endocarditis or brucellosis).
Pel–Ebstein fever is a cyclic fever that is rarely seen in patients with Hodgkin's lymphoma.
Undulant fever, seen in brucellosis.
Typhoid fever is a continuous fever showing a characteristic step-ladder pattern, a step-wise increase in temperature with a high plateau.
Among the types of intermittent fever are ones specific to cases of malaria caused by different pathogens. These are:
Quotidian fever, with a 24-hour periodicity, typical of malaria caused by Plasmodium knowlesi (P. knowlesi);
Tertian fever, with a 48-hour periodicity, typical of later course malaria caused by P. falciparum, P. vivax, or P. ovale;
Quartan fever, with a 72-hour periodicity, typical of later course malaria caused by P. malariae.
In addition, there is disagreement regarding whether a specific fever pattern is associated with Hodgkin's lymphoma—the Pel–Ebstein fever, with patients argued to present high temperature for one week, followed by low for the next week, and so on, where the generality of this pattern is debated.
Persistent fever that cannot be explained after repeated routine clinical inquiries is called fever of unknown origin. A neutropenic fever, also called febrile neutropenia, is a fever in the absence of normal immune system function. Because of the lack of infection-fighting neutrophils, a bacterial infection can spread rapidly; this fever is, therefore, usually considered to require urgent medical attention. This kind of fever is more commonly seen in people receiving immune-suppressing chemotherapy than in apparently healthy people.
Hyperpyrexia
Hyperpyrexia is an extreme elevation of body temperature which, depending upon the source, is classified as a core body temperature greater than or equal to 40 or 41 °C (104 or 106 °F); the range of hyperpyrexias includes cases considered severe (≥ 40 °C) and extreme (≥ 42 °C). It differs from hyperthermia in that one's thermoregulatory system's set point for body temperature is set above normal, then heat is generated to achieve it. In contrast, hyperthermia involves body temperature rising above its set point due to outside factors. The high temperatures of hyperpyrexia are considered medical emergencies, as they may indicate a serious underlying condition or lead to severe morbidity (including permanent brain damage), or to death. A common cause of hyperpyrexia is an intracranial hemorrhage. Other causes in emergency room settings include sepsis, Kawasaki syndrome, neuroleptic malignant syndrome, drug overdose, serotonin syndrome, and thyroid storm.
Differential diagnosis
Fever is a common symptom of many medical conditions:
Infectious disease, e.g., COVID-19, dengue, Ebola, gastroenteritis, HIV, influenza, Lyme disease, Rocky Mountain spotted fever, secondary syphilis, malaria, mononucleosis, as well as infections of the skin, e.g., abscesses and boils.
Immunological diseases, e.g., relapsing polychondritis, autoimmune hepatitis, granulomatosis with polyangiitis, Horton disease, inflammatory bowel diseases, Kawasaki disease, lupus erythematosus, sarcoidosis, and Still's disease;
Tissue destruction, as a result of cerebral bleeding, crush syndrome, hemolysis, infarction, rhabdomyolysis, surgery, etc.;
Cancers, particularly blood cancers such as leukemia and lymphomas;
Metabolic disorders, e.g., gout, and porphyria; and
Inherited metabolic disorder, e.g., Fabry disease.
Adult and pediatric manifestations for the same disease may differ; for instance, in COVID-19, one metastudy describes 92.8% of adults versus 43.9% of children presenting with fever.
In addition, fever can result from a reaction to an incompatible blood product. Teething is not a cause of fever.
Function
Immune function
Fever is thought to contribute to host defense, as the reproduction of pathogens with strict temperature requirements can be hindered, and the rates of some important immunological reactions are increased by temperature. Fever has been described in teaching texts as assisting the healing process in various ways, including:
increased mobility of leukocytes;
enhanced leukocyte phagocytosis;
decreased endotoxin effects; and
increased proliferation of T cells.
Advantages and disadvantages
Having a fever in response to an infectious disease is generally regarded as protective, whereas fever in non-infectious illness may be maladaptive. Studies have not been consistent on whether treating fever generally worsens or improves mortality risk. Benefits or harms may depend on the type of infection, health status of the patient and other factors. Studies using warm-blooded vertebrates suggest that they recover more rapidly from infections or critical illness due to fever. In sepsis, fever is associated with reduced mortality.
Pathophysiology of fever induction
Hypothalamus
Temperature is regulated in the hypothalamus. The trigger of a fever, called a pyrogen, results in the release of prostaglandin E2 (PGE2). PGE2 in turn acts on the hypothalamus, which creates a systemic response in the body, causing heat-generating effects to match a new higher temperature set point. There are four receptors to which PGE2 can bind (EP1–4), with a previous study showing that the EP3 subtype mediates the fever response. Hence, the hypothalamus can be seen as working like a thermostat. When the set point is raised, the body increases its temperature through both active generation of heat and retention of heat. Peripheral vasoconstriction both reduces heat loss through the skin and causes the person to feel cold. Norepinephrine increases thermogenesis in brown adipose tissue, and muscle contraction through shivering raises the metabolic rate.
If these measures are insufficient to make the blood temperature in the brain match the new set point in the hypothalamus, the brain orchestrates heat effector mechanisms via the autonomic nervous system or the primary motor center for shivering. These may be:
Increased heat production by increased muscle tone, shivering (muscle movements to produce heat) and release of hormones like epinephrine; and
Prevention of heat loss, e.g., through vasoconstriction.
When the hypothalamic set point moves back to baseline—either spontaneously or via medication—normal functions such as sweating, and the reverse of the foregoing processes (e.g., vasodilation, end of shivering, and nonshivering heat production) are used to cool the body to the new, lower setting.
This contrasts with hyperthermia, in which the normal setting remains, and the body overheats through undesirable retention of excess heat or over-production of heat. Hyperthermia is usually the result of an excessively hot environment (heat stroke) or an adverse reaction to drugs. Fever can be differentiated from hyperthermia by the circumstances surrounding it and its response to anti-pyretic medications.
In infants, the autonomic nervous system may also activate brown adipose tissue to produce heat (non-exercise-associated thermogenesis, also known as non-shivering thermogenesis).
Increased heart rate and vasoconstriction contribute to increased blood pressure in fever.
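The thermostat analogy used in this section can be sketched as a toy control loop. This is purely illustrative, with made-up gain and set-point values rather than a physiological simulation: raising the set point drives heat production (chills, shivering) until temperature matches, and lowering it back triggers cooling (sweating, vasodilation).

```python
def thermostat_step(temp, set_point, gain=0.2):
    # Move temperature a fraction of the way toward the set point each step:
    # net heat generation when below it, net heat dissipation when above it.
    return temp + gain * (set_point - temp)

temp = 37.0
trace = []
for set_point in [39.0] * 20 + [37.0] * 20:  # pyrogen raises, then resets, the set point
    temp = thermostat_step(temp, set_point)
    trace.append(round(temp, 2))

print(trace[19])  # ~38.98: temperature has climbed to the febrile set point
print(trace[-1])  # ~37.02: back near baseline after the set point resets
```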
Pyrogens
A pyrogen is a substance that induces fever. In the presence of an infectious agent, such as bacteria, viruses, viroids, etc., the immune response of the body is to inhibit their growth and eliminate them. The most common pyrogens are endotoxins, which are lipopolysaccharides (LPS) produced by Gram-negative bacteria such as E. coli. But pyrogens include non-endotoxic substances (derived from microorganisms other than gram-negative bacteria, or from chemical substances) as well. The types of pyrogens include internal (endogenous) and external (exogenous) to the body.
The "pyrogenicity" of given pyrogens varies: in extreme cases, bacterial pyrogens can act as superantigens and cause rapid and dangerous fevers.
Endogenous
Endogenous pyrogens are cytokines released from monocytes (which are part of the immune system). In general, they stimulate chemical responses, often in the presence of an antigen, leading to a fever. Whilst they can be a product of external factors like exogenous pyrogens, they can also be induced by internal factors like damage-associated molecular patterns, as in rheumatoid arthritis or lupus.
Major endogenous pyrogens are interleukin 1 (α and β) and interleukin 6 (IL-6). Minor endogenous pyrogens include interleukin-8, tumor necrosis factor-β, macrophage inflammatory protein-α and macrophage inflammatory protein-β, as well as interferon-α, interferon-β, and interferon-γ. Tumor necrosis factor-α (TNF) also acts as a pyrogen, mediated by interleukin 1 (IL-1) release. These cytokine factors are released into general circulation, where they migrate to the brain's circumventricular organs, where they are more easily absorbed than in areas protected by the blood–brain barrier. The cytokines then bind to endothelial receptors on vessel walls, or to receptors on microglial cells, resulting in activation of the arachidonic acid pathway.
Of these, IL-1β, TNF, and IL-6 are able to raise the temperature setpoint of an organism and cause fever. These proteins produce a cyclooxygenase which induces the hypothalamic production of PGE2, which then stimulates the release of neurotransmitters such as cyclic adenosine monophosphate and increases body temperature.
Exogenous
Exogenous pyrogens are external to the body and are of microbial origin. In general, these pyrogens, including bacterial cell wall products, may act on Toll-like receptors in the hypothalamus and elevate the thermoregulatory setpoint.
An example of a class of exogenous pyrogens are bacterial lipopolysaccharides (LPS) present in the cell wall of gram-negative bacteria. According to one mechanism of pyrogen action, an immune system protein, lipopolysaccharide-binding protein (LBP), binds to LPS, and the LBP–LPS complex then binds to a CD14 receptor on a macrophage. The LBP–LPS binding to CD14 results in cellular synthesis and release of various endogenous cytokines, e.g., interleukin 1 (IL-1), interleukin 6 (IL-6), and tumor necrosis factor-alpha (TNFα). A further downstream event is activation of the arachidonic acid pathway.
PGE2 release
PGE2 release comes from the arachidonic acid pathway. This pathway (as it relates to fever) is mediated by the enzymes phospholipase A2 (PLA2), cyclooxygenase-2 (COX-2), and prostaglandin E2 synthase. These enzymes ultimately mediate the synthesis and release of PGE2.
PGE2 is the ultimate mediator of the febrile response. The setpoint temperature of the body will remain elevated until PGE2 is no longer present. PGE2 acts on neurons in the preoptic area (POA) through the prostaglandin E receptor 3 (EP3). EP3-expressing neurons in the POA innervate the dorsomedial hypothalamus (DMH), the rostral raphe pallidus nucleus in the medulla oblongata (rRPa), and the paraventricular nucleus (PVN) of the hypothalamus. Fever signals sent to the DMH and rRPa lead to stimulation of the sympathetic output system, which evokes non-shivering thermogenesis to produce body heat and skin vasoconstriction to decrease heat loss from the body surface. It is presumed that the innervation from the POA to the PVN mediates the neuroendocrine effects of fever through the pathway involving the pituitary gland and various endocrine organs.
Management
Fever does not necessarily need to be treated, and most people with a fever recover without specific medical attention. Although it is unpleasant, fever rarely rises to a dangerous level even if untreated. Damage to the brain generally does not occur until temperatures reach 42.0 °C (107.6 °F), and it is rare for an untreated fever to exceed 40.6 °C (105.1 °F). Treating fever in people with sepsis does not affect outcomes. Small trials have shown no benefit of treating fevers of 38.5 °C (101.3 °F) or higher in critically ill patients in ICUs, and one trial was terminated early because patients receiving aggressive fever treatment were dying more often.
According to the NIH, the two assumptions which are generally used to argue in favor of treating fevers have not been experimentally validated. These are that (1) a fever is noxious, and (2) suppression of a fever will reduce its noxious effect. Most of the other studies supporting the association of fever with poorer outcomes have been observational in nature. In theory, these critically ill patients and those faced with additional physiologic stress may benefit from fever reduction, but the evidence on both sides of the argument appears to be mostly equivocal.
Conservative measures
Limited evidence supports sponging or bathing feverish children with tepid water. The use of a fan or air conditioning may somewhat reduce the temperature and increase comfort. If the temperature reaches the extremely high level of hyperpyrexia, aggressive cooling is required (generally produced mechanically via conduction by applying numerous ice packs across most of the body or direct submersion in ice water). In general, people are advised to keep adequately hydrated. Whether increased fluid intake improves symptoms or shortens respiratory illnesses such as the common cold is not known.
Medications
Medications that lower fevers are called antipyretics. The antipyretic ibuprofen is effective in reducing fevers in children. It is more effective than acetaminophen (paracetamol) in children. Ibuprofen and acetaminophen may be safely used together in children with fevers. The efficacy of acetaminophen by itself in children with fevers has been questioned. Ibuprofen is also superior to aspirin in children with fevers. Additionally, aspirin is not recommended in children and young adults (those under the age of 16 or 19, depending on the country) due to the risk of Reye's syndrome.
Using both paracetamol and ibuprofen at the same time, or alternating between the two, is more effective at decreasing fever than using only paracetamol or ibuprofen. It is not clear if it increases child comfort. Response or nonresponse to medications does not predict whether or not a child has a serious illness.
With respect to the effect of antipyretics on the risk of death in those with infection, studies have found mixed results as of 2019. Animal models have found worsened outcomes with the use of antipyretics in influenza as of 2010, but they have not been studied for this use in humans.
Epidemiology
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children, and occurs in up to 75% of adults who are seriously sick. About 5% of people who go to an emergency room have a fever.
History
A number of types of fever were known as early as 460 BC to 370 BC, when Hippocrates was practicing medicine, including those due to malaria (tertian, recurring every 2 days, and quartan, recurring every 3 days). It also became clear around this time that fever was a symptom of disease rather than a disease in and of itself.
Infections presenting with fever were a major source of mortality in humans for about 200,000 years. Until the late nineteenth century, approximately half of all humans died from infections before the age of fifteen.
An older term, febricula (a diminutive form of the Latin word for fever), was once used to refer to a low-grade fever lasting only a few days. This term fell out of use in the early 20th century, and the symptoms it referred to are now thought to have been caused mainly by various minor viral respiratory infections.
Society and culture
Mythology
Febris (fever in Latin) is the goddess of fever in Roman mythology. People with fevers would visit her temples.
Tertiana and Quartana are the goddesses of tertian and quartan fevers of malaria in Roman mythology.
Jvarasura (fever-demon in Hindi) is the personification of fever and disease in Hindu and Buddhist mythology.
Paediatrics
Fever is often viewed with greater concern by parents and healthcare professionals than might be deserved, a phenomenon known as fever phobia, which is based in both caregivers' and parents' misconceptions about fever in children. Among them, many parents incorrectly believe that fever is a disease rather than a medical sign, that even low fevers are harmful, and that any temperature even briefly or slightly above the oversimplified "normal" number marked on a thermometer is a clinically significant fever. They are also afraid of harmless side effects like febrile seizures and dramatically overestimate the likelihood of permanent damage from typical fevers. The underlying problem, according to professor of pediatrics Barton D. Schmitt, is that "as parents we tend to suspect that our children's brains may melt." As a result of these misconceptions parents are anxious, give the child fever-reducing medicine when the temperature is technically normal or only slightly elevated, and interfere with the child's sleep to give the child more medicine.
Other species
Fever is an important metric for the diagnosis of disease in domestic animals. The body temperature of animals, which is taken rectally, is different from one species to another </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Wilson–Mikity syndrome and explain in detail? [/INST] Wilson–Mikity syndrome, a form of chronic lung disease (CLD) that exists only in premature infants, leads to progressive or immediate development of respiratory distress. This rare condition affects low birth weight babies and is characterized by rapid development of lung emphysema after birth, requiring prolonged ventilation and oxygen supplementation. It is closely related to bronchopulmonary dysplasia (BPD), differing mainly in the lack of prior ventilatory support. All the initial patients described with Wilson–Mikity syndrome were very low birth weight infants that had no history of mechanical ventilation, yet developed a syndrome that clinically resembled BPD. Upon the death of some of these infants, autopsies showed histologic changes similar to those seen in BPD.
It was characterized by Miriam G. Wilson and Victor G. Mikity in 1960.
Symptoms and signs
The onset of respiratory difficulty occurs on the first day of life and continues up to three weeks into the infant's life, at which point treatment is needed for infant survival.
Complications
Mothers who have developed chorioamnionitis during pregnancy put their infant at higher risk for development of Wilson-Mikity syndrome. It is a rare complication that requires prolonged treatment. Infection, however, has not been shown to be an etiological factor; chorioamnionitis is identified only as a correlated risk.
Cause
The cause of Wilson-Mikity syndrome is unknown.
Diagnosis
The diagnosis of Wilson-Mikity syndrome can be made through two distinct findings: characteristics analogous to respiratory distress syndrome, and the presence of diffuse and streaky infiltrates with small cystic changes, seen on a chest X-ray. Early screening allows for the identification of a collapsed lung, cystic changes within the lung, and the possible onset of right-sided heart failure. Upon autopsy, alveolar collapse and rupture can be seen. This can reduce the number of capillaries within the system and lead to cyanosis. Cyanosis occurs from chronic or intermittent respiratory distress and episodes of dyspnea (or apnea). Symptoms can develop within hours post-birth or be gradual; infants will experience transient respiratory distress, causing a lapse in diagnosis of around 30 to 40 days. Dangerous recurrent apnea (or dyspnea) can occur in the first two to six weeks postpartum. This cessation of breathing can progress to cyanosis and lung collapse.
Infants display deteriorating respiratory symptoms along with early chronic lung changes which can be seen on chest radiography. These changes are diagnosed either directly upon birth or within the first month, as the premature infant requires mechanical ventilation for survival.
Treatment
When caught early enough, continuous mechanical oxygen therapy can be used to reverse the infant's poor circulation and decreased blood oxygen, a symptom known as cyanosis. Improvement is gradual; however, cases show that after the first year of treatment using oxygen therapy and mechanical ventilators, infants show normal respiratory activity and are free from chest infiltrates with small cystic changes. Absence of fever and a normal white blood cell count correspond to successful reversal and allow for a positive prognosis.
When not treated properly, methods of reversal using oxygen supplementation and ventilation can put the infant at risk for rare complications. If not enough oxygen is administered to the infant, the apnea continues and the infant is unable to recover properly. In contrast, too much oxygen administered can lead to higher risk for retrolental fibroplasia and/or oxygen toxicity within the lungs. Continued dyspnea is a sign that Wilson-Mikity syndrome is still affecting the infant. Increased ventilation to allow proper respiration is then required. Patients in recovery are slowly taken off oxygen support and eventually are able to ventilate with minimal to no respiratory distress.
There is a lack of research on the long-term effects of Wilson-Mikity syndrome into adulthood.
Epidemiology
Around 75% of affected infants survive and are able to receive oxygen therapies and treatments to overcome this disease. In fatal cases, infants do not have noticeable or substantial respiratory recovery and can develop right-sided heart failure, ultimately leading to death. Patients who are not recovering will also continue to show signs of dyspnea, respiratory distress, and continued low body weight, heightening the risk of death. Infants that survive six months or longer have a substantially better prognosis.
See also
Infant respiratory distress syndrome
Bronchopulmonary dysplasia
Chronic obstructive pulmonary disease
Retrolental Fibroplasia
Chorioamnionitis
References
Further reading
Hoepker, A; Seear, M; Petrocheilou, A; Hayes D Jr; Nair, A; Deodhar, J; Kadam, S; OToole, J (Oct 2008). "Wilson-Mikity syndrome: updated diagnostic criteria based on nine cases and a review of the literature". Pediatric Pulmonology </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Mediastinitis and explain in detail? [/INST] Mediastinitis is inflammation of the tissues in the mid-chest, or mediastinum. It can be either acute or chronic. It is thought to be due to four different etiologies:
direct contamination
hematogenous or lymphatic spread
extension of infection from the neck or retroperitoneum
extension from the lung or pleura
Acute mediastinitis is usually caused by bacteria and is most often due to perforation of the esophagus. As the infection can progress rapidly, this is considered a serious condition.
Chronic sclerosing (or fibrosing) mediastinitis, while potentially serious, is caused by a long-standing inflammation of the mediastinum, leading to growth of acellular collagen and fibrous tissue within the chest and around the central vessels and airways. It has a different cause, treatment, and prognosis than acute infectious mediastinitis.
Space infections: the pretracheal space lies anterior to the trachea, and pretracheal space infection can lead to mediastinitis. Here, the fascia fuses with the pericardium and the parietal pleura, which explains the occurrence of empyema and pericardial effusion in mediastinitis. However, infections of other spaces can also lead to mediastinitis.
Symptoms
Acute
Acute mediastinitis is an infectious process and can cause fever, chills, and tachycardia. Pain can occur with mediastinitis, but the location of the pain depends on which part of the mediastinum is involved. When the upper mediastinum is involved, the pain is typically retro-sternal. When the lower mediastinum is involved, pain can be located between the scapulae and radiate around to the chest.
Chronic
Symptoms depend on which organs of the mediastinum the disease is affecting. They might be caused by a constricted airway, constricted esophagus, or constricted blood vessels. Symptoms also depend on how much fibrosis has occurred. There may be cough, shortness of breath, coughing up blood, pain in the chest, and difficulty in swallowing.
Causes
Acute
Esophageal perforation, a form of direct contamination, accounts for 90% of acute mediastinal infections. Esophageal perforation can arise from vomiting, incidental trauma from a procedure or operation, external trauma, ingestion of corrosive substances, malignancy, or other esophageal disease.
Other causes of acute mediastinitis include infection secondary to cervical disease, which arises from dental procedures, skin infections of the neck, neck trauma, or neck procedures.
Descending necrotizing mediastinitis (DNM) was first described by Herman E. Pearse Jr., M.D. in 1938, and he stated, "the term mediastinitis means little unless qualified by a description of its type and kind." Although descending necrotizing mediastinitis is an acute mediastinitis, it is distinct because it does not originate from structures within the mediastinum. The term therefore implies that the infection of the mediastinum originated from a primary site in the head or neck and descended through fascial spaces into the mediastinum.
Though rare in developed countries, acute mediastinitis can be caused by inhalation of bacterial spores such as anthrax. Historically, this infection was referred to as wool-sorters' disease. In the lungs, spores can spread via lymphatics to mediastinal lymph nodes, where the mature rods can release exotoxins promoting edema and tissue necrosis. Clinically, persons infected with anthrax can develop a hemorrhagic mediastinitis, which manifests as acute pulmonary hemorrhage and meningitis. The hallmark finding of the disease is a widened mediastinum visualized on chest X-ray. Once clinical symptoms of anthrax-induced mediastinitis appear, the disease is nearly 100% fatal. Individuals with known exposure to spores may be treated prophylactically with antibiotics (fluoroquinolones or tetracycline) to prevent disease progression.
Chronic
There are two types of fibrosing mediastinitis: granulomatous and non-granulomatous. Granulomatous mediastinitis is due to a granulomatous process of the mediastinal lymph nodes leading to fibrosis and chronic abscesses in the mediastinum. The most common causes are histoplasmosis and tuberculosis infections. Non-granulomatous fibrosing mediastinitis is caused by an idiopathic reaction to drugs and radiation therapy </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Indian Institute of Infrastructure and Construction and explain in detail? [/INST] The Indian Institute of Infrastructure and Construction Kollam, or IIIC-Kollam, is a public institute of international standard situated at Chavara in Kollam, Kerala. The initiative has been undertaken by the Government of Kerala with the support of the Government of India. The academy comes under the Kerala Academy for Skills Excellence (KASE) to support skill development programmes for construction-related occupations.
History
The construction academy was first proposed by the LDF ministry in 2008, which started the background work for the project and named it the Kerala Construction Academy, to be located at Chathannoor in Kollam district. When the UDF came to power, it shifted the project from Chathannoor to Chavara and renamed it the Indian Institute of Infrastructure and Construction.
Campus
The institute is on a 9.02-acre (36,500 m2) campus along the NH-66, near Chavara in the Kollam Metropolitan Area. KASE has developed a master plan and detailed design for the campus with 235,100 square feet (21,800 m2) of facilities, including an administrative block, training block, canteen, and substation. The foundation stone was laid by the then Chief Minister of Kerala Mr. Oommen Chandy on 15 March 2013. The campus was inaugurated for academic activities by the Hon. Chief Minister Shri. Pinarayi Vijayan in August 2018.
Courses
During the initial phase, there will be three technician- and managerial-level courses along with a supervisory-level course. There will also be an advanced certificate programme in plumbing engineering. Candidates with SSLC, Plus Two, ITI, diploma, degree and B.Tech qualifications can join the courses.
Significance
Kerala depends on migrant workers for many of the jobs needed in the state. This dependence on migrant labourers is found mainly in the field of construction </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Stepfather and explain in detail? [/INST] A stepfather is a male stepparent: the husband of one's parent who is not one's biological or adoptive father. Together with any children from the parent's previous relationship (his stepchildren), he forms part of a stepfamily, sometimes called a blended family. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Involvement and explain in detail? [/INST] </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Hallux rigidus and explain in detail? [/INST] Hallux rigidus, or stiff big toe, is degenerative arthritis, with stiffness due to bone spurs, affecting the metatarsophalangeal (MTP) joint at the base of the hallux (big toe).
Hallux flexus was initially described by Davies-Colley in 1887 as a plantar-flexed posture of the phalanx relative to the metatarsal head. About the same time, Cotterill first used the term hallux rigidus.
Signs and symptoms
Pain and stiffness in the joint at the base of the big toe during use (walking, standing, bending, etc.)
Difficulty with certain activities (running, squatting)
Swelling and inflammation around the joint
Although the condition is degenerative, it can occur in patients who are relatively young, particularly active sports people who have at some time suffered trauma to the joint (turf toe). A notable example is NBA star Shaquille O'Neal, who returned to basketball after surgery.
Causes
This condition, which occurs in adolescents and adults, can be associated with previous trauma.
The true cause is not known. Most commonly, hallux rigidus is thought to be caused by wear and tear of the first metatarsophalangeal joint.
Classification
In 1988, Hattrup and Johnson described the following radiographic classification system:
Grade I – mild changes with maintained joint space and minimal spurring.
Grade II – moderate changes with narrowing of the joint space, bony proliferation on the metatarsal head and phalanx, and subchondral sclerosis or cysts.
Grade III – severe changes with significant joint space narrowing, extensive bony proliferation and loose bodies or a dorsal ossicle.
Treatment
Non-surgical
Early treatment for mild cases of hallux rigidus may include prescription foot orthotics, shoe modifications (such as a pad under the joint, and/or a deeper toe box to take the pressure off the toe and/or facilitate walking), specialized footwear (rocker-sole shoes), medications (anti-inflammatory drugs) or injection therapy (corticosteroids to reduce inflammation and pain). Physical therapy programs may be recommended, although there is very limited evidence that they provide benefit for reducing pain and improving function of the joint.
Surgical
The goal of surgery is to eliminate or reduce pain. There are several types of surgery for treatment of hallux rigidus </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Orthopnea and explain in detail? [/INST] Orthopnea or orthopnoea is shortness of breath (dyspnea) that occurs when lying flat, causing the person to have to sleep propped up in bed or sitting in a chair. It is commonly seen as a late manifestation of heart failure, resulting from fluid redistribution into the central circulation, causing an increase in pulmonary capillary pressure and causing difficulty in breathing. It is also seen in cases of abdominal obesity or pulmonary disease. Orthopnea is the opposite of platypnea, shortness of breath that worsens when sitting or standing upright.
Causes
Orthopnea is often a symptom of left ventricular heart failure and/or cardiogenic pulmonary edema. It can also occur in those with asthma and chronic bronchitis, as well as those with sleep apnea or panic disorder. It is also associated with polycystic liver disease. From a neuromuscular perspective, orthopnea is potentially a sign of severe diaphragmatic weakness. Under such circumstances, patients may describe shortness of breath when they bend over (e.g </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Ankyloglossia and explain in detail? [/INST] Ankyloglossia, also known as tongue-tie, is a congenital oral anomaly that may decrease the mobility of the tongue tip and is caused by an unusually short, thick lingual frenulum, a membrane connecting the underside of the tongue to the floor of the mouth. Ankyloglossia varies in degree of severity from mild cases characterized by mucous membrane bands to complete ankyloglossia whereby the tongue is tethered to the floor of the mouth.
Presentation
Ankyloglossia can affect eating (especially breastfeeding), speech, and oral hygiene, and can have mechanical and social effects. Ankyloglossia can also prevent the tongue from contacting the anterior palate. This can then promote an infantile swallow and hamper the progression to an adult-like swallow, which can result in an open bite deformity. It can also result in mandibular prognathism; this happens when the tongue contacts the anterior portion of the mandible with exaggerated anterior thrusts.
Opinion varies regarding how frequently ankyloglossia truly causes problems. Some professionals believe it is rarely symptomatic, whereas others believe it is associated with a variety of problems. The disagreement among professionals was documented in a study by Messner and Lalakea (2000).
Feeding
Messner et al. studied ankyloglossia and infant feeding. Thirty-six infants with ankyloglossia were compared to a control group without ankyloglossia. The two groups were followed for six months to assess possible breastfeeding difficulties, defined as nipple pain lasting more than six weeks, or infant difficulty latching onto or staying on the mother's breast. Twenty-five percent of mothers of infants with ankyloglossia reported breastfeeding difficulty compared with only 3% of the mothers in the control group. The study concluded that ankyloglossia can adversely affect breastfeeding in certain infants. Infants with ankyloglossia do not, however, have as much difficulty when feeding from a bottle. Limitations of this study include the small sample size and the fact that the quality of the mothers' breastfeeding was not assessed.
Wallace and Clark also studied breastfeeding difficulties in infants with ankyloglossia. They followed 10 infants with ankyloglossia who underwent surgical tongue-tie division. Eight of the ten mothers experienced poor infant latching onto the breast, 6/10 experienced sore nipples and 5/10 experienced continual feeding cycles; 3/10 mothers were exclusively breastfeeding. Following a tongue-tie division, 4/10 mothers noted immediate improvements in breastfeeding, 3/10 mothers did not notice any improvements and 6/10 mothers continued breastfeeding for at least four months after the surgery. The study concluded that tongue-tie division may be a possible benefit for infants experiencing breastfeeding difficulties due to ankyloglossia and that further investigation is warranted. The limitations of this study include the small sample size and the fact that there was not a control group. In addition, the conclusions were based on subjective parent report as opposed to objective measures.
Speech
Messner and Lalakea studied speech in children with ankyloglossia. They noted that the phonemes likely to be affected due to ankyloglossia include sibilants and lingual sounds such as r. In addition, the authors also state that it is uncertain as to which patients will have a speech disorder that can be linked to ankyloglossia and that there is no way to predict at a young age which patients will need treatment. The authors studied 30 children from one to 12 years of age with ankyloglossia, all of whom underwent frenuloplasty. Fifteen children underwent speech evaluation before and after surgery. Eleven patients were found to have abnormal articulation before surgery and nine of these patients were found to have improved articulation after surgery. Based on the findings, the authors concluded that it is possible for children with ankyloglossia to have normal speech in spite of decreased tongue mobility. However, according to their study, a large percentage of children with ankyloglossia will have articulation deficits that can be linked to tongue-tie and these deficits may be improved with surgery. The authors also note that ankyloglossia does not cause a delay in speech or language, but at the most, problems with enunciation. Limitations of the study include a small sample size as well as a lack of blinding of the speech-language pathologists who evaluated the subjects’ speech.
Several recent systematic reviews and randomized controlled trials have argued that ankyloglossia does not impact speech sound development and that there is no difference in speech sound development between children who received surgery to release tongue-tie and those who did not.
Messner and Lalakea also examined speech and ankyloglossia in another study. They studied 15 patients, and speech was grossly normal in all the subjects. However, half of the subjects reported that they thought that their speech was more effortful than other people's speech.
Horton and colleagues discussed the relationship between ankyloglossia and speech. They believe that the tongue-tie contributes to difficulty in range and rate of articulation and that compensation is needed. Compensation at its worst may involve a Cupid's bow of the tongue.
Although the tongue-tie exists, and even years following surgery, common speech abnormalities include mispronunciation of words, the most common of which is pronouncing Ls as Ws; for example, the word "lemonade" would come out as "wemonade".
Mechanical and social effects
Ankyloglossia can result in mechanical and social effects. Lalakea and Messner studied 15 people, aged 14 to 68 years old. The subjects were given questionnaires in order to assess functional complaints associated with ankyloglossia. Eight subjects noted one or more mechanical limitations, which included cuts or discomfort underneath the tongue and difficulties with kissing, licking one's lips, eating an ice cream cone, keeping one's tongue clean and performing tongue tricks. In addition, seven subjects noted social effects such as embarrassment and teasing. The authors concluded that this study confirmed anecdotal evidence of mechanical problems associated with ankyloglossia, and it suggests that the kinds of mechanical and social problems noted may be more prevalent than previously thought. Furthermore, the authors note that some patients may be unaware of the extent of the limitations they have due to ankyloglossia, since they have never experienced a normal tongue range of motion. A limitation of this study is the small sample size, which also represented a large age range.
Lalakea and Messner note that mechanical and social effects may occur even without other problems related to ankyloglossia, such as speech and feeding difficulties. Also, mechanical and social effects may not arise until later in childhood, as younger children may be unable to recognize or report the effects. In addition, some problems, such as kissing, may not come about until later in life.
Tongue posture and mouth breathing
Ankyloglossia most often prevents the tongue from resting in its ideal posture, at the roof of the mouth. When the tongue rests at the roof of the mouth, it enables nasal breathing. A seemingly unrelated consequence of ankyloglossia is therefore chronic mouth breathing. Mouth breathing is correlated with other health issues such as enlarged tonsils and adenoids, chronic ear infections, and sleep-disordered breathing.
Dental issues
Ankyloglossia is correlated with teeth grinding (bruxism) and temporomandibular joint (TMJ) pain. When the tongue normally rests at the roof of the mouth, it leads to the development of an ideal "U"-shaped palate. Ankyloglossia often causes a narrow, "V"-shaped palate to develop, which crowds teeth and increases the potential need for braces and possibly jaw surgery.
Fascia and Muscle Compensation
The lingual frenulum under the tongue is part of the body's larger fascia network. When the tongue is restricted by an overly tight frenulum, the tightness can travel to other nearby parts of the body, such as the neck, causing muscle tightness and poor posture. The restricted tongue can force other muscles in the neck and jaw to compensate, causing muscle soreness.
Diagnosis
According to Horton et al., diagnosis of ankyloglossia may be difficult; it is not always apparent by looking at the underside of the tongue, but is often dependent on the range of movement permitted by the genioglossus muscles. For infants, passively elevating the tongue tip with a tongue depressor may reveal the problem. For older children, making the tongue move to its maximum range will demonstrate the tongue tip restriction. In addition, palpation of the genioglossus on the underside of the tongue will aid in confirming the diagnosis.
Some signs of ankyloglossia can be difficulty speaking, difficulty eating, ongoing dental issues, jaw pain, or migraines.
A severity scale for ankyloglossia, which grades the appearance and function of the tongue, is recommended for use by the Academy of Breastfeeding Medicine.
Treatment
There are varying types of intervention for ankyloglossia. Intervention for ankyloglossia sometimes includes surgery in the form of frenotomy (also called a frenectomy or frenulectomy) or frenuloplasty. This relatively common dental procedure may be done with soft-tissue lasers, such as the CO2 laser.
A frenotomy can be performed as a standalone procedure or as part of another surgery. The procedure is typically quick and is performed under local anesthesia. First, the area under the tongue is numbed with an injection. Once the patient is numb, a small incision is made in the tissue and the tongue is freed from its tether. The incision is then closed with dissolvable sutures. Recovery from a frenotomy is typically quick, and most patients experience little to no pain or discomfort.
According to Lalakea and Messner, surgery can be considered for patients of any age with a tight frenulum, as well as a history of speech, feeding, or mechanical/social difficulties </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Out and explain in detail? [/INST] </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is 1 and explain in detail? [/INST] 1 (one, unit, unity) is a number representing a single or the only entity. 1 is also a numerical digit and represents a single unit of counting or measurement. For example, a line segment of unit length is a line segment of length 1. In conventions of sign where zero is considered neither positive nor negative, 1 is the first and smallest positive integer. It is also sometimes considered the first of the infinite sequence of natural numbers, followed by 2, although by other definitions 1 is the second natural number, following 0.
The fundamental mathematical property of 1 is to be a multiplicative identity, meaning that any number multiplied by 1 equals the same number. Most if not all properties of 1 can be deduced from this. In advanced mathematics, a multiplicative identity is often denoted 1, even if it is not a number. 1 is by convention not considered a prime number; this was not universally accepted until the mid-20th century. Additionally, 1 is the smallest possible difference between two distinct natural numbers.
The unique mathematical properties of the number have led to its unique uses in other fields, ranging from science to sports. It commonly denotes the first, leading, or top thing in a group.
Etymology
The word one can be used as a noun, an adjective, and a pronoun. It comes from the English word an, which comes from the Proto-Germanic root *ainaz. The Proto-Germanic root *ainaz comes from the Proto-Indo-European root *oi-no-. Compare the Proto-Germanic root *ainaz to Old Frisian an, Gothic ains, Danish en, Dutch een, German eins and Old Norse einn.
Compare the Proto-Indo-European root *oi-no- (which means "one, single") to Greek oinos (which means "ace" on dice), Latin unus (one), Old Persian aivam, Old Church Slavonic -inu and ino-, Lithuanian vienas, Old Irish oin and Breton un (one).
As a number
One, sometimes referred to as unity, is the first non-zero natural number. It is thus the integer after zero.
Any number multiplied by one remains that number, as one is the identity for multiplication. As a result, 1 is its own factorial, its own square and square root, its own cube and cube root, and so on. One is also the result of the empty product, as any number multiplied by one is itself. It is also the only natural number that is neither composite nor prime with respect to division, but is instead considered a unit (in the sense of ring theory).
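As a quick sanity check of these identity properties, here is a minimal Python sketch (using only the standard library; math.prod, which returns 1 for an empty sequence, requires Python 3.8 or newer):

```python
import math

# 1 is the multiplicative identity: n * 1 == n for any number n.
for n in (7, -3.5, 10**9):
    assert n * 1 == n

# The empty product is 1 by convention.
assert math.prod([]) == 1

# 1 is its own factorial, square, cube, and square root.
assert math.factorial(1) == 1**2 == 1**3 == math.sqrt(1) == 1
```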
As a digit
The glyph used today in the Western world to represent the number 1, a vertical line, often with a serif at the top and sometimes a short horizontal line at the bottom, traces its roots back to the Brahmic script of ancient India, where it was a simple vertical line. It was transmitted to Europe via the Maghreb and Andalusia during the Middle Ages, through scholarly works written in Arabic.
In some countries, the serif at the top is sometimes extended into a long upstroke, sometimes as long as the vertical line, which can lead to confusion with the glyph used for seven in other countries. In styles in which the digit 1 is written with a long upstroke, the digit 7 is often written with a horizontal stroke through the vertical line, to disambiguate them. Styles that do not use the long upstroke on digit 1 usually do not use the horizontal stroke through the vertical of the digit 7 either.
While the shape of the character for the digit 1 has an ascender in most modern typefaces, in typefaces with text figures, the glyph usually is of x-height.
Many older typewriters lack a separate key for 1, using the lowercase letter l or uppercase I instead. It is possible to find cases when the uppercase J is used, though it may be for decorative purposes. In some typefaces, different glyphs are used for I and 1, but the numeral 1 resembles a small caps version of I, with parallel serifs at top and bottom, with the capital I being full-height.
Mathematics
Definitions
Mathematically, 1 is:
in arithmetic (algebra) and calculus, the natural number that follows 0 and the multiplicative identity element of the integers, real numbers and complex numbers;
more generally, in algebra, the multiplicative identity (also called unity), usually of a group or a ring.
Formalizations of the natural numbers have their own representations of 1. In the Peano axioms, 1 is the successor of 0. In Principia Mathematica, it is defined as the set of all singletons (sets with one element), and in the Von Neumann cardinal assignment of natural numbers, it is defined as the set {0}.
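Concretely, in the von Neumann construction just mentioned, each natural number is the set of all smaller natural numbers, so 1 is literally the set containing only 0 (in LaTeX notation):

0 = \varnothing, \qquad 1 = \{0\} = \{\varnothing\}, \qquad 2 = \{0, 1\} = \{\varnothing, \{\varnothing\}\}

In the Peano axioms, by contrast, 1 is simply S(0), the successor of 0.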
In a multiplicative group or monoid, the identity element is sometimes denoted 1, but e (from the German Einheit, "unity") is also traditional. However, 1 is especially common for the multiplicative identity of a ring, i.e., when an addition and 0 are also present. When such a ring has characteristic n not equal to 0, the element called 1 has the property that n1 = 1n = 0 (where this 0 is the additive identity of the ring). Important examples are finite fields.
By definition, 1 is the magnitude, absolute value, or norm of a unit complex number, unit vector, and a unit matrix (more usually called an identity matrix). Note that the term unit matrix is sometimes used to mean something quite different.
By definition, 1 is the probability of an event that is absolutely or almost certain to occur.
In category theory, 1 is sometimes used to denote the terminal object of a category.
In number theory, 1 is the value of Legendre's constant, which was introduced in 1808 by Adrien-Marie Legendre in expressing the asymptotic behavior of the prime-counting function. Legendre's constant was originally conjectured to be approximately 1.08366, but was proven to equal exactly 1 in 1899.
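In symbols, writing \pi(x) for the prime-counting function, Legendre's approximation and the constant B it defines can be restated (in standard notation, not taken from the article) as:

\pi(x) \approx \frac{x}{\ln x - B}, \qquad B = \lim_{x \to \infty} \left( \ln x - \frac{x}{\pi(x)} \right) = 1

The 1899 result referred to above is precisely that this limit equals 1 rather than Legendre's conjectured 1.08366.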
Properties
Tallying is often referred to as "base 1", since only one mark – the tally itself – is needed. This is more formally referred to as a unary numeral system. Unlike base 2 or base 10, this is not a positional notation.
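As an illustration, a base-1 "numeral" is nothing more than a run of tally marks. The helper below is a hypothetical sketch, not standard library functionality:

```python
def to_unary(n: int) -> str:
    """Represent a non-negative integer in 'base 1': one tally mark per unit."""
    if n < 0:
        raise ValueError("tallies can only represent non-negative integers")
    return "|" * n

print(to_unary(5))       # |||||
print(len(to_unary(5)))  # 5 -- the value is simply the number of marks
```

Because every mark contributes the same amount no matter where it stands, there is no place value, which is exactly why tallying is not a positional notation.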
Since the base 1 exponential function (1^x) always equals 1, its inverse does not exist (which would be called the logarithm base 1 if it did exist).
There are two ways to write the real number 1 as a recurring decimal: as 1.000..., and as 0.999....
1 is the first figurate number of every kind, such as triangular number, pentagonal number and centered hexagonal number, to name just a few.
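The equality of the two expansions 1.000... and 0.999... follows from summing a geometric series:

0.\overline{9} = \sum_{k=1}^{\infty} \frac{9}{10^{k}} = \frac{9/10}{1 - 1/10} = 1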
In many mathematical and engineering problems, numeric values are typically normalized to fall within the unit interval from 0 to 1, where 1 usually represents the maximum possible value in the range of parameters. Likewise, vectors are often normalized into unit vectors (i.e., vectors of magnitude one), because these often have more desirable properties. Functions, too, are often normalized by the condition that they have integral one, maximum value one, or square integral one, depending on the application.
Because of the multiplicative identity, if f(x) is a multiplicative function, then f(1) must be equal to 1.
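This follows from a one-line computation: a multiplicative function satisfies f(mn) = f(m)f(n) whenever m and n are coprime, and 1 is coprime to itself, so

f(1) = f(1 \cdot 1) = f(1)\,f(1) \quad \Longrightarrow \quad f(1)\bigl(f(1) - 1\bigr) = 0

If f(1) were 0, then f(n) = f(n \cdot 1) = f(n)f(1) = 0 for every n; hence for any multiplicative function that is not identically zero, f(1) = 1.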
It is also the first and second number in the Fibonacci sequence (0 being the zeroth) and is the first number in many other mathematical sequences.
The definition of a field requires that 1 must not be equal to 0. Thus, there are no fields of characteristic 1. Nevertheless, abstract algebra can consider the field with one element, which is not a singleton and is not a set at all.
1 is the most common leading digit in many sets of data, a consequence of Benford's law.
1 is the only known Tamagawa number for a simply connected algebraic group over a number field.
The generating function that has all coefficients 1 is given by
\frac{1}{1-x} = 1 + x + x^{2} + x^{3} + \ldots
This power series converges and has finite value if and only if |x| < 1.
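As a numerical sanity check of the convergence claim, here is a small Python sketch (the choice x = 0.5 is arbitrary; any value with |x| < 1 behaves the same way):

```python
# Partial sums of 1 + x + x^2 + ... should approach 1/(1 - x) when |x| < 1.
x = 0.5
partial_sum = sum(x**k for k in range(60))

print(partial_sum)   # 2.0 (to floating-point precision)
print(1 / (1 - x))   # 2.0
```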
Primality
1 is by convention neither a prime number nor a composite number, but a unit (in the sense of ring theory), like −1 and, in the Gaussian integers, i and −i.
The fundamental theorem of arithmetic guarantees unique factorization over the integers only up to units. For example, 4 = 2^2, but if units are included, it is also equal to, say, (−1)^6 × 1^23 × 2^2, among infinitely many similar "factorizations".
1 appears to meet the naïve definition of a prime number, being evenly divisible only by 1 and itself (also 1). As such, some mathematicians considered it a prime number as late as the middle of the 20th century, but the mathematical consensus has since been to exclude it universally, for a variety of reasons (such as complicating the fundamental theorem of arithmetic and other theorems related to prime numbers) </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Ear and explain in detail? [/INST] An ear is the organ that enables hearing and, in mammals, body balance using the vestibular system. In mammals, the ear is usually described as having three parts—the outer ear, the middle ear and the inner ear. The outer ear consists of the pinna and the ear canal. Since the outer ear is the only visible portion of the ear in most animals, the word "ear" often refers to the external part alone. The middle ear includes the tympanic cavity and the three ossicles. The inner ear sits in the bony labyrinth, and contains structures which are key to several senses: the semicircular canals, which enable balance and eye tracking when moving; the utricle and saccule, which enable balance when stationary; and the cochlea, which enables hearing. The ears of vertebrates are placed somewhat symmetrically on either side of the head, an arrangement that aids sound localisation.
The ear develops from the first pharyngeal pouch and six small swellings that develop in the early embryo called otic placodes, which are derived from ectoderm.
The ear may be affected by disease, including infection and traumatic damage. Diseases of the ear may lead to hearing loss, tinnitus and balance disorders such as vertigo, although many of these conditions may also be affected by damage to the brain or neural pathways leading from the ear.
The ear has been adorned by earrings and other jewelry in numerous cultures for thousands of years, and has been subjected to surgical and cosmetic alterations.
Structure
The human ear consists of three parts—the outer ear, middle ear and inner ear. The ear canal of the outer ear is separated from the air-filled tympanic cavity of the middle ear by the eardrum. The middle ear contains the three small bones—the ossicles—involved in the transmission of sound, and is connected to the throat at the nasopharynx, via the pharyngeal opening of the Eustachian tube. The inner ear contains the otolith organs—the utricle and saccule—and the semicircular canals belonging to the vestibular system, as well as the cochlea of the auditory system.
Outer ear
The outer ear is the external portion of the ear and includes the fleshy visible pinna (also called the auricle), the ear canal, and the outer layer of the eardrum (also called the tympanic membrane).
The pinna consists of the curving outer rim called the helix, the inner curved rim called the antihelix, and opens into the ear canal. The tragus protrudes and partially obscures the ear canal, as does the facing antitragus. The hollow region in front of the ear canal is called the concha. The ear canal stretches for about 1 inch (2.5 cm). The first part of the canal is surrounded by cartilage, while the second part near the eardrum is surrounded by bone. This bony part is known as the auditory bulla and is formed by the tympanic part of the temporal bone. The skin surrounding the ear canal contains ceruminous and sebaceous glands that produce protective ear wax. The ear canal ends at the external surface of the eardrum.
Two sets of muscles are associated with the outer ear: the intrinsic and extrinsic muscles. In some mammals, these muscles can adjust the direction of the pinna. In humans, these muscles have little or no effect. The ear muscles are supplied by the facial nerve, which also supplies sensation to the skin of the ear itself, as well as to the external ear cavity. The great auricular nerve, auricular nerve, auriculotemporal nerve, and lesser and greater occipital nerves of the cervical plexus all supply sensation to parts of the outer ear and the surrounding skin.
The pinna consists of a single piece of elastic cartilage with a complicated relief on its inner surface and a fairly smooth configuration on its posterior surface. A tubercle, known as Darwin's tubercle, is sometimes present, lying in the descending part of the helix and corresponding to the ear-tip of mammals. The earlobe consists of areolar and adipose tissue. The symmetrical arrangement of the two ears allows for the localisation of sound. The brain accomplishes this by comparing arrival-times and intensities from each ear, in circuits located in the superior olivary complex and the trapezoid bodies which are connected via pathways to both ears.
Middle ear
The middle ear lies between the outer ear and the inner ear. It consists of an air-filled cavity called the tympanic cavity and includes the three ossicles and their attaching ligaments; the auditory tube; and the round and oval windows. The ossicles are three small bones that function together to receive, amplify, and transmit the sound from the eardrum to the inner ear. The ossicles are the malleus (hammer), incus (anvil), and the stapes (stirrup). The stapes is the smallest named bone in the body. The middle ear also connects to the upper throat at the nasopharynx via the pharyngeal opening of the Eustachian tube.
The three ossicles transmit sound from the outer ear to the inner ear. The malleus receives vibrations from sound pressure on the eardrum, where it is connected at its longest part (the manubrium or handle) by a ligament. It transmits vibrations to the incus, which in turn transmits the vibrations to the small stapes bone. The wide base of the stapes rests on the oval window. As the stapes vibrates, vibrations are transmitted through the oval window, causing movement of fluid within the cochlea.
The round window allows for the fluid within the inner ear to move. As the stapes pushes the secondary tympanic membrane, fluid in the inner ear moves and pushes the membrane of the round window out by a corresponding amount into the middle ear. The ossicles help amplify sound waves by nearly 15–20 times.
Inner ear
The inner ear sits within the temporal bone in a complex cavity called the bony labyrinth. A central area known as the vestibule contains two small fluid-filled recesses, the utricle and saccule. These connect to the semicircular canals and the cochlea. There are three semicircular canals angled at right angles to each other which are responsible for dynamic balance. The cochlea is a spiral shell-shaped organ responsible for the sense of hearing. These structures together create the membranous labyrinth.
The bony labyrinth refers to the bony compartment which contains the membranous labyrinth, contained within the temporal bone. The inner ear structurally begins at the oval window, which receives vibrations from the incus of the middle ear. Vibrations are transmitted into the inner ear into a fluid called endolymph, which fills the membranous labyrinth. The endolymph is situated in two vestibules, the utricle and saccule, and eventually transmits to the cochlea, a spiral-shaped structure. The cochlea consists of three fluid-filled spaces: the vestibular duct, the cochlear duct, and the tympanic duct. Hair cells responsible for transduction (converting mechanical movement into electrical stimuli) are present in the organ of Corti in the cochlea.
Blood supply
The blood supply of the ear differs according to each part of the ear.
The outer ear is supplied by a number of arteries. The posterior auricular artery provides the majority of the blood supply. The anterior auricular arteries provide some supply to the outer rim of the ear and scalp behind it. The posterior auricular artery is a direct branch of the external carotid artery, and the anterior auricular arteries are branches from the superficial temporal artery. The occipital artery also plays a role.
The middle ear is supplied by the mastoid branch of either the occipital or posterior auricular arteries and the deep auricular artery, a branch of the maxillary artery. Other arteries which are present but play a smaller role include branches of the middle meningeal artery, ascending pharyngeal artery, internal carotid artery, and the artery of the pterygoid canal.
The inner ear is supplied by the anterior tympanic branch of the maxillary artery; the stylomastoid branch of the posterior auricular artery; the petrosal branch of middle meningeal artery; and the labyrinthine artery, arising from either the anterior inferior cerebellar artery or the basilar artery.
Function
Hearing
Sound waves travel through the outer ear, are modulated by the middle ear, and are transmitted to the vestibulocochlear nerve in the inner ear. This nerve transmits information to the temporal lobe of the brain, where it is registered as sound.
Sound that travels through the outer ear impacts on the eardrum, and causes it to vibrate. The three ossicle bones transmit this sound to a second window (the oval window) which protects the fluid-filled inner ear. In detail, the pinna of the outer ear helps to focus a sound, which impacts on the eardrum. The malleus rests on the membrane, and receives the vibration. This vibration is transmitted along the incus and stapes to the oval window. Two small muscles, the tensor tympani and stapedius, also help modulate noise. The two muscles reflexively contract to dampen excessive vibrations. Vibration of the oval window causes vibration of the endolymph within the vestibule and the cochlea.
The inner ear houses the apparatus necessary to change the vibrations transmitted from the outside world via the middle ear into signals passed along the vestibulocochlear nerve to the brain. The hollow channels of the inner ear are filled with liquid, and contain a sensory epithelium that is studded with hair cells. The microscopic "hairs" of these cells are structural protein filaments that project out into the fluid. The hair cells are mechanoreceptors that release a chemical neurotransmitter when stimulated. Sound waves moving through fluid flow against the receptor cells of the organ of Corti. The fluid pushes the filaments of individual cells; movement of the filaments causes receptor cells to become open to receive the potassium-rich endolymph. This causes the cell to depolarise, and creates an action potential that is transmitted along the spiral ganglion, which sends information through the auditory portion of the vestibulocochlear nerve to the temporal lobe of the brain.
The human ear can generally hear sounds with frequencies between 20 Hz and 20 kHz (the audio range). Sounds outside this range are considered infrasound (below 20 Hz) or ultrasound (above 20 kHz). Although hearing requires an intact and functioning auditory portion of the central nervous system as well as a working ear, human deafness (extreme insensitivity to sound) most commonly occurs because of abnormalities of the inner ear, rather than in the nerves or tracts of the central auditory system.
Balance
Providing balance, when moving or stationary, is also a central function of the ear. The ear facilitates two types of balance: static balance, which allows a person to feel the effects of gravity, and dynamic balance, which allows a person to sense acceleration.
Static balance is provided by two ventricles, the utricle and the saccule. Cells lining the walls of these ventricles contain fine filaments, and the cells are covered with a fine gelatinous layer. Each cell has 50–70 small filaments, and one large filament, the kinocilium. Within the gelatinous layer lie otoliths, tiny formations of calcium carbonate. When a person moves, these otoliths shift position. This shift alters the positions of the filaments, which opens ion channels within the cell membranes, creating depolarisation and an action potential that is transmitted to the brain along the vestibulocochlear nerve.
Dynamic balance is provided through the three semicircular canals. These three canals are orthogonal (at right angles) to each other. At the end of each canal is a slight enlargement, known as the ampulla, which contains numerous cells with filaments in a central area called the cupula. The fluid in these canals rotates according to the momentum of the head. When a person changes acceleration, the inertia of the fluid changes. This affects the pressure on the cupula, and results in the opening of ion channels. This causes depolarisation, which is passed as a signal to the brain along the vestibulocochlear nerve. Dynamic balance also helps maintain eye tracking when moving, via the vestibulo–ocular reflex.
Development
During embryogenesis the ear develops as three distinct structures: the inner ear, the middle ear and the outer ear. Each structure originates from a different germ layer: the ectoderm, endoderm and mesenchyme.
Inner ear
After implantation, around the second to third week the developing embryo consists of three layers: endoderm, mesoderm and ectoderm. The first part of the ear to develop is the inner ear, which begins to form from the ectoderm around the 22nd day of the embryo's development. Specifically, the inner ear derives from two thickenings called otic placodes on either side of the head. Each otic placode recedes below the ectoderm, forms an otic pit and then an otic vesicle. This entire mass will eventually become surrounded by mesenchyme to form the bony labyrinth.
Around the 33rd day of development, the vesicles begin to differentiate. Closer to the back of the embryo, they form what will become the utricle and semicircular canals. Closer to the front of the embryo, the vesicles differentiate into a rudimentary saccule, which will eventually become the saccule and cochlea. Part of the saccule will eventually give rise and connect to the cochlear duct. This duct appears approximately during the sixth week and connects to the saccule through the ductus reuniens.
As the cochlear duct's mesenchyme begins to differentiate, three cavities are formed: the scala vestibuli, the scala tympani and the scala media. Both the scala vestibuli and the scala tympani contain an extracellular fluid called perilymph. The scala media contains endolymph. A set of membranes called the vestibular membrane and the basilar membrane develop to separate the cochlear duct from the vestibular duct and the tympanic duct, respectively.
Parts of the otic vesicle in turn form the vestibulocochlear nerve. These form bipolar neurons which supply sensation to parts of the inner ear (namely the sensory parts of the semicircular canals, maculae of the utricle and saccule, and organ of Corti). The nerve begins to form around the 28th day.
Molecular regulation
Most of the genes responsible for the regulation of inner ear formation and its morphogenesis are members of the homeobox gene family such as Pax, Msx and Otx homeobox genes. The development of inner ear structures such as the cochlea is regulated by Dlx5/Dlx6, Otx1/Otx2 and Pax2, which in turn are controlled by the master gene Shh. Shh is secreted by the notochord.
Middle ear
The middle ear and its components develop from the first and second pharyngeal arches. The tympanic cavity and auditory tube develop from the first part of the pharyngeal pouch between the first two arches in an area which will also go on to develop the pharynx. This develops as a structure called the tubotympanic recess. The ossicles (malleus, incus and stapes) normally appear during the first half of fetal development. The first two (malleus and incus) derive from the first pharyngeal arch and the stapes derives from the second. All three ossicles develop from the neural crest. Eventually cells from the tissue surrounding the ossicles will undergo apoptosis, and a new layer of endodermal epithelium will form the wall of the tympanic cavity.
Outer ear
Unlike structures of the inner and middle ear, which develop from pharyngeal pouches, the ear canal originates from the dorsal portion of the first pharyngeal cleft. It is fully expanded by the end of the 18th week of development. The eardrum is made up of three layers (ectoderm, endoderm and connective tissue). The pinna originates as a fusion of six hillocks. The first three hillocks are derived from the lower part of the first pharyngeal arch and form the tragus, crus of the helix, and helix, respectively. The final three hillocks are derived from the upper part of the second pharyngeal arch and form the antihelix, antitragus, and earlobe. The outer ears develop in the lower neck. As the mandible forms they move towards their final position level with the eyes.
Clinical significance
Hearing loss
Hearing loss may be either partial or total. This may be a result of injury or damage, congenital disease, or physiological causes. When hearing loss is a result of injury or damage to the outer ear or middle ear, it is known as conductive hearing loss. When deafness is a result of injury or damage to the inner ear, vestibulocochlear nerve, or brain, it is known as sensorineural hearing loss.
Causes of conductive hearing loss include an ear canal blocked by ear wax, ossicles that are fixed together or absent, or holes in the eardrum. Conductive hearing loss may also result from middle ear inflammation causing fluid build-up in the normally air-filled space, such as by otitis media. Tympanoplasty is the general name of the operation to repair the middle ear's eardrum and ossicles. Grafts from muscle fascia are ordinarily used to rebuild an intact eardrum. Sometimes artificial ear bones are placed to substitute for damaged ones, or a disrupted ossicular chain is rebuilt in order to conduct sound effectively.
Hearing aids or cochlear implants may be used if the hearing loss is severe or prolonged. Hearing aids work by amplifying the sound of the local environment and are best suited to conductive hearing loss. Cochlear implants transmit the sound that is heard as if it were a nervous signal, bypassing the cochlea. Active middle ear implants send sound vibrations to the ossicles in the middle ear, bypassing any non-functioning parts of the outer and middle ear.
Congenital abnormalities
Anomalies and malformations of the pinna are common. These anomalies include chromosome syndromes such as ring 18. Children may also present cases of abnormal ear canals and low ear implantation. In rare cases no pinna is formed (atresia), or is extremely small (microtia). Small pinnae can develop when the auricular hillocks do not develop properly. The ear canal can fail to develop if it does not channelise properly or if there is an obstruction. Reconstructive surgery to treat hearing loss is considered an option for children older than five; a cosmetic surgical procedure to reduce the size or change the shape of the ear is called an otoplasty. The initial medical intervention is aimed at assessing the baby's hearing and the condition of the ear canal, as well as the middle and inner ear. Depending on the results of tests, reconstruction of the outer ear is done in stages, with planning for any possible repairs of the rest of the ear.
Approximately one out of one thousand children suffers some type of congenital deafness related to the development of the inner ear. Inner ear congenital anomalies are related to sensorineural hearing loss and are generally diagnosed with a computed tomography (CT) scan or a magnetic resonance imaging (MRI) scan. Hearing loss problems also derive from inner ear anomalies because its development is separate from that of the middle and external ear. Middle ear anomalies can occur because of errors during head and neck development. The first pharyngeal pouch syndrome associates middle ear anomalies to the malleus and incus structures as well as to the non-differentiation of the annular stapedial ligament. Temporal bone and ear canal anomalies are also related to this structure of the ear and are known to be associated with sensorineural hearing loss and conductive hearing loss.
Vertigo
Vertigo refers to the inappropriate perception of motion. This is due to dysfunction of the vestibular system. One common type of vertigo is benign paroxysmal positional vertigo, when an otolith is displaced from the ventricles to the semicircular canal. The displaced otolith rests on the cupula, causing a sensation of movement when there is none. Ménière's disease, labyrinthitis, strokes, and other infective and congenital diseases may also result in the perception of vertigo.
Injury
Outer ear
Injuries to the external ear occur fairly frequently, and can leave minor to major deformity. Injuries include: laceration, avulsion injuries, burn and repeated twisting or pulling of an ear, for discipline or torture. Chronic damage to the ears can cause cauliflower ear, a common condition in boxers and wrestlers in which the cartilage around the ears becomes lumpy and distorted owing to persistence of a haematoma around the perichondrium, which can impair blood supply and healing. Owing to its exposed position, the external ear is susceptible to frostbite as well as skin cancers, including squamous-cell carcinoma and basal-cell carcinomas.
Middle ear
The eardrum may become perforated in the event of a large sound or explosion, when diving or flying (called barotrauma), or by objects inserted into the ear. Another common cause of injury is due to an infection such as otitis media. These may cause a discharge from the ear called otorrhea, and are often investigated by otoscopy and audiometry. Treatment may include watchful waiting, antibiotics and possibly surgery, if the injury is prolonged or the position of the ossicles is affected. Skull fractures that go through the part of the skull containing the ear structures (the temporal bone) can also cause damage to the middle ear. A cholesteatoma is a cyst of squamous skin cells that may develop from birth or secondary to other causes such as chronic ear infections. It may impair hearing or cause dizziness or vertigo, and is usually investigated by otoscopy and may require a CT scan. The treatment for cholesteatoma is surgery.
Inner ear
There are two principal damage mechanisms to the inner ear in industrialised society, and both injure hair cells. The first is exposure to elevated sound levels (noise trauma), and the second is exposure to drugs and other substances (ototoxicity). A large number of people are exposed to sound levels on a daily basis that are likely to lead to significant hearing loss. The National Institute for Occupational Safety and Health has recently published research on the estimated numbers of persons with hearing difficulty (11%) and the percentage of those that can be attributed to occupational noise exposure (24%). Furthermore, according to the National Health and Nutrition Examination Survey (NHANES), approximately twenty-two million (17%) US workers reported exposure to hazardous workplace noise. Workers exposed to hazardous noise further exacerbate the potential for developing noise-induced hearing loss when they do not wear hearing protection.
Tinnitus
Tinnitus is the hearing of sound when no external sound is present. While often described as a ringing, it may also sound like a clicking, hiss or roaring. Rarely, unclear voices or music are heard. The sound may be soft or loud, low pitched or high pitched and appear to be coming from one ear or both. Most of the time, it comes on gradually. In some people, the sound causes depression, anxiety, or concentration difficulties.
Tinnitus is not a disease but a symptom that can result from a number of underlying causes. One of the most common causes is noise-induced hearing loss. Other causes include: ear infections, disease of the heart or blood vessels, Ménière's disease, brain tumors, emotional stress, exposure to certain medications, a previous head injury, and earwax. It is more common in those with depression and anxiety.
Society and culture
The ears have been ornamented with jewelry for thousands of years, traditionally by piercing of the earlobe. In ancient and modern cultures, ornaments have been placed to stretch and enlarge the earlobes, allowing for larger plugs to be slid into a large fleshy gap in the lobe. Tearing of the earlobe from the weight of heavy earrings, or from traumatic pull of an earring (for example, by snagging on a sweater), is fairly common.
Injury to the ears has been present since Roman times as a method of reprimand or punishment – "In Roman times, when a dispute arose that could not be settled amicably, the injured party cited the name of the person thought to be responsible before the Praetor; if the offender did not appear within the specified time limit, the complainant summoned witnesses to make statements. If they refused, as often happened, the injured party was allowed to drag them by the ear and to pinch them hard if they resisted. Hence the French expression "se faire tirer l'oreille", of which the literal meaning is "to have one's ear pulled" and the figurative meaning "to take a lot of persuading". We use the expression "to tweak (or pull) someone's ears" to mean "inflict a punishment"."
The pinnae have an effect on facial appearance. In Western societies, protruding ears (present in about 5% of ethnic Europeans) have been considered unattractive, particularly if asymmetric. The first surgery to reduce the projection of prominent ears was published in the medical literature by Ernst Dieffenbach in 1845, and the first case report in 1881.
Pointy ears are a characteristic of some creatures in folklore such as the French croquemitaine, Brazilian curupira or Japanese earth spider. It has been a feature of characters on art as old as that of Ancient Greece and medieval Europe. Pointy ears are a common characteristic of many creatures in the fantasy genre, including elves, faeries, pixies, hobbits, or orcs. They are a characteristic of creatures in the horror genre, such as vampires. Pointy ears are also found in the science fiction genre; for example among the Vulcan and Romulan races of the Star Trek universe and the Nightcrawler character from the X-Men universe.
Georg von Békésy was a Hungarian biophysicist born in Budapest, Hungary. In 1961, he was awarded the Nobel Prize in Physiology or Medicine for his research on the function of the cochlea in the mammalian hearing organ.
The Vacanti mouse was a laboratory mouse that had what looked like a human ear grown on its back. The "ear" was actually an ear-shaped cartilage structure grown by seeding cow cartilage cells into a biodegradable ear-shaped mold and then implanted under the skin of the mouse; then the cartilage naturally grew by itself. It was developed as an alternative to ear repair or grafting procedures and the results met with much publicity and controversy in 1997.
Other animals
The pinna helps direct sound through the ear canal to the eardrum. The complex geometry of ridges on the inner surface of some mammalian ears helps to sharply focus sounds produced by prey, using echolocation signals. These ridges can be regarded as the acoustic equivalent of a Fresnel lens, and may be seen in a wide range of animals, including the bat, aye-aye, lesser galago, bat-eared fox, mouse lemur and others.
Some large primates such as gorillas and orang-utans (and also humans) have undeveloped ear muscles that are non-functional vestigial structures, yet are still large enough to be easily identified. An ear muscle that cannot move the ear, for whatever reason, has lost that biological function. This serves as evidence of homology between related species. In humans, there is variability in these muscles, such that some people are able to move their ears in various directions, and it has been said that it may be possible for others to gain such movement by repeated trials. In such primates, the inability to move the ear is compensated for mainly by the ability to easily turn the head on a horizontal plane, an ability which is not common to most monkeys—a function once provided by one structure is now replaced by another.
In some animals with mobile pinnae (like the horse), each pinna can be aimed independently to better receive the sound. For these animals, the pinnae help localise the direction of the sound source.
The ear, with its blood vessels close to the surface, is an essential thermoregulator in some land mammals, including the elephant, the fox, and the rabbit. There are five types of ear carriage in domestic rabbits, some of which have been bred for exaggerated ear length—a potential health risk that is controlled in some countries. Abnormalities in the skull of a half-lop rabbit were studied by Charles Darwin in 1868. In marine mammals, earless seals are one of three groups of Pinnipedia.
Invertebrates
Only vertebrate animals have ears, though many invertebrates detect sound using other kinds of sense organs </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Left bundle branch block and explain in detail? [/INST] Left bundle branch block (LBBB) is a conduction abnormality in the heart that can be seen on an electrocardiogram (ECG). In this condition, activation of the left ventricle of the heart is delayed, which causes the left ventricle to contract later than the right ventricle.
Causes
Among the causes of LBBB are:
Aortic stenosis
Dilated cardiomyopathy
Acute myocardial infarction
Extensive coronary artery disease
Primary disease of the cardiac electrical conduction system
Long standing hypertension leading to aortic root dilatation and subsequent aortic regurgitation
Lyme disease
Mechanisms
Slow or absent conduction through the left bundle branch means that it takes longer than normal for the left ventricle to fully depolarise. This can be due to a damaged bundle branch that is completely unable to conduct, but may represent intact conduction that is slower than normal. LBBB may be fixed and present at all times, or may be intermittent, for example occurring only during rapid heart rates </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Erythema toxicum neonatorum and explain in detail? [/INST] Erythema toxicum neonatorum is a common, non-threatening rash in newborns. It appears in 4-70% of newborns within the first week of life, and it typically improves within 1–2 weeks. It only occurs during the newborn period, but may appear slightly later in premature babies. The rash has a variable appearance. It typically includes blotchy red spots, often with overlying firm, yellow-white bumps or pus-filled boils. There may be only a few or many lesions. The lesions can appear almost anywhere on the body, and individual lesions may appear and disappear within hours. There are no other symptoms associated with erythema toxicum neonatorum, and the rash does not have any long-term effects on the skin. Erythema toxicum neonatorum is not harmful and does not require any treatment.
Epidemiology
The exact prevalence of erythema toxicum neonatorum is unknown, and studies estimate prevalence as low as 3.7 percent to as high as 72 percent. It is one of the most commonly diagnosed rashes in healthy babies. It is more common among infants born at higher gestational age and is rare among premature infants. Erythema toxicum neonatorum is more likely to develop in infants delivered vaginally. Higher birth weight is an additional risk factor. There may be a slightly increased risk in males, but this association is unclear. There are no known associations with race or ethnicity.
Presentation
Erythema toxicum neonatorum usually appears during the first week of life, most often on day two. It may develop several days or weeks later in premature babies.
The rash has a variable appearance, ranging from a few blotchy red spots to many yellow-white bumps and boils. The classic presentation is 1–3 mm, firm, yellow-white bumps with a surrounding red halo. The rash is often described as "flea-bitten." There may be only a few or many lesions, and they may be clustered or widespread. The rash often appears on the cheeks first and may later spread throughout the face, trunk, arms, and legs. Lesions most often appear on the thighs, buttocks, and trunk. The palms and soles are not affected, likely because the skin in these areas does not have hair follicles.
Individual lesions may grow and shrink over hours or days. They usually fade within a week or two. The rash may recur within the next few weeks, but this is rare.
Cause
Erythema toxicum neonatorum is related to activation of the immune system, but its exact cause is unknown. Many inflammatory factors have been detected in erythema toxicum neonatorum lesions, including IL-1alpha, IL-1beta, IL-8, and eotaxin. These molecules cause redness and swelling in the skin and attract immune cells.
Eosinophils and other immune cells are found in the upper layer of the skin in erythema toxicum neonatorum lesions. Immune cells tend to cluster around hair follicles in particular. The leading hypothesis about the cause of erythema toxicum neonatorum is that bacteria activate the immune system when they enter hair follicles for the first time. This is part of a normal process in which bacteria from the environment start to grow on a baby's skin. It is unknown whether the immune response that causes erythema toxicum neonatorum is helpful to the baby. Recent research indicates an association with Demodex mite infestation (demodicosis).
Diagnosis
Health professionals can diagnose erythema toxicum neonatorum with a skin exam. Most cases of erythema toxicum neonatorum can be diagnosed without further testing. If more testing is needed to make a diagnosis, the contents of a lesion can be examined under a microscope. A health professional may make a small cut into a pus-filled lesion and collect a swab of pus for testing. Lesions caused by erythema toxicum neonatorum contain eosinophils and other immune cells. These cells can be seen under a microscope when a special stain is applied to the sample.
Since the appearance of erythema toxicum neonatorum varies, it may be confused with other newborn rashes. Some newborn infections cause bumps or boils, which may look like erythema toxicum neonatorum. Bacterial infections, including Staphylococcus and Streptococcus infections, almost always cause additional symptoms. These symptoms may be severe, and they are usually not limited to rash. Bacterial rashes can be diagnosed by testing pus from a lesion along with a blood sample. Bacteria can be seen under a microscope with a special stain or may be found on a culture. Fungal infection with Candida may also cause a similar rash in newborns, but it usually causes additional symptoms like thrush. Similarly, fungus can be seen under a microscope or found on a culture. Some viral infections may cause a rash with boils on a reddish base </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Disease and explain in detail? [/INST] A disease is a particular abnormal condition that negatively affects the structure or function of all or part of an organism, and that is not immediately due to any external injury. Diseases are often known to be medical conditions that are associated with specific signs and symptoms. A disease may be caused by external factors such as pathogens or by internal dysfunctions. For example, internal dysfunctions of the immune system can produce a variety of different diseases, including various forms of immunodeficiency, hypersensitivity, allergies and autoimmune disorders.
In humans, disease is often used more broadly to refer to any condition that causes pain, dysfunction, distress, social problems, or death to the person affected, or similar problems for those in contact with the person. In this broader sense, it sometimes includes injuries, disabilities, disorders, syndromes, infections, isolated symptoms, deviant behaviors, and atypical variations of structure and function, while in other contexts and for other purposes these may be considered distinguishable categories. Diseases can affect people not only physically, but also mentally, as contracting and living with a disease can alter the affected persons perspective on life.
Death due to disease is called death by natural causes. There are four main types of disease: infectious diseases, deficiency diseases, hereditary diseases (including both genetic diseases and non-genetic hereditary diseases), and physiological diseases. Diseases can also be classified in other ways, such as communicable versus non-communicable diseases. The deadliest diseases in humans are coronary artery disease (blood flow obstruction), followed by cerebrovascular disease and lower respiratory infections. In developed countries, the diseases that cause the most sickness overall are neuropsychiatric conditions, such as depression and anxiety.
The study of disease is called pathology, which includes the study of etiology, or cause.
Terminology
Concepts
In many cases, terms such as disease, disorder, morbidity, sickness and illness are used interchangeably; however, there are situations when specific terms are considered preferable.
Disease
The term disease broadly refers to any condition that impairs the normal functioning of the body. For this reason, diseases are associated with the dysfunction of the bodys normal homeostatic processes. Commonly, the term is used to refer specifically to infectious diseases, which are clinically evident diseases that result from the presence of pathogenic microbial agents, including viruses, bacteria, fungi, protozoa, multicellular organisms, and aberrant proteins known as prions. An infection or colonization that does not and will not produce clinically evident impairment of normal functioning, such as the presence of the normal bacteria and yeasts in the gut, or of a passenger virus, is not considered a disease. By contrast, an infection that is asymptomatic during its incubation period, but expected to produce symptoms later, is usually considered a disease. Non-infectious diseases are all other diseases, including most forms of cancer, heart disease, and genetic disease.
Acquired disease
An acquired disease is one that began at some point during ones lifetime, as opposed to disease that was already present at birth, which is congenital disease. Acquired sounds like it could mean "caught via contagion", but it simply means acquired sometime after birth. It also sounds like it could imply secondary disease, but acquired disease can be primary disease.
Acute disease
An acute disease is one of a short-term nature (acute); the term sometimes also connotes a fulminant nature.
Chronic condition or chronic disease
A chronic disease is one that persists over time, often characterized as at least six months but may also include illnesses that are expected to last for the entirety of ones natural life.
Congenital disorder or congenital disease
A congenital disorder is one that is present at birth. It is often a genetic disease or disorder and can be inherited. It can also be the result of a vertically transmitted infection from the mother, such as HIV/AIDS.
Genetic disease
A genetic disorder or disease is caused by one or more genetic mutations. It is often inherited, but some mutations are random and de novo.
Hereditary or inherited disease
A hereditary disease is a type of genetic disease caused by genetic mutations that are hereditary (and can run in families).
Iatrogenic disease
An iatrogenic disease or condition is one that is caused by medical intervention, whether as a side effect of a treatment or as an inadvertent outcome.
Idiopathic disease
An idiopathic disease has an unknown cause or source. As medical science has advanced, many diseases with entirely unknown causes have had some aspects of their sources explained and therefore shed their idiopathic status. For example, when germs were discovered, it became known that they were a cause of infection, but particular germs and diseases had not been linked. In another example, it is known that autoimmunity is the cause of some forms of diabetes mellitus type 1, even though the particular molecular pathways by which it works are not yet understood. It is also common to know certain factors are associated with certain diseases; however, association and causality are two very different phenomena, as a third cause might be producing the disease, as well as an associated phenomenon.
Incurable disease
A disease that cannot be cured. Incurable diseases are not necessarily terminal diseases, and sometimes a diseases symptoms can be treated sufficiently for the disease to have little or no impact on quality of life.
Primary disease
A primary disease is a disease that is due to a root cause of illness, as opposed to secondary disease, which is a sequela, or complication that is caused by the primary disease. For example, a common cold is a primary disease, where rhinitis is a possible secondary disease, or sequela. A doctor must determine what primary disease, a cold or bacterial infection, is causing a patient's secondary rhinitis when deciding whether or not to prescribe antibiotics.
Secondary disease
A secondary disease is a disease that is a sequela or complication of a prior, causal disease, which is referred to as the primary disease or simply the underlying cause (root cause). For example, a bacterial infection can be primary, wherein a healthy person is exposed to bacteria and becomes infected, or it can be secondary to a primary cause, that predisposes the body to infection. For example, a primary viral infection that weakens the immune system could lead to a secondary bacterial infection. Similarly, a primary burn that creates an open wound could provide an entry point for bacteria, and lead to a secondary bacterial infection.
Terminal disease
A terminal disease is one that is expected to have the inevitable result of death. Previously, AIDS was a terminal disease; it is now incurable, but can be managed indefinitely using medications.
Illness
The terms illness and sickness are both generally used as synonyms for disease; however, the term illness is occasionally used to refer specifically to the patient's personal experience of his or her disease. In this model, it is possible for a person to have a disease without being ill (to have an objectively definable, but asymptomatic, medical condition, such as a subclinical infection, or to have a clinically apparent physical impairment but not feel sick or distressed by it), and to be ill without being diseased (such as when a person perceives a normal experience as a medical condition, or medicalizes a non-disease situation in his or her life – for example, a person who feels unwell as a result of embarrassment, and who interprets those feelings as sickness rather than normal emotions). Symptoms of illness are often not directly the result of infection, but a collection of evolved responses – sickness behavior by the body – that helps clear infection and promote recovery. Such aspects of illness can include lethargy, depression, loss of appetite, sleepiness, hyperalgesia, and inability to concentrate.
Disorder
A disorder is a functional abnormality or disturbance. Medical disorders can be categorized into mental disorders, physical disorders, genetic disorders, emotional and behavioral disorders, and functional disorders. The term disorder is often considered more value-neutral and less stigmatizing than the terms disease or illness, and therefore is preferred terminology in some circumstances. In mental health, the term mental disorder is used as a way of acknowledging the complex interaction of biological, social, and psychological factors in psychiatric conditions; however, the term disorder is also used in many other areas of medicine, primarily to identify physical disorders that are not caused by infectious organisms, such as metabolic disorders.
Medical condition or health condition
A medical condition or health condition is a broad concept that includes all diseases, lesions, disorders, or nonpathologic conditions that normally receive medical treatment, such as pregnancy or childbirth. While the term medical condition generally includes mental illnesses, in some contexts the term is used specifically to denote any illness, injury, or disease except for mental illnesses. The Diagnostic and Statistical Manual of Mental Disorders (DSM), the widely used psychiatric manual that defines all mental disorders, uses the term general medical condition to refer to all diseases, illnesses, and injuries except for mental disorders. This usage is also commonly seen in the psychiatric literature. Some health insurance policies also define a medical condition as any illness, injury, or disease except for psychiatric illnesses.
As it is more value-neutral than terms like disease, the term medical condition is sometimes preferred by people with health issues that they do not consider deleterious. On the other hand, by emphasizing the medical nature of the condition, this term is sometimes rejected, such as by proponents of the autism rights movement.
The term medical condition is also a synonym for medical state, in which case it describes an individual patients current state from a medical standpoint. This usage appears in statements that describe a patient as being in critical condition, for example.
Morbidity
Morbidity (from Latin morbidus sick, unhealthy) is a diseased state, disability, or poor health due to any cause. The term may refer to the existence of any form of disease, or to the degree that the health condition affects the patient. Among severely ill patients, the level of morbidity is often measured by ICU scoring systems. Comorbidity, or co-existing disease, is the simultaneous presence of two or more medical conditions, such as schizophrenia and substance abuse.
In epidemiology and actuarial science, the term morbidity (also morbidity rate or morbidity frequency) can refer to the incidence rate, the prevalence of a disease or medical condition, or the percentage of people who experience a given condition within a given timeframe (e.g., 20% of people will get influenza in a year). This measure of sickness is contrasted with the mortality rate of a condition, which is the proportion of people dying during a given time interval. Morbidity rates are used in actuarial professions, such as health insurance, life insurance, and long-term care insurance, to determine the correct premiums to charge to customers. Morbidity rates help insurers predict the likelihood that an insured will contract or develop any number of specified diseases.
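To make the distinction between these measures concrete, here is a minimal sketch with invented counts for a hypothetical population; the variable names and numbers are illustrative only, not data from any source.

```python
# Illustrative only: hypothetical one-year counts for a population.
population = 100_000      # people at risk during the year
new_cases = 20_000        # people who first developed the condition this year
existing_cases = 35_000   # people living with the condition at mid-year
deaths = 500              # deaths from the condition this year

incidence = new_cases / population        # morbidity as an incidence rate
prevalence = existing_cases / population  # morbidity as a prevalence
mortality = deaths / population           # the contrasting mortality rate

print(f"incidence:  {incidence:.1%} per year")   # 20.0%, cf. the influenza example
print(f"prevalence: {prevalence:.1%}")           # 35.0%
print(f"mortality:  {mortality:.2%} per year")   # 0.50%
```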
Pathosis or pathology
Pathosis (plural pathoses) is synonymous with disease. The word pathology also has this sense, in which it is commonly used by physicians in the medical literature, although some editors prefer to reserve pathology for its other senses. Sometimes a slight connotative shade causes preference for pathology or pathosis, implying "some [as yet poorly analyzed] pathophysiologic process", rather than disease, implying "a specific disease entity as defined by diagnostic criteria being already met". This is hard to quantify denotatively, but it explains why cognitive synonymy is not invariable.
Syndrome
A syndrome is the association of several signs and symptoms, or other characteristics that often occur together, regardless of whether the cause is known. Some syndromes such as Down syndrome are known to have only one cause (an extra chromosome at birth). Others such as Parkinsonian syndrome are known to have multiple possible causes. Acute coronary syndrome, for example, is not a single disease itself but is rather the manifestation of any of several diseases, including myocardial infarction secondary to coronary artery disease. In yet other syndromes, however, the cause is unknown. A familiar syndrome name often remains in use even after an underlying cause has been found or when there are a number of different possible primary causes. Examples of the first-mentioned type are Turner syndrome and DiGeorge syndrome, which are still often called by the "syndrome" name even though they can also be viewed as disease entities and not solely as sets of signs and symptoms.
Predisease
Predisease is a subclinical or prodromal vanguard of a disease. Prediabetes and prehypertension are common examples. The nosology or epistemology of predisease is contentious, though, because there is seldom a bright line differentiating a legitimate concern for subclinical/prodromal/premonitory status (on one hand) from conflict of interest–driven disease mongering or medicalization (on the other). Identifying legitimate predisease can result in useful preventive measures, such as motivating the person to get a healthy amount of physical exercise, but labeling a healthy person with an unfounded notion of predisease can result in overtreatment, such as taking drugs that only help people with severe disease or paying for drug prescriptions whose benefit–cost ratio is minuscule (placing them in the waste category of the CMS "waste, fraud, and abuse" classification). Three requirements for the legitimacy of calling a condition a predisease are:
a truly high risk for progression to disease – for example, a pre-cancer will almost certainly turn into cancer over time
actionability for risk reduction – for example, removal of the precancerous tissue prevents it from turning into a potentially deadly cancer
benefit that outweighs the harm of any interventions taken – removing the precancerous tissue prevents cancer, and thus prevents a potential death from cancer.
Types by body system
Mental
Mental illness is a broad, generic label for a category of illnesses that may include affective or emotional instability, behavioral dysregulation, cognitive dysfunction or impairment. Specific illnesses known as mental illnesses include major depression, generalized anxiety disorders, schizophrenia, and attention deficit hyperactivity disorder, to name a few. Mental illness can be of biological (e.g., anatomical, chemical, or genetic) or psychological (e.g., trauma or conflict) origin. It can impair the affected person's ability to work or study and can harm interpersonal relationships. The term insanity is used technically as a legal term.
Organic
An organic disease is one caused by a physical or physiological change to some tissue or organ of the body. The term sometimes excludes infections. It is commonly used in contrast with mental disorders. It includes emotional and behavioral disorders if they are due to changes to the physical structures or functioning of the body, such as after a stroke or a traumatic brain injury, but not if they are due to psychosocial issues.
Stages
In an infectious disease, the incubation period is the time between infection and the appearance of symptoms. The latency period is the time between infection and the ability of the disease to spread to another person, which may precede, follow, or be simultaneous with the appearance of symptoms. Some viruses also exhibit a dormant phase, called viral latency, in which the virus hides in the body in an inactive state. For example, varicella zoster virus causes chickenpox in the acute phase; after recovery from chickenpox, the virus may remain dormant in nerve cells for many years, and later cause herpes zoster (shingles).
Acute disease
An acute disease is a short-lived disease, like the common cold.
Chronic disease
A chronic disease is one that lasts for a long time, usually at least six months. During that time, it may be constantly present, or it may go into remission and periodically relapse. A chronic disease may be stable (does not get any worse) or it may be progressive (gets worse over time). Some chronic diseases can be permanently cured. Most chronic diseases can be beneficially treated, even if they cannot be permanently cured.
Clinical disease
One that has clinical consequences; in other words, the stage of the disease that produces the characteristic signs and symptoms of that disease. AIDS is the clinical disease stage of HIV infection.
Cure
A cure is the end of a medical condition or a treatment that is very likely to end it, while remission refers to the disappearance, possibly temporarily, of symptoms. Complete remission is the best possible outcome for incurable diseases.
Flare-up
A flare-up can refer to either the recurrence of symptoms or an onset of more severe symptoms.
Progressive disease
Progressive disease is a disease whose typical natural course is the worsening of the disease until death, serious debility, or organ failure occurs. Slowly progressive diseases are also chronic diseases; many are also degenerative diseases. The opposite of progressive disease is stable disease or static disease: a medical condition that exists, but does not get better or worse.
Refractory disease
A refractory disease is a disease that resists treatment, especially an individual case that resists treatment more than is normal for the specific disease in question.
Subclinical disease
Also called silent disease, silent stage, or asymptomatic disease. This is a stage in some diseases before the symptoms are first noted.
Terminal phase
If a person will die soon from a disease, regardless of whether that disease typically causes death, then the stage between the earlier disease process and active dying is the terminal phase.
Recovery
Recovery can refer to the repairing of physical processes (tissues, organs, etc.) and the resumption of healthy functioning after damage-causing processes have been cured.
Extent
Localized disease
A localized disease is one that affects only one part of the body, such as athlete's foot or an eye infection.
Disseminated disease
A disseminated disease has spread to other parts; with cancer, this is usually called metastatic disease.
Systemic disease
A systemic disease is a disease that affects the entire body, such as influenza or high blood pressure.
Classification
Diseases may be classified by cause, pathogenesis (mechanism by which the disease is caused), or by symptom(s). Alternatively, diseases may be classified according to the organ system involved, though this is often complicated since many diseases affect more than one organ.
A chief difficulty in nosology is that diseases often cannot be defined and classified clearly, especially when cause or pathogenesis are unknown. Thus diagnostic terms often only reflect a symptom or set of symptoms (syndrome).
Classical classification of human disease derives from the observational correlation between pathological analysis and clinical syndromes. Today it is preferred to classify diseases by their cause, if it is known. The best-known and most widely used classification of diseases is the World Health Organization's ICD, which is periodically updated; the latest version is the ICD-11.
Causes
Only some diseases, such as influenza, are contagious and commonly believed to be infectious. The microorganisms that cause these diseases are known as pathogens and include varieties of bacteria, viruses, protozoa, and fungi. Infectious diseases can be transmitted, e.g., by hand-to-mouth contact with infectious material on surfaces, by bites of insects or other carriers of the disease, and from contaminated water or food (often via fecal contamination). Also, there are sexually transmitted diseases. In some cases, microorganisms that are not readily spread from person to person play a role, while other diseases can be prevented or ameliorated with appropriate nutrition or other lifestyle changes.
Some diseases, such as most (but not all) forms of cancer, heart disease, and mental disorders, are non-infectious diseases. Many non-infectious diseases have a partly or completely genetic basis (see genetic disorder) and may thus be transmitted from one generation to another.
Social determinants of health are the social conditions in which people live that determine their health. Illnesses are generally related to social, economic, political, and environmental circumstances. Social determinants of health have been recognized by several health organizations, such as the Public Health Agency of Canada and the World Health Organization, to greatly influence collective and personal well-being. The World Health Organization's Social Determinants Council also recognizes social determinants of health in poverty.
When the cause of a disease is poorly understood, societies tend to mythologize the disease or use it as a metaphor or symbol of whatever that culture considers evil. For example, until the bacterial cause of tuberculosis was discovered in 1882, experts variously ascribed the disease to heredity, a sedentary lifestyle, depressed mood, and overindulgence in sex, rich food, or alcohol, all of which were social ills at the time. When a disease is caused by a pathogenic organism (e.g., when malaria is caused by Plasmodium), one should not confuse the pathogen (the cause of the disease) with the disease itself. For example, West Nile virus (the pathogen) causes West Nile fever (the disease). The misuse of basic definitions in epidemiology is frequent in scientific publications.
Types of causes
Airborne
An airborne disease is any disease that is caused by pathogens and transmitted through the air.
Foodborne
Foodborne illness or food poisoning is any illness resulting from the consumption of food contaminated with pathogenic bacteria, toxins, viruses, prions or parasites.
Infectious
Infectious diseases, also known as transmissible diseases or communicable diseases, comprise clinically evident illness (i.e., characteristic medical signs or symptoms of disease) resulting from the infection, presence and growth of pathogenic biological agents in an individual host organism. Included in this category are contagious diseases – an infection, such as influenza or the common cold, that commonly spreads from one person to another – and communicable diseases – a disease that can spread from one person to another, but does not necessarily spread through everyday contact.
Lifestyle
A lifestyle disease is any disease that appears to increase in frequency as countries become more industrialized and people live longer, especially if the risk factors include behavioral choices like a sedentary lifestyle or a diet high in unhealthful foods such as refined carbohydrates, trans fats, or alcoholic beverages.
Non-communicable
A non-communicable disease is a medical condition or disease that is non-transmissible. Non-communicable diseases cannot be spread directly from one person to another. Heart disease and cancer are examples of non-communicable diseases in humans.
Prevention
Many diseases and disorders can be prevented through a variety of means. These include sanitation, proper nutrition, adequate exercise, vaccinations, and other self-care and public health measures, such as face mask mandates.
Treatments
Medical therapies or treatments are efforts to cure or improve a disease or other health problems. In the medical field, therapy is synonymous with the word treatment. Among psychologists, the term may refer specifically to psychotherapy or "talk therapy". Common treatments include medications, surgery, medical devices, and self-care. Treatments may be provided by an organized health care system, or informally, by the patient or family members.
Preventive healthcare is a way to avoid an injury, sickness, or disease in the first place. A treatment or cure is applied after a medical problem has already started. A treatment attempts to improve or remove a problem, but treatments may not produce permanent cures, especially in chronic diseases. Cures are a subset of treatments that reverse diseases completely or end medical problems permanently. Many diseases that cannot be completely cured are still treatable. Pain management (also called pain medicine) is that branch of medicine employing an interdisciplinary approach to the relief of pain and improvement in the quality of life of those living with pain. Treatment for medical emergencies must be provided promptly, often through an emergency department or, in less critical situations, through an urgent care facility.
Epidemiology
Epidemiology is the study of the factors that cause or encourage diseases. Some diseases are more common in certain geographic areas, among people with certain genetic or socioeconomic characteristics, or at different times of the year.
Epidemiology is considered a cornerstone methodology of public health research and is highly regarded in evidence-based medicine for identifying risk factors for diseases. In the study of communicable and non-communicable diseases, the work of epidemiologists ranges from outbreak investigation to study design, data collection, and analysis, including the development of statistical models to test hypotheses and the documentation of results for submission to peer-reviewed journals. Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic. Epidemiologists rely on a number of other scientific disciplines, such as biology (to better understand disease processes), biostatistics (to make efficient use of the available data), Geographic Information Science (to store data and map disease patterns), and social science disciplines (to better understand proximate and distal risk factors). Epidemiology can help identify causes as well as guide prevention efforts.
In studying diseases, epidemiology faces the challenge of defining them. Especially for poorly understood diseases, different groups might use significantly different definitions. Without an agreed-on definition, different researchers may report different numbers of cases and characteristics of the disease. Some morbidity databases are compiled with data supplied by state and territorial health authorities, at national or larger scales (such as the European Hospital Morbidity Database (HMDB)), and may contain hospital discharge data by detailed diagnosis, age, and sex. The European HMDB data was submitted by European countries to the World Health Organization Regional Office for Europe.
Burdens of disease
Disease burden is the impact of a health problem in an area measured by financial cost, mortality, morbidity, or other indicators.
There are several measures used to quantify the burden imposed by diseases on people. The years of potential life lost (YPLL) is a simple estimate of the number of years that a person's life was shortened due to a disease. For example, if a person dies at the age of 65 from a disease, and would probably have lived until age 80 without that disease, then that disease has caused a loss of 15 years of potential life. YPLL measurements do not account for how disabled a person is before dying, so the measurement treats a person who dies suddenly and a person who dies at the same age after decades of illness as equivalent. In 2004, the World Health Organization calculated that 932 million years of potential life were lost to premature death. The quality-adjusted life year (QALY) and disability-adjusted life year (DALY) metrics are similar but take into account whether the person was healthy after diagnosis. In addition to the number of years lost due to premature death, these measurements add part of the years lost to being sick. Unlike YPLL, these measurements show the burden imposed on people who are very sick, but who live a normal lifespan. A disease that has high morbidity, but low mortality, has a high DALY and a low YPLL. In 2004, the World Health Organization calculated that 1.5 billion disability-adjusted life years were lost to disease and injury. In the developed world, heart disease and stroke cause the most loss of life, but neuropsychiatric conditions like major depressive disorder cause the most years lost to being sick.
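As a rough illustration of the YPLL arithmetic described above, here is a minimal sketch; the reference age and case data are hypothetical, and real-world YPLL and DALY calculations involve standardized life tables and disability weights not shown here.

```python
# Hypothetical years-of-potential-life-lost (YPLL) calculation.
REFERENCE_AGE = 80  # assumed life expectancy without the disease

def ypll(age_at_death: float, reference_age: float = REFERENCE_AGE) -> float:
    """YPLL for one death; zero if death occurs past the reference age."""
    return max(reference_age - age_at_death, 0.0)

print(ypll(65))  # 15.0 -- the example from the text

# Aggregate YPLL over several hypothetical deaths:
ages_at_death = [65, 50, 82, 70]
print(sum(ypll(a) for a in ages_at_death))  # 15 + 30 + 0 + 10 = 55.0
```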
Society and culture
How a society responds to diseases is the subject of medical sociology.
A condition may be considered a disease in some cultures or eras but not in others. For example, obesity can represent wealth and abundance, and is a status symbol in famine-prone areas and some places hard-hit by HIV/AIDS. Epilepsy is considered a sign of spiritual gifts among the Hmong people. Sickness confers the social legitimization of certain benefits, such as illness benefits, work avoidance, and being looked after by others. The person who is sick takes on a social role called the sick role. A person who responds to a dreaded disease, such as cancer, in a culturally acceptable fashion may be publicly and privately honored with higher social status. In return for these benefits, the sick person is obligated to seek treatment and work to become well once more. As a comparison, consider pregnancy, which is not interpreted as a disease or sickness, even if the mother and baby may both benefit from medical care.
Most religions grant exceptions from religious duties to people who are sick. For example, one whose life would be endangered by fasting on Yom Kippur or during Ramadan is exempted from the requirement, or even forbidden from participating. People who are sick are also exempted from social duties. For example, ill health is the only socially acceptable reason for an American to refuse an invitation to the White House. The identification of a condition as a disease, rather than as simply a variation of human structure or function, can have significant social or economic implications. The controversial recognition of diseases such as repetitive stress injury (RSI) and post-traumatic stress disorder (PTSD) has had a number of positive and negative effects on the financial and other responsibilities of governments, corporations, and institutions towards individuals, as well as on the individuals themselves. The social implication of viewing aging as a disease could be profound, though this classification is not yet widespread.
Lepers were people who were historically shunned because they had an infectious disease, and the term "leper" still evokes social stigma. Fear of disease can still be a widespread social phenomenon, though not all diseases evoke extreme social stigma.
Social standing and economic status affect health. Diseases of poverty are diseases that are associated with poverty and low social status; diseases of affluence are diseases that are associated with high social and economic status. Which diseases are associated with which states varies according to time, place, and technology. Some diseases, such as diabetes mellitus, may be associated with both poverty (poor food choices) and affluence (long lifespans and sedentary lifestyles), through different mechanisms. The term lifestyle diseases describes diseases associated with longevity that are more common among older people. For example, cancer is far more common in societies in which most members live until they reach the age of 80 than in societies in which most members die before they reach the age of 50.
Language of disease
An illness narrative is a way of organizing a medical experience into a coherent story that illustrates the sick individual's personal experience.
People use metaphors to make sense of their experiences with disease. The metaphors move disease from an objective thing that exists to an affective experience. The most popular metaphors draw on military concepts: Disease is an enemy that must be feared, fought, battled, and routed. The patient or the healthcare provider is a warrior, rather than a passive victim or bystander. The agents of communicable diseases are invaders; non-communicable diseases constitute internal insurrection or civil war. Because the threat is urgent, perhaps a matter of life and death, unthinkably radical, even oppressive, measures are society's and the patient's moral duty as they courageously mobilize to struggle against destruction. The War on Cancer is an example of this metaphorical use of language. This language is empowering to some patients, but leaves others feeling like they are failures. Another class of metaphors describes the experience of illness as a journey: The person travels to or from a place of disease, and changes himself, discovers new information, or increases his experience along the way. He may travel "on the road to recovery" or make changes to "get on the right track" or choose "pathways" </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Hartnup disease and explain in detail? [/INST] Hartnup disease (also known as "pellagra-like dermatosis" and "Hartnup disorder") is an autosomal recessive metabolic disorder affecting the absorption of nonpolar amino acids (particularly tryptophan, which can in turn be converted into serotonin, melatonin, and niacin). Niacin is a precursor to nicotinamide, a necessary component of NAD+. The causative gene, SLC6A19, is located on chromosome 5. It is named after the British family, Hartnup, who had this disease.
Signs and symptoms
Hartnup disease manifests during infancy with variable clinical presentation: failure to thrive, photosensitivity, intermittent ataxia, nystagmus, and tremor. Nicotinamide is necessary for neutral amino acid transporter production in the proximal renal tubules found in the kidney and in intestinal mucosal cells found in the small intestine. Therefore, this disorder results in increased amounts of amino acids in the urine.
Pellagra, a similar condition, is also caused by low nicotinamide; this disorder results in dermatitis, diarrhea, and dementia. Hartnup disease is a disorder of amino acid transport in the intestine and kidneys; otherwise, the intestine and kidneys function normally, and the effects of the disease occur mainly in the brain and skin. Symptoms may begin in infancy or early childhood, but sometimes they begin as late as early adulthood. Symptoms may be triggered by sunlight, fever, drugs, or emotional or physical stress. A period of poor nutrition nearly always precedes an attack. The attacks usually become progressively less frequent with age. Most symptoms occur sporadically and are caused by a deficiency of niacinamide. A rash develops on parts of the body exposed to the sun. Mental retardation, short stature, headaches, unsteady gait, and collapsing or fainting are common. Psychiatric problems (such as anxiety, rapid mood changes, delusions, and hallucinations) may also result.
Causes
Hartnup disease is inherited as an autosomal recessive trait. Heterozygotes are normal. Consanguinity is common. The failure of amino-acid transport was reported in 1960, based on the increased presence of indoles (bacterial metabolites of tryptophan) and tryptophan in the urine of patients as part of a generalized aminoaciduria of the disease. The excessive loss of tryptophan from malabsorption was the cause of the pellagra-like symptoms. From studies on ingestion of tryptophan, it seemed that there was a generalized problem with amino-acid transport. In 2004, a causative gene, SLC6A19, was located on band 5p15.33. SLC6A19 is a sodium-dependent and chloride-independent neutral amino acid transporter, expressed predominantly in the kidneys and intestine.
Diagnosis
The defective gene controls the absorption of certain amino acids from the intestine and the reabsorption of those amino acids in the kidneys. Consequently, a person with Hartnup disease cannot absorb amino acids properly from the intestine and cannot reabsorb them properly from tubules in the kidneys </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Incontinentia pigmenti and explain in detail? [/INST] Incontinentia pigmenti (IP) is a rare X-linked dominant genetic disorder that affects the skin, hair, teeth, nails and central nervous system. It is named from its appearance under a microscope. The disease is characterized by skin abnormalities that begin in childhood, usually a blistering rash which heals, followed by the development of harder skin growths. The skin may develop grey or brown patches which fade with time. Other symptoms can include hair loss, dental abnormalities, eye abnormalities that can lead to vision loss, and lined or pitted fingernails and toenails. Associated problems can include delayed development, intellectual disability, seizures and other neurological problems. Most males with the disease do not survive to birth.
Incontinentia pigmenti is caused by a mutation in the IKBKG gene, which encodes the NEMO protein, which serves to protect cells against TNF-alpha-induced apoptosis. A lack of IKBKG therefore makes cells more prone to apoptosis.
There is no specific treatment; individual conditions must be managed by specialists.
Presentation
The skin lesions evolve through characteristic stages:
blistering (from birth to about four months of age),
a wart-like rash (for several months),
swirling macular hyperpigmentation (from about six months of age into adulthood), followed by
linear hypopigmentation.
Alopecia, dental anomalies, and dystrophic nails are observed. Some patients have retinal vascular abnormalities predisposing to retinal detachment in early childhood. Cognitive delays or intellectual disability are occasionally seen. The discolored skin is caused by excessive deposits of melanin (normal skin pigment).
Most newborns with IP will develop discolored skin within the first two weeks.
The pigmentation involves the trunk and extremities, is slate-grey, blue or brown, and is distributed in irregular marbled or wavy lines.
The discoloration sometimes fades with age.
Neurological problems can include cerebral atrophy, the formation of small cavities in the central white matter of the brain, and the loss of neurons in the cerebellar cortex.
About 20% of children with IP will have slow motor development, muscle weakness in one or both sides of the body, intellectual disability, and seizures.
They are also likely to have visual problems, which can include crossed eyes, cataracts, retinal detachment, and severe visual loss.
Dental problems are also common, and can include hypodontia, abnormally shaped teeth, and delayed tooth eruption.
Breast anomalies can occur in 1% of patients and can include hypoplasia or supernumerary nipples.
Skeletal and structural anomalies can occur in approximately 14% of patients, including:
Somatic asymmetry
Hemivertebrae
Scoliosis
Spina bifida
Syndactyly
Acheiria (congenital absence of the hands—note: other limbs may be affected)
Ear anomalies
Extra ribs
Skull deformities
Genetics
IP is inherited in an X-linked dominant manner. IP is lethal in most, but not all, males. A female with IP may have inherited the IKBKG mutation from either parent or have a new gene mutation. Parents may either be clinically affected or have germline mosaicism. Affected women have a 50% risk of transmitting the mutant IKBKG allele at conception; however, most affected male conceptuses miscarry. Thus, the effective ratio for liveborn children from a mother carrying the mutation is 33% unaffected females, 33% affected females, and 33% unaffected males. Genetic counseling, prenatal testing, and preimplantation genetic diagnosis are available. In females, due to lyonization, the cells expressing the mutated IKBKG gene selectively die around the time of birth, so the X-inactivation is extremely skewed. IP is caused by mutations in a gene called NEMO (NF-κB essential modulator).
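The 33%/33%/33% liveborn ratio follows from simple Mendelian bookkeeping. The short sketch below enumerates the four equally likely conceptuses of an affected mother and an unaffected father, then drops the affected males, who usually miscarry; it is a simplified illustration under those assumptions, not a clinical tool.

```python
# Simplified Mendelian sketch of X-linked dominant inheritance with male lethality.
mother = ["Xm", "X"]  # affected mother: one mutated X (written Xm)
father = ["X", "Y"]   # unaffected father

conceptuses = [(m, f) for m in mother for f in father]  # four equally likely outcomes

# Affected males (Xm, Y) usually miscarry, so they are excluded from live births.
liveborn = [c for c in conceptuses if set(c) != {"Xm", "Y"}]

for child in liveborn:
    sex = "male" if "Y" in child else "female"
    status = "affected" if "Xm" in child else "unaffected"
    print(f"{child}: {status} {sex} (1/{len(liveborn)} of live births)")
# -> affected female, unaffected female, unaffected male: one third each
```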
Diagnosis
The diagnosis of IP is established by clinical findings and occasionally by corroborative skin biopsy </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Effusion and explain in detail? [/INST] In physics and chemistry, effusion is the process in which a gas escapes from a container through a hole of diameter considerably smaller than the mean free path of the molecules. Such a hole is often described as a pinhole and the escape of the gas is due to the pressure difference between the container and the exterior. Under these conditions, essentially all molecules which arrive at the hole continue and pass through the hole, since collisions between molecules in the region of the hole are negligible. Conversely, when the diameter is larger than the mean free path of the gas, flow obeys the Sampson flow law.
In medical terminology, an effusion refers to accumulation of fluid in an anatomic space, usually without loculation. Specific examples include subdural, mastoid, pericardial and pleural effusions.
Etymology
The word effusion derives from the Latin word, effundo, which means "shed, pour forth, pour out, utter, lavish, waste."
Effusion into vacuum
Effusion from an equilibrated container into outside vacuum can be calculated based on kinetic theory. The number of atomic or molecular collisions with a wall of a container per unit area per unit time (impingement rate) is given by

$\Phi = \frac{P}{\sqrt{2\pi m k_{\rm B} T}},$

assuming that the mean free path is much greater than the pinhole diameter and that the gas can be treated as an ideal gas. If a small area $A$ on the container is punched to become a small hole, the effusive flow rate will be

$Q = \Phi A = \frac{PA}{\sqrt{2\pi m k_{\rm B} T}} = \frac{P A N_{\rm A}}{\sqrt{2\pi M R T}},$

where $M$ is the molar mass, $N_{\rm A}$ is the Avogadro constant, and $R = N_{\rm A} k_{\rm B}$ is the gas constant.
The average velocity of effused particles is

$\langle v_{x} \rangle = \sqrt{\frac{\pi k_{\rm B} T}{2m}}.$
Combined with the effusive flow rate, the recoil/thrust force on the system itself is

$F = \frac{PA}{2}.$
An example is the recoil force on a balloon with a small hole flying in vacuum.
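To make these formulas concrete, the sketch below evaluates the mean free path (to check the pinhole condition), the effusive flow rate, and the recoil force for helium escaping through a micrometre-scale hole; all parameter values are example figures chosen for illustration rather than taken from the text.

```python
import math

# Example parameters (illustrative values, not from the text).
k_B = 1.380649e-23      # Boltzmann constant, J/K
T   = 300.0             # temperature, K
P   = 100.0             # pressure inside the container, Pa
m   = 6.6464731e-27     # mass of one helium atom, kg
d   = 2.6e-10           # approximate molecular diameter of helium, m
r   = 0.5e-6            # pinhole radius, m
A   = math.pi * r**2    # pinhole area, m^2

# Effusion requires the mean free path to far exceed the hole diameter.
mfp = k_B * T / (math.sqrt(2) * math.pi * d**2 * P)
print(f"mean free path {mfp * 1e6:.0f} um vs hole diameter {2 * r * 1e6:.1f} um")

# Impingement rate Phi = P / sqrt(2 pi m k_B T), then flow rate Q = Phi * A.
Phi = P / math.sqrt(2 * math.pi * m * k_B * T)
print(f"effusive flow rate Q = {Phi * A:.2e} particles/s")

# Recoil force on the container: F = P * A / 2.
print(f"recoil force F = {P * A / 2:.2e} N")
```

With these example numbers the mean free path (on the order of a hundred micrometres) is much larger than the one-micrometre hole, so the effusion condition stated above is met.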
Measures of flow rate
According to the kinetic theory of gases, the kinetic energy for a gas at a temperature $T$ is

$\frac{1}{2} m v_{\rm rms}^{2} = \frac{3}{2} k_{\rm B} T,$

where $m$ is the mass of one molecule, $v_{\rm rms}$ is the root-mean-square speed of the molecules, and $k_{\rm B}$ is the Boltzmann constant. The average molecular speed can be calculated from the Maxwell speed distribution as $v_{\rm avg} = \sqrt{8/(3\pi)}\, v_{\rm rms} \approx 0.921\, v_{\rm rms}$ (or, equivalently, $v_{\rm rms} = \sqrt{3\pi/8}\, v_{\rm avg} \approx 1.085\, v_{\rm avg}$) </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Parasomnia and explain in detail? [/INST] Parasomnias are a category of sleep disorders that involve abnormal movements, behaviors, emotions, perceptions, and dreams that occur while falling asleep, sleeping, between sleep stages, or during arousal from sleep. Parasomnias are dissociated sleep states which are partial arousals during the transitions between wakefulness, NREM sleep, and REM sleep, and their combinations.
Classification
The newest version of the International Classification of Sleep Disorders (ICSD, 3rd ed.) uses state dissociation as the paradigm for parasomnias. Unlike before, where wakefulness, non-rapid eye movement (NREM) sleep, and rapid eye movement (REM) sleep were considered exclusive states, research has shown that combinations of these states are possible and thus may result in unusual unstable states that could eventually manifest as parasomnias or as altered levels of awareness. Although the previous definition is technically correct, it contains flaws. The consideration of the state dissociation paradigm facilitates the understanding of the sleep disorder and provides a classification of 10 core categories.
Non-rapid eye movement (NREM)-related parasomnias
NREM parasomnias are arousal disorders that occur during stage 3 (or 4 by the R&K standardization) of NREM sleep—also known as slow wave sleep (SWS). They are caused by a physiological activation in which the patient's brain exits from SWS and is caught in between a sleeping and waking state. In particular, these disorders involve activation of the autonomic nervous system, motor system, or cognitive processes during sleep or sleep-wake transitions. Some NREM parasomnias (sleep-walking, night-terrors, and confusional arousal) are common during childhood but decrease in frequency with increasing age. They can be triggered in certain individuals by alcohol, sleep deprivation, physical activity, emotional stress, depression, medications, or a fevered illness. These disorders of arousal can range from confusional arousals and somnambulism to night terrors. Other specific disorders include sleep eating, sleep sex, teeth grinding, rhythmic movement disorder, restless legs syndrome, and somniloquy.
Differential diagnosis for NREM-related parasomnias
Sleep-disordered breathing
REM-related parasomnias
Nocturnal seizures
Psychogenic dissociative disorders
Confusional arousals
Confusional arousal is a condition in which an individual awakens from sleep and remains in a confused state. It is characterized by the individual's partial awakening and sitting up to look around. They usually remain in bed and then return to sleep. These episodes last anywhere from seconds to minutes and may not be reactive to stimuli. Confusional arousal is more common in children than in adults. It has a lifetime prevalence of 18.5% in children and a lifetime prevalence of 2.9–4.2% in adults. Infants and toddlers usually experience confusional arousals beginning with large amounts of movement and moaning, which can later progress to occasional thrashings or inconsolable crying. In rare cases, confusional arousals can cause injuries and drowsy driving accidents, thus they can also be considered dangerous. Another sleeping disorder may be present, triggering these incomplete arousals.
Somniloquy (sleep talking)
A form of parasomnia in which a person speaks during sleep.
Sleep-related abnormal sexual behavior
Sleep-related abnormal sexual behavior, sleep sex, or sexsomnia, is a form of confusional arousal that may overlap with somnambulism. In it, a person engages in sexual acts while still asleep. It can include such acts as masturbation, inappropriate fondling of themselves or others, and having sex with another person; in more extreme cases, sexual assault. These behaviors are unconscious, occur frequently without dreaming, and bring along clinical, social, and legal implications. It has a lifetime prevalence of 7.1% and an annual prevalence of 2.7%.
Sleepwalking (somnambulism)
Sleepwalking has a prevalence of 1–17% in childhood, with the most frequent occurrences around the age of eleven to twelve. About 4% of adults experience somnambulism.
Normal sleep cycles include states varying from drowsiness all the way to deep sleep. Every time an individual sleeps, he or she goes through various sequences of non-REM and REM sleep. Anxiety and fatigue are often connected with sleepwalking. For adults, alcohol, sedatives, medications, medical conditions and mental disorders are all associated with sleepwalking. Sleepwalking may involve sitting up and looking awake when the individual is actually asleep, or getting up and walking around, moving items, or undressing themselves. Individuals will also be confused when waking up or opening their eyes during sleep. Sleepwalking can be associated with sleep talking.
Sleep terrors (night terrors/ pavor nocturnus)
Sleep terror is the most disruptive arousal disorder since it may involve loud screams and panic; in extreme cases, it may result in bodily harm or property damage by running about or hitting walls. All attempts to console the individual are futile and may prolong or intensify their confused state. Usually they experience amnesia after the event but it may not be complete amnesia. Up to 3% of adults have sleep terrors and exhibited behavior of this parasomnia can range from mild to extremely violent. This is very prevalent in those who have violent post-traumatic stress disorder (PTSD). They typically occur in stage 3 sleep.
Sleep-related eating disorder (SRED)
The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) classifies sleep-related eating disorder under sleepwalking, while the ICSD classifies it as an NREM-related parasomnia. It is conceptualized as a mixture of binge-eating behavior and arousal disorder. In these episodes, preferentially high-caloric food is consumed in an uncontrolled manner. However, SRED should not be confused with nocturnal eating syndrome, which is characterized by an excessive consumption of food before or during sleep in full consciousness. Since sleep-related eating disorders are associated with other sleep disorders, successful treatment of the latter can reduce symptoms of this parasomnia.
Rapid eye movement (REM)-related parasomnias
REM sleep behavior disorder
Unlike other parasomnias, rapid eye movement sleep behavior disorder (RBD), in which muscle atonia is absent, is most common in older adults. This allows the individual to act out their dreams and may result in repeated injury—bruises, lacerations, and fractures—to themselves or others. Patients may take self-protection measures by tethering themselves to bed, using pillow barricades, or sleeping in an empty room on a mattress. Besides ensuring that the sleep environment is a safe place, pharmacologic therapy using melatonin and clonazepam is also common as a treatment for RBD, even though it might not eliminate all abnormal behaviours. Before starting a treatment with clonazepam, a screening for obstructive sleep apnea should be performed. However, clonazepam needs to be handled carefully because of its significant side effects, i.e., morning confusion or memory impairment, mainly in patients with neurodegenerative disorders with dementia. Demographically, 90% of RBD patients are males, and most are older than 50 years of age. However, this prevalence in males could be biased by the fact that women tend to have a less violent type of RBD, which leads to fewer reports at sleep centres and different clinical characteristics. While men might have more aggressive behaviour during dreaming, women have presented more disturbance in their sleep. RBD may also be influenced by a genetic component, since first-degree relatives seem to have a significantly greater chance of developing RBD than non-relative controls.
Typical clinical features of REM sleep behavior disorder are:
Male gender predilection
Mean age of onset 50–65 years (range 20–80 years)
Vocalisation, screaming, swearing that may be associated with dreams
Motor activity, simple or complex, that may result in injury to patient or bed-partner
Occurrence usually in latter half of sleep period (REM sleep)
May be associated with neurodegenerative disease
Acute RBD occurs mostly as a side-effect of prescribed medication—usually antidepressants. Furthermore, substance abuse or withdrawal can result in RBD.
Chronic RBD is idiopathic, meaning of unknown origin, or associated with neurological disorders. There is a growing association of chronic RBD with neurodegenerative disorders—Parkinson's disease, multiple system atrophy (MSA), or dementia—as an early indicator of these conditions by as much as 10 years. RBD associated with neurological disorders is frequently related to abnormal accumulation of alpha-synuclein, and more than 80% of patients with idiopathic RBD might develop Lewy body disease (LBD). Patients with narcolepsy are also more likely to develop RBD.
The diagnosis is based on clinical history, including the partner's account, and needs to be confirmed by polysomnography (PSG), mainly for its accuracy in differentiating RBD from other sleep disorders, since there is a loss of REM atonia with excessive muscle tone. However, screening questionnaires, such as the RBDSQ, are also very useful for diagnosing RBD.
Recurrent isolated sleep paralysis
Recurrent isolated sleep paralysis is an inability to perform voluntary movements at sleep onset, or upon waking from sleep. Although the affected individual is conscious and recall is present, the person is not able to speak or move. However, respiration remains unimpaired. The episodes last seconds to minutes and diminish spontaneously. The lifetime prevalence is 7%. Sleep paralysis is associated with sleep-related hallucinations. Predisposing factors for the development of recurrent isolated sleep paralysis are sleep deprivation, an irregular sleep-wake cycle, e.g. caused by shift work, or stress. A possible cause could be the prolongation of REM sleep muscle atonia upon awakening.
Nightmare disorder
Nightmares are, like dreams, primarily associated with REM sleep. Nightmare disorder is defined as recurrent nightmares associated with awakening dysphoria that impairs sleep or daytime functioning. It is rare in children but persists into adulthood. About two-thirds of the adult population report experiencing nightmares at least once in their life.
Catathrenia
Before the ICSD-3, catathrenia was classified as a rapid-eye-movement sleep parasomnia, but it is now classified as a sleep-related breathing disorder. It consists of breath holding and expiratory groaning during sleep and is distinct from both somniloquy and obstructive sleep apnea. The sound is produced during exhalation, as opposed to snoring, which occurs during inhalation. It is usually not noticed by the person producing the sound but can be extremely disturbing to sleep partners, although once aware of it, they tend to be woken up by their own groaning as well. Bed partners generally report hearing the person take a deep breath, hold it, then slowly exhale, often with a high-pitched squeak or groaning sound.
Sleep-related painful erections
Painful erections appear only during sleep. This condition is present during REM sleep. Sexual activity does not produce any pain. There is no lesion or physical damage, but hypertonia of the pelvic floor could be one cause. It affects men of all ages, but especially middle-aged men. Some pharmacologic treatments, such as propranolol, clozapine, clonazepam, baclofen and various antidepressants, seem to be effective.
Other parasomnias
Exploding head syndrome
Sleep-related hallucinations
Sleep-related hallucinations are brief episodes of dream-like imagery that can be of any sensory modality, i.e., auditory, visual, or tactile. They are differentiated into hypnagogic hallucinations, which occur at sleep onset, and hypnopompic hallucinations, which occur at the transition from sleep to awakening. Although normal individuals have reported nocturnal hallucinations, they are more frequent in comorbidity with other sleep disorders, e.g., narcolepsy.
Sleep enuresis
Parasomnias due to medical disorder
Parasomnias due to medication or substance
Parasomnia, unspecific
Sleep drunkenness, also known as confusional arousal, is the feeling of confusion or sudden action upon waking up from deep sleep. Severe sleep inertia, one cause of oversleeping, is considered to lead to sleep drunkenness.
Isolated symptom/normal variant
Sleep talking (somniloquy)
According to the ICSD-3, it is not defined as a disorder in particular. It is rather an isolated symptom or normal variant and ranges from isolated speech to full conversations without recall. With a lifetime prevalence of 69%, it is considered fairly common. Sleep talking is associated with REM-related parasomnias as well as with disorders of arousal. It occurs in all sleep states. As yet, there is no specific treatment for sleep talking available.
Diagnosis
Parasomnias are most commonly diagnosed by means of questionnaires. These questionnaires include a detailed analysis of the clinical history and contain questions to:
Rule out sleep deprivation
Rule out effects of intoxication or withdrawal
Rule out sleep disorders causing sleep instability
Rule out medical disorders or treatments associated with sleep instability
Confirm presence of NREM parasomnias in other family members and during the patient's childhood
Determine the timing of the events
Determine the morphology of the events.
Furthermore, a sleep diary is helpful to exclude sleep deprivation as a precipitating factor. An additional tool could be the partner's log of the events. The following questions should therefore be considered:
Do you or your bed partner believe that you move your arms, legs, or body too much, or have unusual behaviors during sleep?
Do you move while dreaming, as if you are simultaneously attempting to carry out the dream?
Have you ever hurt yourself or your bed partner during sleep?
Do you sleepwalk or have sleep terrors with loud screaming?
Do your legs feel restless or begin to twitch a lot or jump around when you are drowsy or sleepy, either at bedtime or during the day?
Do you eat food or drink fluids without full awareness during the night? Do you wake up in the morning feeling bloated and with no desire to eat breakfast?
In potentially harmful or disturbing cases, a specialist in sleep disorders should be approached. Video polysomnographic documentation is necessary only in REM sleep behavior disorder (RBD), since it is an essential diagnostic criterion in the ICSD to demonstrate the absence of muscle atonia and to exclude comorbid sleep disorders. For most of the other parasomnias, polysomnographic monitoring is a costly, but still supportive, tool in the clinical diagnosis. The use of actigraphy can be promising in the diagnostic assessment of NREM-related parasomnias, for example to rule out sleep deprivation or other sleep disorders, like circadian sleep-wake rhythm disorder, which often develops among shift workers. However, there is currently no generally accepted standardized technique available for identifying and quantifying periodic limb movements in sleep (PLMS) that distinguishes movements resulting from parasomnias, nocturnal seizures, and other dyskinesias. Ultimately, the use of actigraphy for parasomnias in general is disputed.
Treatment
Parasomnias can be considered potentially harmful to oneself as well as to bed partners, and are associated with other disorders. Children with parasomnias do not undergo medical intervention, because they tend to recover from the NREM-related disorder in the course of growth. In those cases, the parents receive education on sleep hygiene to reduce and eventually eliminate precipitating factors. In adults, psychoeducation about proper sleep hygiene can reduce the risk of developing parasomnia. Case studies have shown that pharmacological interventions can improve symptoms of parasomnia; however, they are mostly accompanied by side effects. Behavioral treatments, i.e., relaxation therapy, biofeedback, hypnosis, and stress reduction, may also be helpful, but are not considered universally effective.
Prognosis
NREM-related parasomnias, which are common in childhood, show a good prognosis: severity decreases with age, and the symptoms tend to resolve during puberty. Adults with NREM-related parasomnias, however, face a stronger persistence of the symptoms; full remission is therefore quite unlikely, and the condition is also associated with violent complications, including homicide. The sleep-related eating disorder variant is chronic, without remission, but treatable. REM sleep behavior disorder (RBD) can mostly be handled well with the use of melatonin or clonazepam. However, there is high comorbidity with neurodegenerative disorders, in up to 93% of cases </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Sleep disorder and explain in detail? [/INST] A sleep disorder, or somnipathy, is a medical disorder of an individual's sleep patterns. Some sleep disorders are severe enough to interfere with normal physical, mental, social and emotional functioning. Polysomnography and actigraphy are tests commonly ordered for diagnosing sleep disorders.
Sleep disorders are broadly classified into dyssomnias, parasomnias, circadian rhythm sleep disorders involving the timing of sleep, and other disorders including ones caused by medical or psychological conditions. When a person struggles to fall asleep and/or stay asleep with no obvious cause, it is referred to as insomnia, the most common sleep disorder. Others include sleep apnea, narcolepsy and hypersomnia (excessive sleepiness at inappropriate times), sleeping sickness (disruption of sleep cycle due to infection), sleepwalking, and night terrors.
Sleep disruptions can be caused by various issues, including teeth grinding (bruxism) and night terrors. Management of sleep disturbances that are secondary to mental, medical or substance abuse disorders should focus on the underlying conditions. Primary sleep disorders are common in both children and adults. However, there is a significant lack of awareness of sleep disorders in children, as most cases go unidentified. Several common factors involved in the onset of a sleep disorder include increased medication use, age-related changes in circadian rhythms, environmental changes, lifestyle changes, pre-diagnosed physiological problems, and stress. Among the elderly, the risk of developing sleep disordered breathing, periodic limb movements, restless legs syndrome, REM sleep behavior disorders, insomnia, and circadian rhythm disturbances is especially increased.
List of conditions
There are a number of sleep disorders. The following list includes some of them:
Bruxism, involuntary grinding or clenching of the teeth while sleeping
Catathrenia, nocturnal groaning during prolonged exhalation
Delayed sleep phase disorder (DSPD), inability to awaken and fall asleep at socially acceptable times but no problem with sleep maintenance, a disorder of circadian rhythms. Other such disorders are advanced sleep phase disorder (ASPD), non-24-hour sleep–wake disorder (non-24) in the sighted or in the blind, and irregular sleep wake rhythm, all much less common than DSPD, as well as the situational shift work sleep disorder.
Fatal familial insomnia, an extremely rare prion disease that causes a complete cessation of sleep, leading quickly to death by sleep deprivation.
Hypopnea syndrome, abnormally shallow breathing or slow respiratory rate while sleeping
Idiopathic hypersomnia, a primary, neurologic cause of long-sleeping, sharing many similarities with narcolepsy
Insomnia disorder (primary insomnia), chronic difficulty in falling asleep and/or maintaining sleep when no other cause is found for these symptoms. Insomnia can also be comorbid with or secondary to other disorders.
Kleine–Levin syndrome, a rare disorder characterized by persistent episodic hypersomnia and cognitive or mood changes
Narcolepsy, characterized by excessive daytime sleepiness (EDS) and so-called "sleep attacks," relatively sudden-onset, irresistible urges to sleep, which may interfere with occupational and social commitments. About 70% of those who have narcolepsy also have cataplexy, a sudden weakness in the motor muscles that can result in collapse to the floor while retaining full conscious awareness.
Night terror, Pavor nocturnus, sleep terror disorder, an abrupt awakening from sleep with behavior consistent with terror
Nocturia, a frequent need to get up and urinate at night. It differs from enuresis, or bed-wetting, in which the person does not arouse from sleep, but the bladder nevertheless empties.
Parasomnias, disruptive sleep-related events involving inappropriate actions during sleep, for example sleepwalking, night-terrors and catathrenia.
Periodic limb movements in sleep (PLMS), sudden involuntary movement of the arms and/or legs during sleep. In the absence of other sleep disorders, PLMS may cause sleep disruption and impair sleep quality, leading to periodic limb movement disorder (PLMD).
Other limb movements in sleep, including hypnic jerks and nocturnal myoclonus.
Rapid eye movement sleep behavior disorder (RBD), acting out violent or dramatic dreams while in REM sleep, sometimes injuring bed partner or self (REM sleep disorder or RSD)
Restless legs syndrome (RLS), an irresistible urge to move legs.
Shift work sleep disorder (SWSD), a situational circadian rhythm sleep disorder. (Jet lag was previously included as a situational circadian rhythm sleep disorder, but it does not appear in DSM-5, see Diagnostic and Statistical Manual of Mental Disorders for more).
Sleep apnea, obstructive sleep apnea, obstruction of the airway during sleep, causing lack of sufficient deep sleep, often accompanied by snoring. Other forms of sleep apnea are less common. Obstructive sleep apnea (OSA) is a medical disorder that is caused by repetitive collapse of the upper airway (back of the throat) during sleep. For the purposes of sleep studies, episodes of full upper airway collapse for at least ten seconds are called apneas.
Sleep paralysis, characterized by temporary paralysis of the body shortly before or after sleep. Sleep paralysis may be accompanied by visual, auditory or tactile hallucinations. It is not a disorder unless severe, and is often seen as part of narcolepsy.
Sleepwalking or somnambulism, engaging in activities normally associated with wakefulness (such as eating or dressing), which may include walking, without the conscious knowledge of the subject.
Somniphobia, one cause of sleep deprivation, a dread or fear of falling asleep or going to bed. Signs of the illness include anxiety and panic attacks before and during attempts to sleep.
Types
Dyssomnias – A broad category of sleep disorders characterized by either hypersomnia or insomnia. The three major subcategories include intrinsic (i.e., arising from within the body), extrinsic (secondary to environmental conditions or various pathologic conditions), and disturbances of circadian rhythm.
Insomnia: Insomnia may be primary or it may be comorbid with or secondary to another disorder, such as a mood disorder (e.g., emotional stress, anxiety, depression) or an underlying health condition (e.g., asthma, diabetes, heart disease, pregnancy, or neurological conditions).
Primary hypersomnia: Hypersomnia of central or brain origin
Narcolepsy: A chronic neurological disorder (or dyssomnia), which is caused by the brain's inability to control sleep and wakefulness.
Idiopathic hypersomnia: A chronic neurological disease similar to narcolepsy, in which there is an increased amount of fatigue and sleep during the day. Patients who have idiopathic hypersomnia cannot obtain enough restful sleep to sustain a regular day of activities. This hinders patients' ability to perform well, and they must cope with the condition for the rest of their lives.
Recurrent hypersomnia, including Kleine–Levin syndrome
Post-traumatic hypersomnia
Menstrual-related hypersomnia
Sleep disordered breathing (SDB), including (non-exhaustive):
Several types of sleep apnea
Snoring
Upper airway resistance syndrome
Restless legs syndrome
Periodic limb movement disorder
Circadian rhythm sleep disorders
Delayed sleep phase disorder
Advanced sleep phase disorder
Non-24-hour sleep–wake disorder
Parasomnias – A category of sleep disorders that involve abnormal and unnatural movements, behaviors, emotions, perceptions, and dreams in connection with sleep.
Bedwetting or sleep enuresis
Bruxism (Tooth-grinding)
Catathrenia – nocturnal groaning
Exploding head syndrome – Waking up in the night hearing loud noises.
Sleep terror (or Pavor nocturnus) – Characterized by a sudden arousal from deep sleep with a scream or cry, accompanied by some behavioral manifestations of intense fear.
REM sleep behavior disorder
Sleepwalking (or somnambulism)
Sleep talking (or somniloquy)
Sleep sex (or sexsomnia)
Medical or psychiatric conditions that may produce sleep disorders
22q11.2 deletion syndrome
Alcoholism
Mood disorders
Depression
Anxiety disorder
Nightmare disorder
Panic
Psychosis (such as Schizophrenia)
Sleeping sickness – a parasitic disease which can be transmitted by the Tsetse fly.
Causes
A systematic review found that traumatic childhood experiences (such as family conflict or sexual trauma) significantly increase the risk for a number of sleep disorders in adulthood, including sleep apnea, narcolepsy, and insomnia.
In addition, an evidence-based synopsis suggests that idiopathic REM sleep behavior disorder (iRBD) may have a hereditary component. A total of 632 participants, half with iRBD and half without, completed self-report questionnaires. The results of the study suggest that people with iRBD are more likely to report having a first-degree relative with the same sleep disorder than people of the same age and sex who do not have the disorder. More research needs to be conducted to further understand the hereditary nature of sleep disorders.
A population susceptible to the development of sleep disorders includes people who have experienced a traumatic brain injury (TBI). Because many researchers have focused on this issue, a systematic review was conducted to synthesize their findings. Its results indicate that individuals who experienced a TBI are disproportionately at risk of developing narcolepsy, obstructive sleep apnea, excessive daytime sleepiness, and insomnia.
Sleep disorders and neurodegenerative diseases
Neurodegenerative diseases have often been associated with sleep disorders, mainly those characterized by abnormal accumulation of alpha-synuclein, such as multiple system atrophy (MSA), Parkinson's disease (PD) and Lewy body disease (LBD). For instance, people diagnosed with PD often present various kinds of sleep concerns, commonly insomnia (around 70% of the PD population), hypersomnia (more than 50% of the PD population), and REM sleep behavior disorder (RBD), which may affect around 40% of the PD population and is associated with increased motor symptoms. Furthermore, RBD has been highlighted as a strong precursor of these neurodegenerative diseases, appearing several years in advance, which presents an opportunity for earlier intervention and improved treatment.
Sleep disturbances have also been observed in Alzheimer's disease (AD), affecting about 45% of its population; when based on caregiver reports, this percentage rises to about 70%. As in the PD population, insomnia and hypersomnia are frequently recognized in AD patients, and have been associated with accumulation of beta-amyloid, circadian rhythm sleep disorders (CRSD) and melatonin alteration. Additionally, changes in sleep architecture are observed in AD. Although sleep architecture naturally changes with age, its deterioration appears aggravated in AD patients: SWS potentially decreases (and is sometimes absent), spindles and the length of time spent in REM sleep are also reduced, while REM latency increases. Poor sleep onset in AD has also been associated with dream-related hallucinations, increased restlessness, wandering and agitation, which seem related to sundowning, a typical chronobiological phenomenon presented in the disease.
Neurodegenerative conditions are commonly related to structural brain impairment, which might disrupt the states of sleep and wakefulness, circadian rhythm, and motor or non-motor functioning. On the other hand, sleep disturbances are frequently related to worsening of patients' cognitive functioning, emotional state and quality of life. Furthermore, these abnormal behavioral symptoms contribute to overwhelming their relatives and caregivers. The limited research on this relationship and the increasing life expectancy call for a deeper understanding of the links between sleep disorders and neurodegenerative disease.
Sleep disturbances and Alzheimer's disease
More than 70% of people with dementia are affected by Alzheimer's disease (AD). Despite this high number, our understanding of the underlying mechanisms of disease progression remains very limited. However, recent studies have highlighted a link between sleep disturbances and Alzheimer's disease.
Sleep changes with normal aging. Over time, a decrease in time spent sleeping and in the quantity of NREM sleep can be observed, specifically in NREM SWS (less than 10% of SWS is maintained). Older people are also more prone to insomnia and sleep apnea.
In Alzheimer's disease, in addition to cognitive decline and memory impairment, there are significant sleep disturbances with modified sleep architecture. The latter may consist of sleep fragmentation, reduced sleep duration, insomnia, increased daytime napping, decreased quantity of some sleep stages, and a growing resemblance between some sleep stages (N1 and N2). More than 65% of people with Alzheimer's disease have this type of sleep disturbance.
One factor that could explain this change in sleep architecture is a change in the circadian rhythm, which regulates sleep. A disruption of the circadian rhythm would generate sleep disturbances. Some studies show that people with AD have a delayed circadian rhythm, whereas in normal aging an advanced circadian rhythm is present.
In addition to these psychological symptoms, at a neurological level there are two main hallmarks of Alzheimer's disease: an accumulation of beta-amyloid waste forming aggregate "plaques", and an accumulation of tau protein.
It has been shown that the sleep-wake cycle acts on the beta-amyloid burden, which is a central component found in AD. During wakefulness, the production of beta-amyloid protein is more consistent than during sleep. This is explained by two phenomena. The first is that metabolic activity is higher during waking, resulting in greater secretion of beta-amyloid protein. The second is that oxidative stress also increases, which leads to greater beta-amyloid (AB) production.
On the other hand, it is during sleep that beta-amyloid residues are degraded, preventing plaque formation. The glymphatic system is responsible for this through the phenomenon of glymphatic clearance. Thus, during wakefulness the AB burden is greater, because metabolic activity and oxidative stress are higher and there is no protein degradation by glymphatic clearance; during sleep, the burden is reduced, as there is less metabolic activity and oxidative stress, in addition to the glymphatic clearance that occurs.
Glymphatic clearance occurs during NREM SWS. This sleep stage decreases in normal aging, resulting in less glymphatic clearance and an increased AB burden that forms AB plaques. Therefore, sleep disturbances in individuals with AD amplify this phenomenon.
The decrease in the quantity and quality of NREM SWS, as well as the disturbances of sleep, will therefore increase AB plaques. This initially occurs in the hippocampus, a brain structure integral to long-term memory formation. Hippocampal cell death occurs, which contributes to the diminished memory performance and cognitive decline found in AD.
Although the causal relationship is unclear, the development of AD correlates with the development of prominent sleep disorders. In the same way, sleep disorders exacerbate disease progression, forming a positive feedback loop. As a result, sleep disturbances are no longer only a symptom of AD; the relationship between sleep disturbances and AD is bidirectional.
At the same time, it has been shown that memory consolidation in long-term memory (which depends on the hippocampus) occurs during NREM sleep. This indicates that a decrease in NREM sleep will result in less consolidation, and hence poorer memory performance in hippocampal-dependent long-term memory. This drop in performance is one of the central symptoms of AD.
Recent studies have also linked sleep disturbances, neurogenesis and AD. The subgranular zone and the subventricular zone continue to produce new neurons in adult brains. These new cells are then incorporated into neuronal circuits; the subgranular zone is found in the hippocampus. The new cells contribute to learning and memory, playing an essential role in hippocampal-dependent memory.
However, recent studies have shown that several factors can interrupt neurogenesis, including stress and prolonged sleep deprivation (more than one day). The sleep disturbances encountered in AD could therefore suppress neurogenesis, and thus impair hippocampal functions. This would contribute to diminished memory performance and the progression of AD, while the progression of AD would in turn aggravate sleep disturbances.
Changes in sleep architecture found in patients with AD occur during the preclinical phase of AD. These changes could be used to detect those most at risk of developing AD, though this remains theoretical.
While the exact mechanisms and the causal relationship between sleep disturbances and AD remain unclear, these findings already provide a better understanding and offer possibilities for better targeting of at-risk populations, and for implementing treatments to curb the cognitive decline of AD patients.
Sleep disorder symptoms in psychiatric illnesses
Schizophrenia
In individuals with psychiatric illnesses, sleep disorders may include a variety of clinical symptoms, including but not limited to: excessive daytime sleepiness, difficulty falling asleep, difficulty staying asleep, nightmares, sleep talking, sleepwalking, and poor sleep quality. Sleep disturbances - insomnia, hypersomnia and delayed sleep-phase disorder - are quite prevalent in severe mental illnesses such as psychotic disorders. In those with schizophrenia, sleep disorders contribute to cognitive deficits in learning and memory. Sleep disturbances often occur before the onset of psychosis.
Sleep deprivation can also produce hallucinations, delusions and depression. A 2019 study investigated the three above-mentioned sleep disturbances in schizophrenia-spectrum (SCZ) and bipolar (BD) disorders in 617 SCZ individuals, 440 BD individuals, and 173 healthy controls (HC). Sleep disturbances were identified using the Inventory for Depressive Symptoms - clinician rated scale (IDS-C). Results suggested that at least one type of sleep disturbance was reported in 78% of the SCZ population, 69% of individuals with BD, and 39% of healthy controls. The SCZ group reported the greatest number of sleep disturbances compared with the BD and HC groups; specifically, hypersomnia was more frequent among individuals with SCZ, and delayed sleep phase disorder was three times more common in the SCZ group than in the BD group. Insomnia was the most frequently reported sleep disturbance across all three groups.
Bipolar disorder
One of the main behavioral symptoms of bipolar disorder is abnormal sleep. Studies have suggested that 23-78% of individuals with bipolar disorders consistently report symptoms of excessive time spent sleeping, or hypersomnia. The pathogenesis of bipolar disorder, including the higher risk of suicidal ideation, could possibly be linked to circadian rhythm variability, and sleep disturbances are a good predictor of mood swings. The most common sleep-related symptom of bipolar disorder is insomnia, in addition to hypersomnia, nightmares, poor sleep quality, OSA, and extreme daytime sleepiness. Moreover, animal models have shown that sleep debt can induce episodes of bipolar mania in laboratory mice, but these models are still limited in their potential to explain bipolar disorder in humans with all its multifaceted symptoms, including those related to sleep disturbances.
Major depressive disorder (MDD)
Sleep disturbances (insomnia or hypersomnia) are not a necessary diagnostic criterion for major depressive disorder (MDD), but they are among the most frequent symptoms of individuals with the condition. Among individuals with MDD, insomnia and hypersomnia have prevalence estimates of 88% and 27%, respectively, whereas individuals with insomnia have a threefold increased risk of developing MDD. Depressed mood and sleep efficiency strongly co-vary, and while sleep regulation problems may precede depressive episodes, such depressive episodes may also precipitate sleep deprivation. Fatigue, as well as sleep disturbances such as irregular and excessive sleepiness, is linked to symptoms of depression. Recent research has even pointed to sleep problems and fatigue as potential driving forces bridging MDD symptoms to those of co-occurring generalized anxiety disorder.
Treatment
Treatments for sleep disorders generally can be grouped into four categories:
Behavioral and psychotherapeutic treatment
Rehabilitation and management
Medication
Other somatic treatment
None of these general approaches are sufficient for all patients with sleep disorders. Rather, the choice of a specific treatment depends on the patient's diagnosis, medical and psychiatric history, and preferences, as well as the expertise of the treating clinician. Often, behavioral/psychotherapeutic and pharmacological approaches may be compatible, and can effectively be combined to maximize therapeutic benefits.
Management of sleep disturbances that are secondary to mental, medical, or substance abuse disorders should focus on the underlying conditions. Medications and somatic treatments may provide the most rapid symptomatic relief from certain disorders, such as narcolepsy, which is best treated with prescription drugs such as modafinil. Others, such as chronic and primary insomnia, may be more amenable to behavioral interventions—with more durable results.
Chronic sleep disorders in childhood, which affect some 70% of children with developmental or psychological disorders, are under-reported and under-treated. Sleep-phase disruption is also common among adolescents, whose school schedules are often incompatible with their natural circadian rhythm. Effective treatment begins with careful diagnosis using sleep diaries and perhaps sleep studies. Modifications in sleep hygiene may resolve the problem, but medical treatment is often warranted.
Special equipment may be required for treatment of several disorders such as obstructive apnea, circadian rhythm disorders and bruxism. In severe cases, it may be necessary for individuals to accept living with the disorder, however well managed.
Some sleep disorders have been found to compromise glucose metabolism.
Allergy treatment
Histamine plays a role in wakefulness in the brain. An allergic reaction overproduces histamine, causing wakefulness and inhibiting sleep. Sleep problems are common in people with allergic rhinitis. A study from the NIH found that sleep is dramatically impaired by allergic symptoms, and that the degree of impairment is related to the severity of those symptoms. Treatment of allergies has also been shown to help sleep apnea.
Acupuncture
A review of the evidence in 2012 concluded that current research is not rigorous enough to make recommendations around the use of acupuncture for insomnia. The pooled results of two trials on acupuncture showed a moderate likelihood that there may be some improvement to sleep quality for individuals with insomnia. This form of treatment for sleep disorders has generally been studied in adults, rather than children. Further research would be needed to study the effects of acupuncture on sleep disorders in children.
Hypnosis
Research suggests that hypnosis may be helpful in alleviating some types and manifestations of sleep disorders in some patients. "Acute and chronic insomnia often respond to relaxation and hypnotherapy approaches, along with sleep hygiene instructions." Hypnotherapy has also helped with nightmares and sleep terrors. There are several reports of successful use of hypnotherapy for parasomnias, specifically for head and body rocking, bedwetting and sleepwalking. Hypnotherapy has been studied in the treatment of sleep disorders in both adults and children.
Music therapy
Although more research should be done to increase the reliability of this method of treatment, research suggests that music therapy can improve sleep quality in acute and chronic sleep disorders. In one particular study, participants (18 years or older) who had experienced acute or chronic sleep disorders were put in a randomized controlled trial, and their sleep efficiency, in the form of overall time asleep, was observed. In order to assess sleep quality, researchers used subjective measures (i.e. questionnaires) and objective measures (i.e. polysomnography). The results of the study suggest that music therapy did improve sleep quality in subjects with acute or chronic sleep disorders, though only when tested subjectively. Although these results are not fully conclusive and more research should be conducted, they still provide evidence that music therapy can be an effective treatment for sleep disorders.
In another study specifically looking to help people with insomnia, similar results were seen. The participants who listened to music experienced better sleep quality than those who did not. Listening to slower-paced music before bed can help decrease the heart rate, making it easier to transition into sleep. Studies have indicated that music helps induce a state of relaxation that shifts an individual's internal clock towards the sleep cycle. This is said to have an effect on children and adults with various cases of sleep disorders. Music is most effective before bed once the brain has been conditioned to it, helping to achieve sleep much faster.
Melatonin
Research suggests that melatonin is useful in helping people fall asleep faster (decreased sleep latency), stay asleep longer, and experience improved sleep quality. To test this, a study compared subjects with primary sleep disorders who had taken melatonin with those who had taken a placebo. Researchers assessed sleep onset latency, total minutes slept, and overall sleep quality in the melatonin and placebo groups to note the differences. In the end, researchers found that melatonin decreased sleep onset latency and increased total sleep time, but had an insignificant and inconclusive impact on the quality of sleep compared with the placebo group.
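The study's exact statistical methods are not given here, so the following is only a rough sketch of how such a between-group comparison of sleep onset latency could look. The numbers are invented, and Welch's t-statistic is merely one plausible choice of test, not necessarily the one the researchers used.

```python
# Toy comparison of sleep-onset latency (minutes) between two groups.
# All numbers are synthetic; this only illustrates the shape of the analysis.
from statistics import mean, stdev
from math import sqrt

melatonin = [22, 18, 25, 20, 17, 23, 19, 21]  # hypothetical latencies
placebo   = [33, 29, 35, 31, 28, 36, 30, 34]  # hypothetical latencies

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    return (mean(a) - mean(b)) / sqrt(stdev(a)**2 / len(a) + stdev(b)**2 / len(b))

print(f"melatonin mean latency: {mean(melatonin):.1f} min")
print(f"placebo mean latency:   {mean(placebo):.1f} min")
# A large-magnitude negative t suggests shorter latency in the melatonin group.
print(f"Welch t-statistic: {welch_t(melatonin, placebo):.2f}")
```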
Sleep medicine
Due to rapidly increasing knowledge and understanding of sleep in the 20th century, including the discovery of REM sleep in the 1950s and circadian rhythm disorders in the 1970s and 1980s, the medical importance of sleep was recognized. By the 1970s in the US, clinics and laboratories devoted to the study of sleep and sleep disorders had been founded, and a need for standards arose. The medical community began paying more attention to primary sleep disorders, such as sleep apnea, as well as the role and quality of sleep in other conditions.
Specialists in sleep medicine were originally certified, and continue to be certified, by the American Board of Sleep Medicine. Those passing the Sleep Medicine Specialty Exam received the designation "diplomate of the ABSM." Sleep medicine is now a recognized subspecialty within internal medicine, family medicine, pediatrics, otolaryngology, psychiatry and neurology in the United States. Certification in sleep medicine shows that the specialist "has demonstrated expertise in the diagnosis and management of clinical conditions that occur during sleep, that disturb sleep, or that are affected by disturbances in the wake-sleep cycle. This specialist is skilled in the analysis and interpretation of comprehensive polysomnography, and well-versed in emerging research and management of a sleep laboratory."
Competence in sleep medicine requires an understanding of a myriad of very diverse disorders, many of which present with similar symptoms such as excessive daytime sleepiness, which, in the absence of volitional sleep deprivation, "is almost inevitably caused by an identifiable and treatable sleep disorder", such as sleep apnea, narcolepsy, idiopathic hypersomnia, Kleine–Levin syndrome, menstrual-related hypersomnia, idiopathic recurrent stupor, or circadian rhythm disturbances. Another common complaint is insomnia, a set of symptoms which can have a great many different causes, physical and mental. Management in the varying situations differs greatly and cannot be undertaken without a correct diagnosis.
Sleep dentistry (bruxism, snoring and sleep apnea), while not recognized as one of the nine dental specialties, qualifies for board-certification by the American Board of Dental Sleep Medicine (ABDSM). Qualified dentists collaborate with sleep physicians at accredited sleep centers, and can provide oral appliance therapy and upper airway surgery to treat or manage sleep-related breathing disorders. The resulting diplomate status is recognized by the American Academy of Sleep Medicine (AASM), and these dentists are organized in the Academy of Dental Sleep Medicine (USA).
Occupational therapy is an area of medicine that can also address a diagnosis of sleep disorder, as rest and sleep is listed in the Occupational Therapy Practice Framework (OTPF) as its own occupation of daily living. Rest and sleep are described as restorative in order to support engagement in other occupations. In the OTPF, the occupation of rest and sleep is broken down into rest, sleep preparation, and sleep participation. Occupational therapists have been shown to help improve restorative sleep through the use of assistive devices/equipment, cognitive behavioral therapy for insomnia, therapeutic activities, and/or lifestyle interventions.
In the UK, knowledge of sleep medicine and possibilities for diagnosis and treatment seem to lag. Imperial College Healthcare gives attention to obstructive sleep apnea syndrome (OSA) and very few other sleep disorders. Some NHS trusts have specialist clinics for respiratory and/or neurological sleep medicine.
Epidemiology
Children and young adults
According to one meta-analysis of sleep disorders in children, confusional arousals and sleepwalking are the two most common sleep disorders among children. An estimated 17.3% of children between 3 and 13 years old experience confusional arousals. About 17% of children sleepwalk, with the disorder more common among boys than girls; the peak ages of sleepwalking are from 8 to 12 years old.
A different systematic review reports a wide range of prevalence rates of sleep bruxism for children. Parasomnias like sleepwalking and sleep talking typically occur during the first part of an individual's sleep cycle, the first period of slow-wave sleep. During this period the mind and body slow down, causing one to feel drowsy and relaxed. Arousal from slow-wave sleep is difficult, which is why many children do not remember what happened during these episodes.
Nightmares are also considered a parasomnia among children, who typically remember what took place during the nightmare. However, nightmares only occur during the last stage of sleep - rapid eye movement (REM) sleep. REM sleep is named for the rapid eye movements that occur during it, and during this period of the sleep cycle an individual displays a host of neurological and physiological responses similar to those of wakefulness.
Between 15.29% and 38.6% of preschoolers grind their teeth at least one night a week. All but one of the included studies reported decreasing bruxism prevalence as age increased, as well as a higher prevalence among boys than girls.
Another systematic review noted that 7-16% of young adults have delayed sleep phase disorder. This disorder reaches peak prevalence when people are in their 20s. Between 20% and 26% of adolescents report a sleep onset latency of greater than 30 minutes, and 7-36% have difficulty initiating sleep. Asian teens tend to have a higher prevalence of all of these adverse sleep outcomes than their North American and European counterparts.
By adulthood, parasomnias are normally resolved as a person grows; however, 4% of people have recurring symptoms.
Effects of untreated sleep disorders
Children and young adults who do not get enough sleep because of sleep disorders also face other health problems, such as obesity and physical issues that can interfere with everyday life. It is recommended that children and young adults get the hours of sleep recommended by the CDC, as doing so supports mental health, physical health, and more.
Insomnia
Insomnia is a prevalent form of sleep deprivation. Individuals with insomnia may have problems falling asleep, staying asleep, or a combination of both, resulting in hyposomnia, i.e. insufficient quantity and poor quality of sleep.
Combining results from 17 studies on insomnia in China, a pooled prevalence of 15.0% is reported for the country. This result is consistent with other East Asian countries; however, it is considerably lower than in a series of Western countries (50.5% in Poland, 37.2% in France and Italy, 27.1% in the USA). Men and women residing in China experience insomnia at similar rates.
A separate meta-analysis focusing on this sleep disorder in the elderly mentions that those with more than one physical or psychiatric malady experience it at a 60% higher rate than those with one condition or less. It also notes a higher prevalence of insomnia in women over the age of 50 than in their male counterparts.
A study that resulted from a collaboration between Massachusetts General Hospital and Merck describes the development of an algorithm to identify patients with sleep disorders using electronic medical records. The algorithm, which incorporated a combination of structured and unstructured variables, identified more than 36,000 individuals with physician-documented insomnia.
Insomnia can start off mild, but about 40% of people who struggle with insomnia develop worse symptoms. Treatments that can help with insomnia include medication, planning out a sleep schedule, limiting caffeine intake, and cognitive behavioral therapy.
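The article does not describe the algorithm itself, so the sketch below is only a hypothetical illustration of the general idea of combining structured variables (diagnosis codes) with unstructured ones (free-text clinical notes) to flag possible insomnia. Every field name, code list, and keyword here is invented for illustration and is not the study's actual variable set.

```python
# Hypothetical sketch: flag possible insomnia from an EMR-like record by
# combining structured codes with keyword matching over free-text notes.
import re

INSOMNIA_CODES = {"G47.00", "F51.01"}  # assumed structured diagnosis codes
NOTE_PATTERN = re.compile(r"\b(insomnia|can't sleep|difficulty sleeping)\b", re.I)

def flag_possible_insomnia(record: dict) -> bool:
    """Return True if structured codes or free-text notes suggest insomnia."""
    structured_hit = bool(INSOMNIA_CODES & set(record.get("icd_codes", [])))
    unstructured_hit = any(NOTE_PATTERN.search(note)
                           for note in record.get("notes", []))
    return structured_hit or unstructured_hit

# Example record with both kinds of evidence
patient = {"icd_codes": ["G47.00"], "notes": ["Pt reports difficulty sleeping."]}
print(flag_possible_insomnia(patient))  # True
```

A real system would, of course, need validated code lists, negation handling in the notes, and clinical review; the point is only the structured-plus-unstructured design mentioned in the study description.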
Obstructive sleep apnea
Obstructive sleep apnea (OSA) affects around 4% of men and 2% of women in the United States </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Murray Valley encephalitis virus and explain in detail? [/INST] Murray Valley encephalitis virus (MVEV) is a zoonotic flavivirus endemic to northern Australia and Papua New Guinea. It is the causal agent of Murray Valley encephalitis (MVE; previously known as Australian encephalitis or Australian X disease). In humans, it can cause permanent neurological disease or death. MVEV is related to Kunjin virus, which has a similar ecology, but a lower morbidity rate. Although the arbovirus is endemic to Northern Australia, it has occasionally spread to the southern states during times of heavy rainfall during the summer monsoon season via seasonal flooding of the Murray-Darling River system. These outbreaks can be "...decades apart, with no or very few cases identified in between".
Vector
MVEV is a mosquito-borne virus that is maintained in a bird-mosquito-bird cycle </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Blepharospasm and explain in detail? [/INST] Blepharospasm is any abnormal contraction of the orbicularis oculi muscle. The condition should be distinguished from the more common, and milder, involuntary quivering of an eyelid, known as myokymia, or fasciculation. In most cases, blepharospasm symptoms last for a few days and then disappear without treatment, but in some cases the twitching is chronic and persistent, causing life-long challenges. In these cases, the symptoms are often severe enough to result in functional blindness. The person's eyelids feel like they are clamping shut and will not open without great effort. Affected people have normal eyes, but for periods of time are effectively blind due to their inability to open their eyelids. In contrast, reflex blepharospasm is due to pain in and around the eye.
It is of two types: essential and reflex blepharospasm. The benign essential blepharospasm (BEB) is a focal dystonia—a neurological movement disorder involving involuntary and sustained contractions of the muscles around the eyes. The term essential indicates that the cause is unknown, but fatigue, stress, or an irritant are possible contributing factors. Blepharospasm is sometimes part of benign fasciculation syndrome.
Although there is no cure, botulinum toxin injections may help temporarily. A surgical procedure known as myectomy may also be useful. BEB is a fairly rare disease, affecting only one in every 20,000 people in the United States. The word is from Greek: βλέφαρον / blepharon, eyelid, and σπασμός / spasmos, spasm, an uncontrolled muscle contraction.
Signs and symptoms
Excessive blinking and spasming of one or both eyes – characterized by uncontrollable eyelid closure of durations longer than the typical blink reflex. The spells of spasming may last for minutes or even hours
Uncontrollable contractions or twitches of the eye muscles and surrounding facial area. Some sufferers have twitching symptoms that radiate into the nose, face, cheeks, and sometimes, the neck area
Dryness of the eyes
Sensitivity to the sun and bright light
Causes
Some causes of blepharospasm have been identified; however, the causes of many cases of blepharospasm remain unknown. Some people with blepharospasm have a history of dry eyes, light sensitivity, and even fatigue. Others report no eye problems before onset of symptoms.
Some drugs can induce blepharospasm, such as those used to treat Parkinson's disease, as well as hormone treatments, including estrogen-replacement therapy for women going through menopause. Blepharospasm can also be a symptom of acute withdrawal from benzodiazepines. Prolonged use of benzodiazepines can induce blepharospasm and is a known risk factor for its development.
Blepharospasm may also come from abnormal functioning of the brain's basal ganglia. Simultaneous dry eye and dystonias such as Meige's syndrome have been observed. Blepharospasm can be caused by concussions in some rare cases, when a blow to the back of the head damages the basal ganglia. Multiple sclerosis can also cause blepharospasm.
Treatment
Drug therapy for blepharospasm has proved generally unpredictable and short-lived. Anticholinergics, tranquillizing drugs and botulinum toxin are the most commonly used therapeutic options. However, serious side effects, as well as failure of therapy, can be observed. It is therefore not surprising that new therapies are constantly being tested. Against this backdrop, new evidence suggests mosapride may be a safe and affordable therapeutic option for blepharospasm.
Botulinum toxin injections have been used to induce localized, partial paralysis. Among most sufferers, botulinum toxin injection is the preferred treatment method. Injections are generally administered every three months, with variations based on patient response, and usually give almost immediate relief (though for some it may take more than a week) of symptoms from the muscle spasms. Most patients can resume a relatively normal life with regular botulinum toxin treatments </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Determinism and explain in detail? [/INST] Determinism is a philosophical view that all events are determined completely by previously existing causes. Deterministic theories throughout the history of philosophy have developed from diverse and sometimes overlapping motives and considerations. The opposite of determinism is some kind of indeterminism (otherwise called nondeterminism) or randomness. Determinism is often contrasted with free will, although some philosophers claim that the two are compatible.
Determinism is often used to mean causal determinism, which in physics is known as cause-and-effect. This is the concept that events within a given paradigm are bound by causality in such a way that any state of an object or event is completely determined by its prior states. This meaning can be distinguished from other varieties of determinism mentioned below.
Debates about determinism often concern the scope of determined systems; some maintain that the entire universe is a single determinate system, and others identifying more limited determinate systems (or multiverse). Historical debates involve many philosophical positions and varieties of determinism. They include debates concerning determinism and free will, technically denoted as compatibilistic (allowing the two to coexist) and incompatibilistic (denying their coexistence is a possibility).
Determinism should not be confused with the self-determination of human actions by reasons, motives, and desires. Determinism is about interactions which affect our cognitive processes in our lives: the causes and the results of what we have done, which are always bound together in cognitive processes. It assumes that if an observer had sufficient information about an object or human being, such an observer might be able to predict every consequent move of that object or human being. Determinism rarely requires that perfect prediction be practically possible.
Varieties
"Determinism" may commonly refer to any of the following viewpoints.
Causal
Causal determinism, sometimes synonymous with historical determinism (a sort of path dependence), is "the idea that every event is necessitated by antecedent events and conditions together with the laws of nature." However, it is a broad enough term to allow that:
...one's deliberations, choices, and actions will often be necessary links in the causal chain that brings something about. In other words, even though our deliberations, choices, and actions are themselves determined like everything else, it is still the case, according to causal determinism, that the occurrence or existence of yet other things depends upon our deliberating, choosing and acting in a certain way.
Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. The relation between events may not be specified, nor the origin of that universe. Causal determinists believe that there is nothing in the universe that has no cause or is self-caused.
Causal determinism has also been considered more generally as the idea that everything that happens or exists is caused by antecedent conditions. In the case of nomological determinism, these conditions are considered events also, implying that the future is determined completely by preceding events—a combination of prior states of the universe and the laws of nature. Yet they can also be considered metaphysical of origin (such as in the case of theological determinism).
Nomological
Nomological determinism, generally synonymous with physical determinism (its opposite being physical indeterminism), the most common form of causal determinism, is the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, that every occurrence results inevitably from prior events. Nomological determinism is sometimes illustrated by the thought experiment of Laplaces demon. Nomological determinism is sometimes called scientific determinism, although that is a misnomer.
Necessitarianism
Necessitarianism is closely related to the causal determinism described above. It is a metaphysical principle that denies all mere possibility; there is exactly one way for the world to be. Leucippus claimed there were no uncaused events, and that everything occurs for a reason and by necessity.
Predeterminism
Predeterminism is the idea that all events are determined in advance. The concept is often argued by invoking causal determinism, implying that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. In the case of predeterminism, this chain of events has been pre-established, and human actions cannot interfere with the outcomes of this pre-established chain.
Predeterminism can be used to mean such pre-established causal determinism, in which case it is categorized as a specific type of determinism. It can also be used interchangeably with causal determinism—in the context of its capacity to determine future events. Despite this, predeterminism is often considered as independent of causal determinism.
Biological
The term predeterminism is also frequently used in the context of biology and heredity, in which case it represents a form of biological determinism, sometimes called genetic determinism. Biological determinism is the idea that all human behaviors, beliefs, and desires are fixed by our genetic nature.
Friedrich Nietzsche explained that a human being is "determined" by his or her body, being subject to passions, impulses and instincts.
Fatalism
Fatalism is normally distinguished from "determinism", as a form of teleological determinism. Fatalism is the idea that everything is fated to happen, so that humans have no control over their future. Fate has arbitrary power, and need not follow any causal or otherwise deterministic laws. Types of fatalism include hard theological determinism and the idea of predestination, where there is a God who determines all that humans will do. This may be accomplished either by knowing their actions in advance, via some form of omniscience or by decreeing their actions in advance.
Theological determinism
Theological determinism is a form of determinism that holds that all events that happen are either preordained (i.e., predestined) to happen by a monotheistic deity, or are destined to occur given its omniscience. Two forms of theological determinism exist, referred to as strong and weak theological determinism.
Strong theological determinism is based on the concept of a creator deity dictating all events in history: "everything that happens has been predestined to happen by an omniscient, omnipotent divinity."
Weak theological determinism is based on the concept of divine foreknowledge: "because God's omniscience is perfect, what God knows about the future will inevitably happen, which means, consequently, that the future is already fixed." There exist slight variations on this categorisation, however. Some claim either that theological determinism requires predestination of all events and outcomes by the divinity (i.e., they do not classify the weaker version as theological determinism unless libertarian free will is assumed to be denied as a consequence), or that the weaker version does not constitute theological determinism at all.
With respect to free will, "theological determinism is the thesis that God exists and has infallible knowledge of all true propositions including propositions about our future actions", a more minimal criterion designed to encapsulate all forms of theological determinism.
Theological determinism can also be seen as a form of causal determinism, in which the antecedent conditions are the nature and will of God. Some have asserted that Augustine of Hippo introduced theological determinism into Christianity in 412 CE, whereas all prior Christian authors supported free will against Stoic and Gnostic determinism. However, there are many Biblical passages that seem to support the idea of some kind of theological determinism.
Adequate determinism
Adequate determinism is the idea that, because of quantum decoherence, quantum indeterminacy can be ignored for most macroscopic events. Random quantum events "average out" in the limit of large numbers of particles (where the laws of quantum mechanics asymptotically approach the laws of classical mechanics). Stephen Hawking explains a similar idea: he says that the microscopic world of quantum mechanics is one of determined probabilities. That is, quantum effects rarely alter the predictions of classical mechanics, which are quite accurate (albeit still not perfectly certain) at larger scales. Something as large as an animal cell, then, would be "adequately determined" (even in light of quantum indeterminacy).
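A toy numerical sketch of this "averaging out": if each of n independent microscopic events contributes a small random fluctuation, the mean fluctuation shrinks as n grows, so the aggregate behaves almost deterministically. This is only a statistical illustration of the law of large numbers, not a quantum simulation.

```python
# Toy illustration: many independent random events "average out", so the
# aggregate becomes effectively predictable despite microscopic randomness.
import random

random.seed(0)

for n in (10, 1_000, 1_000_000):
    # Each event randomly contributes +1 or -1, like an unbiased fluctuation.
    mean = sum(random.choice((-1, 1)) for _ in range(n)) / n
    print(f"n={n:>9,}  mean fluctuation = {mean:+.5f}")
# As n grows, the mean fluctuation shrinks toward 0, so macroscopic
# averages look deterministic even though each event is random.
```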
Many-worlds
The many-worlds interpretation accepts the linear causal sets of sequential events with adequate consistency, yet also suggests constant forking of causal chains creating "multiple universes" to account for multiple outcomes from single events. This means the causal set of events leading to the present is valid, yet appears as a singular linear time stream within a much broader unseen conic probability field of other outcomes that "split off" from the locally observed timeline. Under this model, causal sets are still "consistent" yet not exclusive to singular iterated outcomes.
The interpretation sidesteps the exclusive retrospective causal chain problem of "could not have done otherwise" by suggesting that "the other outcome does exist" in a set of parallel universe time streams that split off when the action occurred. This theory is sometimes described with the example of agent-based choices, but more involved models argue that recursive causal splitting occurs with all particle wave functions at play. This model is highly contested, with multiple objections from the scientific community.
Philosophical varieties
Determinism in nature/nurture controversy
Although some of the above forms of determinism concern human behaviors and cognition, others frame themselves as an answer to the debate on nature and nurture. They suggest that one factor entirely determines behavior. As scientific understanding has grown, however, the strongest versions of these theories have been widely rejected as a single-cause fallacy. In other words, modern deterministic theories attempt to explain how the interaction of both nature and nurture is entirely predictable. The concept of heritability has been helpful in making this distinction.
Biological determinism, sometimes called genetic determinism, is the idea that all human behaviors, beliefs, and desires are fixed by our genetic nature.
Behaviorism involves the idea that all behavior can be traced to specific causes—either environmental or reflexive. John B. Watson and B. F. Skinner developed this nurture-focused determinism.
Cultural materialism contends that the physical world impacts and sets constraints on human behavior.
Cultural determinism, along with social determinism, is the nurture-focused theory that the culture in which we are raised determines who we are.
Environmental determinism, also known as climatic or geographical determinism, proposes that the physical environment, rather than social conditions, determines culture. Supporters of environmental determinism often also support behavioral determinism. Key proponents of this notion have included Ellen Churchill Semple, Ellsworth Huntington, Thomas Griffith Taylor and possibly Jared Diamond, although his status as an environmental determinist is debated.
Determinism and prediction
Other deterministic theories actually seek only to highlight the importance of a particular factor in predicting the future. These theories often use the factor as a sort of guide or constraint on the future. They need not suppose that complete knowledge of that one factor would allow us to make perfect predictions.
Psychological determinism can mean that humans must act according to reason, but it can also be synonymous with some sort of psychological egoism. The latter is the view that humans will always act according to their perceived best interest.
Linguistic determinism proposes that language determines (or at least limits) the things that humans can think and say and thus know. The Sapir–Whorf hypothesis argues that individuals experience the world based on the grammatical structures they habitually use.
Economic determinism attributes primacy to economic structure over politics in the development of human history. It is associated with the dialectical materialism of Karl Marx.
Technological determinism is the theory that a societys technology drives the development of its social structure and cultural values.
Structural determinism
Structural determinism is the philosophical view that actions, events, and processes are predicated on and determined by structural factors. Given any particular structure or set of estimable components, it is a concept that emphasizes rational and predictable outcomes. Chilean biologists Humberto Maturana and Francisco Varela popularized the notion, writing that a living system's general order is maintained via a circular process of ongoing self-referral, and thus its organization and structure define the changes it undergoes. According to the authors, a system can undergo changes of state (alteration of structure without loss of identity) or disintegrations (alteration of structure with loss of identity). Such changes or disintegrations are not ascertained by the elements of the disturbing agent, as each disturbance will only trigger responses in the respective system, which in turn are determined by each system's own structure.
On an individual level, what this means is that human beings, as free and independent entities, are triggered to react by external stimuli or changes in circumstance. However, their own internal state and existing physical and mental capacities determine their responses to those triggers. On a much broader societal level, structural determinists believe that larger issues in society, especially those pertaining to minorities and subjugated communities, are predominantly assessed through existing structural conditions, making change of prevailing conditions difficult, and sometimes outright impossible. For example, the concept has been applied to the politics of race in the United States of America and other Western countries such as the United Kingdom and Australia, with structural determinists blaming structural factors for the prevalence of racism in these countries. Additionally, Marxists have conceptualized the writings of Karl Marx within the context of structural determinism as well. For example, Louis Althusser, a structural Marxist, argues that the state, in its political, economic, and legal structures, reproduces the discourse of capitalism, in turn allowing for the burgeoning of capitalistic structures.
Proponents of the notion highlight the usefulness of structural determinism in studying complicated issues related to race and gender, as it highlights the often gilded structural conditions that block meaningful change. Critics call it too rigid, reductionist and inflexible. Additionally, they criticize the notion for overemphasizing deterministic forces such as structure over the role of human agency and the ability of people to act. These critics argue that politicians, academics, and social activists have the capability to bring about significant change despite stringent structural conditions.
With free will
Philosophers have debated both the truth of determinism and the truth of free will. This creates four possible positions: compatibilism refers to the view that free will is, in some sense, compatible with determinism, while the three incompatibilist positions deny this possibility. The hard incompatibilists hold that free will is incompatible with both determinism and indeterminism, the libertarians that determinism does not hold and free will might exist, and the hard determinists that determinism does hold and free will does not exist.
The Dutch philosopher Baruch Spinoza was a determinist thinker, and argued that human freedom can be achieved through knowledge of the causes that determine our desires and affections. He defined human servitude as the state of bondage of anyone who is aware of their own desires but ignorant of the causes that determined them. The free or virtuous person, by contrast, becomes capable, through reason and knowledge, of being genuinely free, even while being "determined". For the Dutch philosopher, acting out of one's own internal necessity is genuine freedom, while being driven by exterior determinations is akin to bondage. Spinoza's thoughts on human servitude and liberty are respectively detailed in the fourth and fifth volumes of his work Ethics.
The standard argument against free will, according to philosopher J. J. C. Smart, focuses on the implications of determinism for free will. He suggests free will is denied whether determinism is true or not. He says that if determinism is true, all actions are predicted and no one is assumed to be free; however, if determinism is false, all actions are presumed to be random and as such no one seems free because they have no part in controlling what happens.
With the soul
Some determinists argue that materialism does not present a complete understanding of the universe, because while it can describe determinate interactions among material things, it ignores the minds or souls of conscious beings.
A number of positions can be delineated:
Immaterial souls are all that exist (idealism).
Immaterial souls exist and exert a non-deterministic causal influence on bodies (traditional free-will, interactionist dualism).
Immaterial souls exist but are part of a deterministic framework.
Immaterial souls exist, but exert no causal influence, free or determined (epiphenomenalism, occasionalism)
Immaterial souls do not exist – there is no mind-body dichotomy, and there is a materialistic explanation for intuitions to the contrary.
With ethics and morality
Another topic of debate is the implication that determinism has on morality.
Philosopher and incompatibilist Peter van Inwagen introduced this thesis, arguing that free will is required for moral judgments, as follows (a compact formalization is given after the list):
The moral judgment that X should not have been done implies that something else should have been done instead.
That something else should have been done instead implies that there was something else to do.
That there was something else to do, implies that something else could have been done.
That something else could have been done implies that there is free will.
If there is no free will to have done other than X, we cannot make the moral judgment that X should not have been done.
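Written as a chain of implications, the argument can be compacted as follows. The letters are ad hoc labels for the claims above (M: the moral judgment that X should not have been done holds, E: something else should have been done, D: there was something else to do, C: something else could have been done, F: free will exists); they are not van Inwagen's own notation.

```latex
\[
  M \rightarrow E,\qquad
  E \rightarrow D,\qquad
  D \rightarrow C,\qquad
  C \rightarrow F
\]
\[
  \therefore\; \neg F \rightarrow \neg M
  \quad\text{(contraposition of the chained implications)}
\]
```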
History
Determinism was developed by Greek philosophers during the 7th and 6th centuries BCE, notably the Pre-Socratic philosophers Heraclitus and Leucippus, later Aristotle, and mainly by the Stoics. Some of the main philosophers who have dealt with this issue are Marcus Aurelius, Omar Khayyám, Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David Hume, Baron d'Holbach (Paul Heinrich Dietrich), Pierre-Simon Laplace, Arthur Schopenhauer, William James, Friedrich Nietzsche, Albert Einstein, Niels Bohr, Ralph Waldo Emerson and, more recently, John Searle, Ted Honderich, and Daniel Dennett.
Mecca Chiesa notes that the probabilistic or selectionistic determinism of B. F. Skinner comprised a wholly separate conception of determinism that was not mechanistic at all. Mechanistic determinism assumes that every event has an unbroken chain of prior occurrences, but a selectionistic or probabilistic model does not.
Western tradition
In the West, some elements of determinism were expressed in Greece from the 6th century BCE by the Presocratics Heraclitus and Leucippus. The first full notions of determinism appear to originate with the Stoics, as part of their theory of universal causal determinism. The resulting philosophical debates, which involved the confluence of elements of Aristotelian ethics with Stoic psychology, led in the 1st–3rd centuries CE, in the works of Alexander of Aphrodisias, to the first recorded Western debate over determinism and freedom, an issue that is known in theology as the paradox of free will. The writings of Epictetus as well as middle Platonist and early Christian thought were instrumental in this development. The Jewish philosopher Moses Maimonides said of the deterministic implications of an omniscient god: "Does God know or does He not know that a certain individual will be good or bad? If thou sayest He knows, then it necessarily follows that [that] man is compelled to act as God knew beforehand he would act, otherwise God's knowledge would be imperfect."
Newtonian mechanics
Determinism in the West is often associated with Newtonian mechanics/physics, which depicts the physical matter of the universe as operating according to a set of fixed laws. The "billiard ball" hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established, the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplaces demon). In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results.
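A minimal sketch of this "fixed laws plus fixed initial conditions" picture in code: re-running the same deterministic integration from identical starting values necessarily reproduces the same final state. The dynamics and numbers below are arbitrary toy choices, not a model of any real system.

```python
# Toy illustration of Newtonian-style determinism: identical laws and
# identical initial conditions always yield identical outcomes.

def simulate(x, v, accel, dt=0.01, steps=1000):
    """Euler integration of one particle under a fixed acceleration law."""
    for _ in range(steps):
        v += accel(x) * dt
        x += v * dt
    return x, v

accel = lambda x: -9.81            # constant, gravity-like acceleration (toy)
run1 = simulate(x=0.0, v=20.0, accel=accel)
run2 = simulate(x=0.0, v=20.0, accel=accel)
print(run1 == run2)  # True: fixed laws + fixed initial conditions -> fixed outcome
```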
Whether or not it is all-encompassing in so doing, Newtonian mechanics deals only with caused events; for example, if an object begins in a known position and is hit dead on by an object with some known velocity, then it will be pushed straight toward another predictable point. If it goes somewhere else, the Newtonians argue, one must question one's measurements of the original position of the object, the exact direction of the striking object, gravitational or other fields that were inadvertently ignored, etc. Then, they maintain, repeated experiments and improvements in accuracy will always bring one's observations closer to the theoretically predicted results. When dealing with situations on an ordinary human scale, Newtonian physics has been successful. But it fails as velocities become some substantial fraction of the speed of light and when interactions at the atomic scale are studied. Before the discovery of quantum effects and other challenges to Newtonian physics, "uncertainty" was always a term that applied to the accuracy of human knowledge about causes and effects, and not to the causes and effects themselves.
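As a toy numerical illustration of this "billiard ball" picture, the following sketch advances an idealized, frictionless ball from exact initial conditions. The table dimensions, speeds, time step, and step count are arbitrary assumptions chosen only for demonstration; the point is simply that identical inputs always reproduce the identical final state.

def roll(x, y, vx, vy, width=1.27, height=2.54, dt=0.001, steps=5000):
    # Advance a point "ball" at constant speed, reflecting elastically
    # off the cushions of an idealized, frictionless table.
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        if x < 0.0:
            x, vx = -x, -vx
        elif x > width:
            x, vx = 2.0 * width - x, -vx
        if y < 0.0:
            y, vy = -y, -vy
        elif y > height:
            y, vy = 2.0 * height - y, -vy
    return x, y, vx, vy

# Identical initial conditions always reproduce the identical final state.
print(roll(0.3, 0.4, 0.7, -1.1))
print(roll(0.3, 0.4, 0.7, -1.1))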
Newtonian mechanics, like the physical theories that followed it, is the result of observations and experiments, and so it describes "how it all works" within a tolerance. However, earlier Western scientists believed that if any logical connections were found between an observed cause and effect, there must also be some absolute natural laws behind them. Belief in perfect natural laws driving everything, instead of just describing what we should expect, led to searching for a set of universal simple laws that rule the world. This movement significantly encouraged deterministic views in Western philosophy, as well as the related theological views of classical pantheism.
Eastern tradition
The idea that the entire universe is a deterministic system has been articulated in both Eastern and non-Eastern religions, philosophy, and literature.
The ancient Arabs who inhabited the Arabian Peninsula before the advent of Islam professed a widespread belief in fatalism (ḳadar) alongside a fearful consideration for the sky and the stars as divine beings, which they held to be ultimately responsible for every phenomenon that occurs on Earth and for the destiny of humankind. Accordingly, they shaped their entire lives in accordance with their interpretations of astral configurations and phenomena.

In the I Ching and philosophical Taoism, the ebb and flow of favorable and unfavorable conditions suggests the path of least resistance is effortless (see: Wu wei). In the philosophical schools of the Indian Subcontinent, the concept of karma deals with similar philosophical issues to the western concept of determinism. Karma is understood as a spiritual mechanism which causes the eternal cycle of birth, death, and rebirth (saṃsāra). Karma, either positive or negative, accumulates according to an individual's actions throughout their life, and at their death determines the nature of their next life in the cycle of Saṃsāra. Most major religions originating in India hold this belief to some degree, most notably Hinduism, Jainism, Sikhism, and Buddhism.
The views on the interaction of karma and free will are numerous, and diverge from each other. For example, in Sikhism, God's grace, gained through worship, can erase one's karmic debts, a belief which reconciles the principle of karma with a monotheistic god one must freely choose to worship. Jains believe in compatibilism, in which the cycle of Saṃsāra is a completely mechanistic process, occurring without any divine intervention. The Jains hold an atomic view of reality, in which particles of karma form the fundamental microscopic building material of the universe.
Ājīvika
In ancient India, the Ājīvika school of philosophy founded by Makkhali Gosāla (around 500 BCE), otherwise referred to as "Ājīvikism" in Western scholarship, upheld the Niyati ("Fate") doctrine of absolute fatalism or determinism, which negates the existence of free will and karma, and is therefore considered one of the nāstika or "heterodox" schools of Indian philosophy. The oldest descriptions of the Ājīvika fatalists and their founder Gosāla can be found both in the Buddhist and Jaina scriptures of ancient India. The predetermined fate of living beings and the impossibility of achieving liberation (moksha) from the eternal cycle of birth, death, and rebirth was the major distinctive philosophical and metaphysical doctrine of this heterodox school of Indian philosophy, counted among the other Śramaṇa movements that emerged in India during the Second urbanization (600–200 BCE).
Buddhism
Buddhist philosophy contains several concepts which some scholars describe as deterministic to various levels.
One concept which is argued to support a hard determinism is the idea of dependent origination, which claims that all phenomena (dharma) are necessarily caused by some other phenomenon, on which they can be said to be dependent, like links in a massive chain. In traditional Buddhist philosophy, this concept is used to explain the functioning of the cycle of saṃsāra; all actions exert a karmic force, which will manifest results in future lives. In other words, righteous or unrighteous actions in one life will necessarily cause good or bad responses in another.

Another Buddhist concept which many scholars perceive to be deterministic is the idea of non-self, or anatta. In Buddhism, attaining enlightenment involves one realizing that in humans there is no fundamental core of being which can be called the "soul", and that humans are instead made of several constantly changing factors which bind them to the cycle of Saṃsāra.

Some scholars argue that the concept of non-self necessarily disproves the ideas of free will and moral culpability. If there is no autonomous self, in this view, and all events are necessarily and unchangeably caused by others, then no type of autonomy can be said to exist, moral or otherwise. However, other scholars disagree, claiming that the Buddhist conception of the universe allows for a form of compatibilism. Buddhism perceives reality occurring on two different levels, the ultimate reality which can only be truly understood by the enlightened, and the illusory and false material reality. Therefore, Buddhism perceives free will as a notion belonging to material reality, while concepts like non-self and dependent origination belong to the ultimate reality; the transition between the two can be truly understood, Buddhists claim, by one who has attained enlightenment.
Modern scientific perspective
Generative processes
Although it was once thought by scientists that any indeterminism in quantum mechanics occurred at too small a scale to influence biological or neurological systems, there is indication that nervous systems are influenced by quantum indeterminism due to chaos theory. It is unclear what implications this has for the problem of free will given various possible reactions to the problem in the first place. Many biologists do not grant determinism: Christof Koch, for instance, argues against it, and in favour of libertarian free will, by making arguments based on generative processes (emergence). Other proponents of emergentist or generative philosophy, cognitive sciences, and evolutionary psychology, argue that a certain form of determinism (not necessarily causal) is true. They suggest instead that an illusion of free will is experienced due to the generation of infinite behaviour from the interaction of a finite, deterministic set of rules and parameters. Thus the unpredictability of the emerging behaviour from deterministic processes leads to a perception of free will, even though free will as an ontological entity does not exist.
As an illustration, the strategy board games chess and Go have rigorous rules in which no information (such as cards' face values) is hidden from either player and no random events (such as dice rolls) happen within the game. Yet chess, and especially Go with its extremely simple deterministic rules, can still have an extremely large number of unpredictable moves. When chess is simplified to 7 or fewer pieces, however, endgame tables are available that dictate which moves to play to achieve a perfect game. This implies that, given a less complex environment (with the original 32 pieces reduced to 7 or fewer pieces), a perfectly predictable game of chess is possible. In this scenario, the winning player can announce that a checkmate will happen within a given number of moves, assuming a perfect defense by the losing player, or fewer moves if the defending player chooses sub-optimal moves as the game progresses into its inevitable, predicted conclusion. By this analogy, it is suggested, the experience of free will emerges from the interaction of finite rules and deterministic parameters that generate nearly infinite and practically unpredictable behavioural responses. In theory, if all these events could be accounted for, and there were a known way to evaluate these events, the seemingly unpredictable behaviour would become predictable. Another hands-on example of generative processes is John Horton Conway's playable Game of Life (see the sketch below). Nassim Taleb is wary of such models, and coined the term "ludic fallacy."
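To make the generative idea concrete, here is a minimal sketch of Conway's Game of Life on an unbounded grid. The "glider" seed is a standard illustrative pattern, not something taken from the text; the point is that trivially simple deterministic rules can produce behaviour that is hard to foresee without actually running them.

from collections import Counter

def step(live):
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a "glider" seed
for _ in range(4):
    cells = step(cells)
# After four fully deterministic steps the glider reappears shifted by
# (1, 1); nothing in the rules makes this behaviour obvious in advance.
print(sorted(cells))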
Compatibility with the existence of science
Certain philosophers of science argue that, while causal determinism (in which everything including the brain/mind is subject to the laws of causality) is compatible with minds capable of science, fatalism and predestination are not. These philosophers make the distinction that causal determinism means that each step is determined by the step before and therefore allows sensory input from observational data to determine what conclusions the brain reaches, while fatalism, in which the steps between do not connect an initial cause to the results, would make it impossible for observational data to correct false hypotheses. This is often combined with the argument that if the brain had fixed views and the arguments were mere after-constructs with no causal effect on the conclusions, science would have been impossible and the use of arguments would have been a meaningless waste of energy with no persuasive effect on brains with fixed views.
Mathematical models
Many mathematical models of physical systems are deterministic. This is true of most models involving differential equations (notably, those measuring rate of change over time). Mathematical models that are not deterministic because they involve randomness are called stochastic. Because of sensitive dependence on initial conditions, some deterministic models may appear to behave non-deterministically; in such cases, a deterministic interpretation of the model may not be useful due to numerical instability and a finite amount of precision in measurement. Such considerations can motivate the use of a stochastic model even though the underlying system is governed by deterministic equations.
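A minimal sketch of sensitive dependence on initial conditions uses the logistic map x -> r*x*(1 - x) with r = 4, a standard chaotic parameter choice; the starting values here are arbitrary illustrations, not drawn from the text.

def logistic_trajectory(x, r=4.0, steps=50):
    # Iterate the deterministic logistic map x -> r * x * (1 - x).
    out = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturbation in the 10th decimal
# Both runs are perfectly deterministic, yet they diverge until they are
# effectively uncorrelated, so finite measurement precision defeats
# long-range prediction in practice.
for t in (0, 10, 30, 50):
    print(t, a[t], b[t])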
Quantum and classical mechanics
Day-to-day physics
Since the beginning of the 20th century, quantum mechanics—the physics of the extremely small—has revealed previously concealed aspects of events. Before that, Newtonian physics—the physics of everyday life—dominated. Taken in isolation (rather than as an approximation to quantum mechanics), Newtonian physics depicts a universe in which objects move in perfectly determined ways. At the scale where humans exist and interact with the universe, Newtonian mechanics remains useful, and makes relatively accurate predictions (e.g. calculating the trajectory of a bullet). But whereas in theory, absolute knowledge of the forces accelerating a bullet would produce an absolutely accurate prediction of its path, modern quantum mechanics casts reasonable doubt on this main thesis of determinism.
Quantum realm
Quantum physics works differently in many ways from Newtonian physics. Physicist Aaron D. O'Connell explains that understanding our universe, at such small scales as atoms, requires a different logic than day-to-day life does. O'Connell does not deny that it is all interconnected: the scale of human existence ultimately does emerge from the quantum scale. O'Connell argues that we must simply use different models and constructs when dealing with the quantum world. Quantum mechanics is the product of a careful application of the scientific method, logic and empiricism. The Heisenberg uncertainty principle is frequently confused with the observer effect. The uncertainty principle actually describes how precisely we may measure the position and momentum of a particle at the same time—if we increase the accuracy in measuring one quantity, we are forced to lose accuracy in measuring the other. "These uncertainty relations give us that measure of freedom from the limitations of classical concepts which is necessary for a consistent description of atomic processes."
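In the standard textbook notation (the symbols below follow the usual convention and are not introduced by this article), the position–momentum form of the uncertainty principle reads

\Delta x \, \Delta p \geq \frac{\hbar}{2}

where \Delta x and \Delta p are the standard deviations of position and momentum measurements over identically prepared systems, and \hbar is the reduced Planck constant; shrinking one spread forces the other to grow.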
This is where statistical mechanics comes into play, and where physicists begin to require rather unintuitive mental models: a particle's path simply cannot be exactly specified in its full quantum description. "Path" is a classical, practical attribute in our everyday life, but one that quantum particles do not meaningfully possess. The probabilities discovered in quantum mechanics do nevertheless arise from measurement (of the perceived path of the particle). As Stephen Hawking explains, the result is not traditional determinism, but rather determined probabilities. In some cases, a quantum particle may indeed trace an exact path, and the probability of finding the particle in that path is one (certain to be true). In fact, as far as prediction goes, the quantum development is at least as predictable as the classical motion, but the key is that it describes wave functions that cannot be easily expressed in ordinary language. As far as the thesis of determinism is concerned, these probabilities, at least, are quite determined. These findings from quantum mechanics have found many applications, and allow us to build transistors and lasers. Put another way: personal computers, Blu-ray players and the Internet all work because humankind discovered the determined probabilities of the quantum world.

On the topic of predictable probabilities, the double-slit experiments are a popular example. Photons are fired one-by-one through a double-slit apparatus at a distant screen. They do not arrive at any single point, nor even at the two points lined up with the slits (the way it might be expected of bullets fired by a fixed gun at a distant target). Instead, the light arrives in varying concentrations at widely separated points, and the distribution of its collisions with the target can be calculated reliably. In that sense the behavior of light in this apparatus is deterministic, but there is no way to predict where in the resulting interference pattern any individual photon will make its contribution (although, there may be ways to use weak measurement to acquire more information without violating the uncertainty principle).
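As a numerical illustration of "determined probabilities" (the slit separation, slit width, wavelength, and angular range below are arbitrary assumptions, not the parameters of any particular experiment), one can compute the standard far-field two-slit intensity pattern exactly, while each individual photon's arrival point remains only a sample from it:

import math, random

def intensity(theta, d=5e-5, a=1e-5, lam=5e-7):
    # Fraunhofer two-slit intensity: a single-slit envelope multiplied
    # by the two-slit interference term (separation d, width a,
    # wavelength lam, all in metres).
    beta = math.pi * a * math.sin(theta) / lam
    alpha = math.pi * d * math.sin(theta) / lam
    envelope = 1.0 if beta == 0.0 else (math.sin(beta) / beta) ** 2
    return envelope * math.cos(alpha) ** 2

# The distribution over arrival angles is fixed and exactly computable...
thetas = [i * 1.0e-4 for i in range(-500, 501)]
weights = [intensity(t) for t in thetas]

# ...but each individual photon's arrival point is only a sample from it.
print(random.choices(thetas, weights=weights, k=5))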
Some (including Albert Einstein) have argued that the inability to predict any more than probabilities is simply due to ignorance. The idea is that, beyond the conditions and laws that can be observed or deduced, there are also hidden factors or "hidden variables" that determine absolutely in which order photons reach the detector screen. They argue that the course of the universe is absolutely determined, but that humans are screened from knowledge of the determinative factors. So, they say, it only appears that things proceed in a merely probabilistically determinative way. In actuality, they proceed in an absolutely deterministic way.
John S. Bell criticized Einstein's work in his famous Bell's theorem, which, under a strict set of assumptions, demonstrates that quantum mechanics can make statistical predictions that would be violated if local hidden variables really existed. A number of experiments have tried to verify such predictions, and so far they do not appear to be violated. Current experiments continue to verify the result, including the 2015 "Loophole Free Test" that plugged all known sources of error and the 2017 "Cosmic Bell Test" experiment that used cosmic data streaming from different directions toward the Earth, precluding the possibility the sources of data could have had prior interactions.
Bell's theorem has been criticized from the perspective of its strict set of assumptions. A foundational assumption of quantum mechanics is the principle of locality. To abandon this assumption would require the construction of a non-local hidden variable theory. Therefore, it is possible to augment quantum mechanics with non-local hidden variables to achieve a deterministic theory that is in agreement with experiment. An example is the Bohm interpretation of quantum mechanics. Bohm's interpretation, though, violates special relativity, and it is highly controversial whether or not it can be reconciled without giving up on determinism.
Another foundational assumption of quantum mechanics is that of free will, which has been argued to be foundational to the scientific method as a whole. Bell acknowledged that abandoning this assumption would both allow for the maintenance of determinism as well as locality. This perspective is known as superdeterminism, and is defended by some physicists such as Sabine Hossenfelder and Tim Palmer.

More advanced variations on these arguments include quantum contextuality, by Bell, Simon B. Kochen and Ernst Specker, which argues that hidden variable theories cannot be "sensible", meaning that the values of the hidden variables inherently depend on the devices used to measure them.
This debate is relevant because there are possibly specific situations in which the arrival of an electron at a screen at a certain point and time would trigger one event, whereas its arrival at another point would trigger an entirely different event (e.g. see Schrödinger's cat – a thought experiment used as part of a deeper debate).
Thus, quantum physics casts reasonable doubt on the traditional determinism of classical, Newtonian physics in so far as reality does not seem to be absolutely determined. This was the subject of the famous Bohr–Einstein debates between Einstein and Niels Bohr and there is still no consensus.

Adequate determinism (see Varieties, above) is the reason that Stephen Hawking calls libertarian free will "just an illusion".
See also
References
Notes
Bibliography
Daniel Dennett (2003) Freedom Evolves. Viking Penguin.
John Earman (2007) "Aspects of Determinism in Modern Physics" in Butterfield, J., and Earman, J., eds., Philosophy of Physics, Part B. North Holland: 1369–1434.
George Ellis (2005) "Physics and the Real World", Physics Today.
Epstein, J.M. (1999). "Agent Based Models and Generative Social Science". Complexity. IV (5): 41–60. Bibcode:1999Cmplx...4e..41E. CiteSeerX 10.1.1.118.546. doi:10.1002/(sici)1099-0526(199905/06)4:5<41::aid-cplx9>3.0.co;2-f.
Epstein, J.M. and Axtell R. (1996) Growing Artificial Societies – Social Science from the Bottom. MIT Press.
Harris, James A. (2005) Of Liberty and Necessity: The Free Will Debate in Eighteenth-Century British Philosophy. Clarendon Press.
Kenrick, D. T.; Li, N. P.; Butner, J. (2003). "Dynamical evolutionary psychology: Individual decision rules and emergent social norms" (PDF). Psychological Review. 110 (1): 3–28. CiteSeerX 10.1.1.526.5218. doi:10.1037/0033-295x.110.1.3. PMID 12529056. S2CID 43306158.
Albert Messiah, Quantum Mechanics, English translation by G. M. Temmer of Mécanique Quantique, 1966, John Wiley and Sons, vol. I, chapter IV, section III.
Ernest Nagel (3 March 1960). "Determinism in history". Philosophy and Phenomenological Research. 20 (3): 291–317. doi:10.2307/2105051. JSTOR 2105051.
John T Roberts (2006) </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Retinal haemorrhage and explain in detail? [/INST] Retinal hemorrhage (UK English: retinal haemorrhage) is a disorder of the eye in which bleeding occurs in the retina, the light sensitive tissue, located on the back wall of the eye. There are photoreceptor cells in the retina called rods and cones, which transduce light energy into nerve signals that can be processed by the brain to form visual images. Retinal hemorrhage is strongly associated with child abuse in infants and young children and often leaves such abused infants permanently blind. In older children and adults, retinal hemorrhage can be caused by several medical conditions such as hypertension, retinal vein occlusion (a blockage of a retinal vein), anemia, leukemia or diabetes.
Signs and symptoms
At the early stage, a retinal hemorrhage may not show any symptom at all.
Some symptoms may include:
Seeing floaters in the vision
Seeing cobwebs in the vision
Seeing haze or shadows
Distorted vision
Rapid flashes of light in peripheral vision
Red tint to vision
Blurriness
Sudden blindness
Headache
Causes
In adults, retinal hemorrhages are largely spontaneous, secondary to chronic medical conditions such as hypertension. They also commonly occur in high altitude climbers, most likely due to the effects of systemic hypoxia on the eye. Risk is correlated with the maximum altitude reached, duration of exposure to high altitude conditions, and climb rate.

In infants, retinal hemorrhages (RH) are highly associated with child abuse. The incidence of RH in abusive head trauma is approximately 85%. In a comprehensive review of 62 studies comprising 998 children, 504 of whom were abused, RH were found in 78% of cases of abusive head trauma (AHT) versus 5% of the cases of non-abusive head trauma. In a child with head trauma and RH, the odds ratio that this is AHT is 14.7 (95% CI 6.39, 33.62) and the probability of abuse is 91% </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Employment and explain in detail? [/INST] Employment is a relationship between two parties regulating the provision of paid labour services. Usually based on a contract, one party, the employer, which might be a corporation, a not-for-profit organization, a co-operative, or any other entity, pays the other, the employee, in return for carrying out assigned work. Employees work in return for wages, which can be paid on the basis of an hourly rate, by piecework or an annual salary, depending on the type of work an employee does, the prevailing conditions of the sector and the bargaining power between the parties. Employees in some sectors may receive gratuities, bonus payments or stock options. In some types of employment, employees may receive benefits in addition to payment. Benefits may include health insurance, housing, disability insurance. Employment is typically governed by employment laws, organisation or legal contracts.
Employees and employers
An employee contributes labour and expertise to an endeavor of an employer or of a person conducting a business or undertaking (PCB) and is usually hired to perform specific duties which are packaged into a job. In a corporate context, an employee is a person who is hired to provide services to a company on a regular basis in exchange for compensation and who does not provide these services as part of an independent business.
Independent contractor
An issue that arises in most companies, especially those in the gig economy, is the classification of workers. Many workers who fulfill gigs are hired as independent contractors.
To categorize a worker as an independent contractor rather than an employee: first, the independent contractor must agree with the client on what the finished work product will be, and the contractor then controls the means and manner of achieving the desired outcome. Second, an independent contractor offers services to the public at large, not just to one business, and is responsible for disbursing payments from the client, paying unreimbursed expenses, and providing his or her own tools to complete the job. Third, the relationship of the parties is often evidenced by a written agreement that specifies that the worker is an independent contractor and is not entitled to employee benefits; the services provided by the worker are not key to the business; and the relationship is not permanent.

As a general principle of employment law, in the United States, there is a difference between an agent and an independent contractor. The default status of a worker is an employee unless specific guidelines are met, which can be determined by the ABC test. Thus, clarifying whether someone who performs work is an independent contractor or an employee from the beginning, and treating them accordingly, can save a company from trouble later on.
Provided key circumstances apply, including that the worker is paid regularly, follows set hours of work, is supplied with tools by the employer, is closely monitored by the employer, acts on behalf of the employer, and only works for one employer at a time, they are considered an employee, and the employer will generally be liable for their actions and be obliged to give them benefits. Similarly, the employer is the owner of any invention created by an employee “hired to invent,” even in the absence of an assignment of inventions. In contrast, a company commissioning a work by an independent contractor will not own the copyright unless the company secures either a written contract stating that it is a “work made for hire” or a written assignment of the copyright. In order to stay protected and avoid lawsuits, an employer has to be aware of that distinction.
Employer–worker relationship
Employer and managerial control within an organization rests at many levels and has important implications for staff and productivity alike, with control forming the fundamental link between desired outcomes and actual processes. Employers must balance interests such as decreasing wage constraints with a maximization of labor productivity in order to achieve a profitable and productive employment relationship.
Labor acquisition / hiring
The main ways for employers to find workers and for people to find employers are via job listings in newspapers (via classified advertising) and online, also called job boards. Employers and job seekers also often find each other via professional recruitment consultants, who receive a commission from the employer to find, screen and select suitable candidates. However, a study has shown that such consultants may not be reliable when they fail to use established principles in selecting employees. A more traditional approach is with a "Help Wanted" sign in the establishment (usually hung on a window or door or placed on a store counter). Evaluating different candidates can be quite laborious, but well-designed assessments that analyze skills relevant to the field can make the process more reliable. Employer and potential employee commonly take the additional step of getting to know each other through the process of a job interview.
Training and development
Training and development refers to the employer's effort to equip a newly hired employee with the necessary skills to perform at the job, and to help the employee grow within the organization. An appropriate level of training and development helps to improve employees' job satisfaction.
Remuneration
There are many ways that employees are paid, including by hourly wages, by piecework, by yearly salary, or by gratuities (with the latter often being combined with another form of payment). In sales jobs and real estate positions, the employee may be paid a commission, a percentage of the value of the goods or services that they have sold. In some fields and professions (e.g., executive jobs), employees may be eligible for a bonus if they meet certain targets. Some executives and employees may be paid in shares or stock options, a compensation approach that has the added benefit, from the company's point of view, of helping to align the interests of the compensated individual with the performance of the company.
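As a small arithmetic sketch of the pay schemes just described (all rates and amounts below are invented for illustration, not drawn from any source), gross pay under the three common schemes can be computed as follows:

def hourly_pay(hours, rate):
    return hours * rate

def piecework_pay(units, rate_per_unit):
    return units * rate_per_unit

def commission_pay(sales_value, base, pct):
    return base + sales_value * pct

# Hypothetical figures only: 160 hours at 20.0/hour, 900 units at
# 3.5/unit, and a 1,000 base plus 5% commission on 50,000 in sales.
print(hourly_pay(160, 20.0))
print(piecework_pay(900, 3.5))
print(commission_pay(50000, 1000, 0.05))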
Under the faithless servant doctrine, a doctrine under the laws of a number of states in the United States, and most notably New York State law, an employee who acts unfaithfully towards his employer must forfeit all of the compensation he received during the period of his disloyalty.
Employee benefits
Employee benefits are various non-wage compensation provided to employees in addition to their wages or salaries. The benefits can include: housing (employer-provided or employer-paid), group insurance (health, dental, life etc.), disability income protection, retirement benefits, daycare, tuition reimbursement, sick leave, vacation (paid and non-paid), social security, profit sharing, funding of education, and other specialized benefits. In some cases, such as with workers employed in remote or isolated regions, the benefits may include meals. Employee benefits can improve the relationship between employee and employer and lower staff turnover.
Organizational justice
Organizational justice is an employee's perception and judgement of an employer's treatment in the context of fairness or justice. The resulting actions taken to influence the employee-employer relationship are also a part of organizational justice.
Workforce organizing
Employees can organize into trade or labor unions, which represent the workforce to collectively bargain with the management of organizations about working and contractual conditions and services.
Ending employment
Usually, either an employee or employer may end the relationship at any time, often subject to a certain notice period. This is referred to as at-will employment. The contract between the two parties specifies the responsibilities of each when ending the relationship and may include requirements such as notice periods, severance pay, and security measures. A contract forbidding an employee from leaving their employment, under penalty of a surety bond, is referred to as an employment bond. In some professions, notably teaching, the civil service, university professorships, and some orchestra jobs, some employees may have tenure, which means that they cannot be dismissed at will. Another type of termination is a layoff.
Wage labor
Wage labor is the socioeconomic relationship between a worker and an employer, where the worker sells their labor under a formal or informal employment contract. These transactions usually occur in a labor market where wages are market-determined. In exchange for the wages paid, the work product generally becomes the undifferentiated property of the employer, except for special cases such as the vesting of intellectual property patents in the United States where patent rights are usually vested in the original personal inventor. A wage laborer is a person whose primary means of income is from the selling of his or her labor in this way.

In modern mixed economies such as that of the OECD countries, it is currently the dominant form of work arrangement. Although most work occurs following this structure, the wage work arrangements of CEOs, professional employees, and professional contract workers are sometimes conflated with class assignments, so that "wage labor" is considered to apply only to unskilled, semi-skilled or manual labor.
Wage slavery
Wage labor, as institutionalized under today's market economic systems, has been criticized, especially by socialists, using the pejorative term wage slavery. Socialists draw parallels between the trade of labor as a commodity and slavery. Cicero is also known to have suggested such parallels.

The American philosopher John Dewey posited that until "industrial feudalism" is replaced by "industrial democracy", politics will be "the shadow cast on society by big business". Thomas Ferguson has postulated in his investment theory of party competition that the undemocratic nature of economic institutions under capitalism causes elections to become occasions when blocs of investors coalesce and compete to control the state plus cities.
Employment contract
Australia
Australian employment has been governed by the Fair Work Act since 2009.
Bangladesh
Bangladesh Association of International Recruiting Agencies (BAIRA) is a national-level association with an international reputation for co-operation on, and the welfare of, the migrant workforce; it has approximately 1,200 member agencies working in collaboration with, and with support from, the Government of Bangladesh.
Canada
In the Canadian province of Ontario, formal complaints can be brought to the Ministry of Labour. In the province of Quebec, grievances can be filed with the Commission des normes du travail.
Germany
Two prominent examples of work and employment contracts in Germany are the Werksvertrag and the Arbeitsvertrag, which is a form of Dienstleistungsvertrag (service-oriented contract). An Arbeitsvertrag can also be temporary, whereby a temporary worker works under Zeitarbeit or Leiharbeit. Another employment setting is Arbeitnehmerüberlassung (ANÜ).
India
India has options for a fixed term contract or a permanent contract. Both contracts are entitled to minimum wages, fixed working hours and social security contributions.
Pakistan
Pakistan has Contract Labor, Minimum Wage and Provident Funds Acts. Contract labor in Pakistan must be paid the minimum wage, and certain facilities are to be provided to labor. However, the Acts are not yet fully implemented.
Philippines
In the Philippines, employment is regulated by the Department of Labor and Employment.
Sweden
According to Swedish law, there are three types of employment.
Test employment (swe: Provanställning), where the employer hires a person for a test period of 6 months maximum. The employment can be ended at any time without giving any reason. This type of employment can be offered only once per employer and employee combination. Usually, a time-limited or normal employment is offered after a test employment.
Time limited employment (swe: Tidsbegränsad anställning). The employer hires a person for a specified time. Usually, the employment is extended for a new period. The total maximum is two years per employer and employee combination; after that, it automatically counts as a normal employment.
Normal employment (swe: Tillsvidareanställning / Fast anställning), which has no time limit (except for retirement etc.). It can still be ended for two reasons: personal reasons, where employment ends immediately but only on strong grounds such as crime; or lack of work tasks (swe: Arbetsbrist), a cancellation of employment, usually because of poor income for the company. There is a cancellation period of 1–6 months, and rules for how to select employees: basically, those with the shortest employment time shall be cancelled first.

There are no laws about minimum salary in Sweden. Instead, there are agreements between employer organizations and trade unions about minimum salaries, and other employment conditions.
There is a type of employment contract which is common but not regulated in law, and that is Hour employment (swe: Timanställning), which can be Normal employment (unlimited), but the work time is unregulated and decided per immediate need basis. The employee is expected to be answering the phone and come to work when needed, e.g. when someone is ill and absent from work. They will receive salary only for actual work time and can in reality be fired for no reason by not being called anymore. This type of contract is common in the public sector.
United Kingdom
In the United Kingdom, employment contracts are categorized by the government into the following types:
Fixed-term contract: lasts for a certain length of time that is set in advance, ends when a specific task is completed, or ends when a specific event takes place.
Full-time or part-time contract: has no defined length of time, can be terminated by either party, may be to accomplish a specific task, and may specify a number of hours.
Agency staff
Freelancers, Consultants and Contractors
Zero-hour contracts
United States
For purposes of U.S. federal income tax withholding, 26 U.S.C. § 3401(c) provides a definition for the term "employee" specific to chapter 24 of the Internal Revenue Code:
"For purposes of this chapter, the term "employee" includes an officer, employee, or elected official of the United States, a State, or any political subdivision thereof, or the District of Columbia, or any agency or instrumentality of any one or more of the foregoing. The term "employee" also includes an officer of a corporation." This definition does not exclude all those who are commonly known as employees. "Similarly, Latham’s instruction which indicated that under 26 U.S.C. § 3401(c) the category of ‘employee’ does not include privately employed wage earners is a preposterous reading of the statute. It is obvious that within the context of both statutes the word ‘includes’ is a term of enlargement not of limitation, and the reference to certain entities or categories is not intended to exclude all others."Employees are often contrasted with independent contractors, especially when there is dispute as to the workers entitlement to have matching taxes paid, workers compensation, and unemployment insurance benefits. However, in September 2009, the court case of Brown v. J. Kaz, Inc. ruled that independent contractors are regarded as employees for the purpose of discrimination laws if they work for the employer on a regular basis, and said employer directs the time, place, and manner of employment.In non-union work environments, in the United States, unjust termination complaints can be brought to the United States Department of Labor.Labor unions are legally recognized as representatives of workers in many industries in the United States. Their activity today centers on collective bargaining over wages, benefits, and working conditions for their membership, and on representing their members in disputes with management over violations of contract provisions. Larger unions also typically engage in lobbying activities and electioneering at the state and federal level.Most unions in America are aligned with one of two larger umbrella organizations: the AFL–CIO created in 1955, and the Change to Win Federation which split from the AFL–CIO in 2005. Both advocate policies and legislation on behalf of workers in the United States and Canada, and take an active role in politics. The AFL–CIO is especially concerned with global trade issues.American business theorist Jeffrey Pfeffer posits that contemporary employment practices and employer commonalities in the United States, including toxic working environments, job insecurity, long hours and increased performance pressure from management, are responsible for 120,000 excess deaths annually, making the workplace the fifth leading cause of death in the United States.
Age-related issues
Younger age workers
Young workers are at higher risk for occupational injury and face certain occupational hazards at a higher rate; this is generally due to their employment in high-risk industries. For example, in the United States, young people are injured at work at twice the rate of their older counterparts. These workers are also at higher risk for motor vehicle accidents at work, due to less work experience, a lower use of seat belts, and higher rates of distracted driving. To mitigate this risk, those under the age of 17 are restricted from certain types of driving, including transporting people and goods under certain circumstances.

High-risk industries for young workers include agriculture, restaurants, waste management, and mining. In the United States, those under the age of 18 are restricted from certain jobs that are deemed dangerous under the Fair Labor Standards Act.

Youth employment programs are most effective when they include both theoretical classroom training and hands-on training with work placements.

In the discussion of employment among younger workers, youth unemployment has also been monitored. Youth unemployment rates tend to be higher than the adult rates in every country in the world.
Older age workers
Those older than the statutory defined retirement age may continue to work, either out of enjoyment or necessity. However, depending on the nature of the job, older workers may need to transition into less-physical forms of work to avoid injury. Working past retirement age also has positive effects, because it gives a sense of purpose and allows people to maintain social networks and activity levels. Older workers are often found to be discriminated against by employers.
Working poor
Employment is no guarantee of escaping poverty: the International Labour Organization (ILO) estimates that as many as 40% of workers are poor, not earning enough to keep their families above the $2 a day poverty line. For instance, in India most of the chronically poor are wage earners in formal employment, because their jobs are insecure and low paid and offer no chance to accumulate wealth to avoid risks. According to the UNRISD, increasing labor productivity appears to have a negative impact on job creation: in the 1960s, a 1% increase in output per worker was associated with a reduction in employment growth of 0.07%; by the first decade of this century the same productivity increase implies reduced employment growth by 0.54%. Both increased employment opportunities and increased labor productivity (as long as it also translates into higher wages) are needed to tackle poverty. Increases in employment without increases in productivity lead to a rise in the number of "working poor", which is why some experts are now promoting the creation of "quality" and not "quantity" in labor market policies. This approach does highlight how higher productivity has helped reduce poverty in East Asia, but the negative impact is beginning to show. In Vietnam, for example, employment growth has slowed while productivity growth has continued. Furthermore, productivity increases do not always lead to increased wages, as can be seen in the United States, where the gap between productivity and wages has been rising since the 1980s.

Researchers at the Overseas Development Institute argue that there are differences across economic sectors in creating employment that reduces poverty. 24 instances of growth were examined, in which 18 reduced poverty. This study showed that other sectors were just as important in reducing unemployment, such as manufacturing. The services sector is most effective at translating productivity growth into employment growth. Agriculture provides a safety net for jobs and an economic buffer when other sectors are struggling.
Models of the employment relationship
Scholars conceptualize the employment relationship in various ways. A key assumption is the extent to which the employment relationship necessarily includes conflicts of interests between employers and employees, and the form of such conflicts. In economic theorizing, the labor market mediates all such conflicts such that employers and employees who enter into an employment relationship are assumed to find this arrangement in their own self-interest. In human resource management theorizing, employers and employees are assumed to have shared interests (or a unity of interests, hence the label “unitarism”). Any conflicts that exist are seen as a manifestation of poor human resource management policies or interpersonal clashes such as personality conflicts, both of which can and should be managed away. From the perspective of pluralist industrial relations, the employment relationship is characterized by a plurality of stakeholders with legitimate interests (hence the label “pluralism”), and some conflicts of interests are seen as inherent in the employment relationship (e.g., wages v. profits). Lastly, the critical paradigm emphasizes antagonistic conflicts of interests between various groups (e.g., the competing capitalist and working classes in a Marxist framework) that are part of a deeper social conflict of unequal power relations. As a result, there are four common models of employment:
Mainstream economics: employment is seen as a mutually advantageous transaction in a free market between self-interested legal and economic equals
Human resource management (unitarism): employment is a long-term partnership of employees and employers with common interests
Pluralist industrial relations: employment is a bargained exchange between stakeholders with some common and some competing economic interests and unequal bargaining power due to imperfect labor markets
Critical industrial relations: employment is an unequal power relation between competing groups that is embedded in and inseparable from systemic inequalities throughout the socio-politico-economic system.

These models are important because they help reveal why individuals hold differing perspectives on human resource management policies, labor unions, and employment regulation. For example, human resource management policies are seen as dictated by the market in the first view, as essential mechanisms for aligning the interests of employees and employers and thereby creating profitable companies in the second view, as insufficient for looking out for workers’ interests in the third view, and as manipulative managerial tools for shaping the ideology and structure of the workplace in the fourth view.
Academic literature
Literature on the employment impact of economic growth, and on how growth is associated with employment at a macro, sector and industry level, was aggregated in 2013.

Researchers found evidence to suggest growth in manufacturing and services has a good impact on employment. They found GDP growth in agriculture to have a limited impact on employment, but that value-added growth had a relatively larger impact. They also aggregated the impact on job creation by industries/economic activities, as well as the extent of the body of evidence and the key studies. For extractives, they again found extensive evidence suggesting growth in the sector has limited impact on employment. In textiles, however, although evidence was low, studies suggest growth there positively contributed to job creation. In agri-business and food processing, they found the impact of growth to be positive.

They found that most available literature focuses somewhat on OECD and middle-income countries, where the impact of economic growth on employment has been shown to be positive. The researchers didn't find sufficient evidence to conclude any impact of growth on employment in LDCs: despite some studies pointing to a positive impact, others point to limitations. They recommended that complementary policies are necessary to ensure economic growth's positive impact on LDC employment. With trade, industry and investment, they only found limited evidence of positive impact on employment from industrial and investment policies; for others, while large bodies of evidence do exist, the exact impact remains contested.

Researchers have also explored the relationship between employment and illicit activities. Using evidence from Africa, a research team found that a program for Liberian ex-fighters reduced work hours on illicit activities. The employment program also reduced interest in mercenary work in nearby wars. The study concludes that while the use of capital inputs or cash payments for peaceful work created a reduction in illicit activities, the impact of training alone is rather low.
Globalization and employment relations
The balance of economic efficiency and social equity is the ultimate debate in the field of employment relations. Meeting the needs of the employer by generating profits to establish and maintain economic efficiency, while maintaining a balance with the employee and creating social equity that benefits the worker so that he or she can fund and enjoy healthy living, proves to be a continuously revolving issue in westernized societies.

Globalization has affected these issues by creating certain economic factors that allow or disallow various employment issues. Economist Edward Lee (1996) studies the effects of globalization and summarizes the four major points of concern that affect employment relations:
International competition, from the newly industrialized countries, will cause unemployment growth and increased wage disparity for unskilled workers in industrialized countries. Imports from low-wage countries exert pressure on the manufacturing sector in industrialized countries and foreign direct investment (FDI) is attracted away from the industrialized nations, towards low-waged countries.
Economic liberalization will result in unemployment and wage inequality in developing countries. This happens as job losses in uncompetitive industries outstrip job opportunities in new industries.
Workers will be forced to accept worsening wages and conditions, as a global labor market results in a “race to the bottom”. Increased international competition creates a pressure to reduce the wages and conditions of workers.
Globalization reduces the autonomy of the nation state. Capital is increasingly mobile and the ability of the state to regulate economic activity is reduced.

What also results from Lee's (1996) findings is that in industrialized countries an average of almost 70 per cent of workers are employed in the service sector, most of which consists of non-tradable activities. As a result, workers are forced to become more skilled and develop sought-after trades, or find other means of survival. Ultimately this is a result of changes and trends of employment, an evolving workforce, and globalization, represented by a more skilled and increasingly diverse labor force that is growing in non-standard forms of employment (Markey, R. et al. 2006).
Alternatives
Subcultures
Various youth subcultures have been associated with not working, such as the hippie subculture in the 1960s and 1970s (which endorsed the idea of "dropping out" of society) and the punk subculture.
Post-secondary education
One of the alternatives to work is engaging in post-secondary education at a college, university or professional school. One of the major costs of obtaining a post-secondary education is the opportunity cost of forgone wages due to not working. At times when jobs are hard to find, such as during recessions, unemployed individuals may decide to get post-secondary education, because there is less of an opportunity cost.
Social assistance
In some countries, individuals who are not working can receive social assistance support (e.g., welfare or food stamps) to enable them to rent housing, buy food, repair or replace household goods, maintenance of children and observe social customs that require financial expenditure.
Volunteerism
Workers who are not paid wages, such as volunteers who perform tasks for charities, hospitals or not-for-profit organizations, are generally not considered employed. One exception to this is an internship, an employment situation in which the worker receives training or experience (and possibly college credit) as the chief form of compensation.
Indentured servitude and slavery
Those who work under obligation for the purpose of fulfilling a debt, such as indentured servants, or as property of the person or entity they work for, such as slaves, do not receive pay for their services and are not considered employed. Some historians suggest that slavery is older than employment, but both arrangements have existed for all recorded history. Indentured servitude and slavery are not considered compatible with human rights or with democracy.
Self-employment
See also
Notes and references
General bibliography
Acocella, Nicola (2007). Social pacts, employment and growth: a reappraisal of Ezio Tarantellis thought. Heidelberg: Springer Verlag. ISBN 978-3-7908-1915-1.
Anderson, Elizabeth (2017). Private Government: How Employers Rule Our Lives (and Why We Don't Talk about It). Princeton, NJ: Princeton University Press. ISBN 978-0-691-17651-2.
Dubin, Robert (1958). The World of Work: Industrial Society and Human Relations. Englewood Cliffs, N.J: Prentice-Hall. p. 213. OCLC 964691.
Freeman, Richard B.; Goroff, Daniel L. (2009). Science and Engineering Careers in the United States: An Analysis of Markets and Employment. Chicago: University of Chicago Press. ISBN 978-0-226-26189-8.
Lee, Eddy (January 1996). "Globalization and Employment: Is Anxiety Justified?". International Labour Review. 135 (5): 485–98. Archived from the original on 2013-05-16. Retrieved 2017-08-27.
Markey, Raymond; Hodgkinson, Ann; Kowalczyk, Jo (2002). "Gender, part-time employment and employee participation in Australian workplaces". Employee Relations. 24 (2): 129–50. doi:10.1108/01425450210420884.
Stone, Raymond J. (2005). Human Resource Management (5th ed.). Milton, Qld: John Wiley. pp. 412–14. ISBN 978-0-470-80403-2.
Wood, Jack M. (2004). Organisational Behaviour: A Global Perspective (3rd ed.) </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Sunburn and explain in detail? [/INST] Sunburn is a form of radiation burn that affects living tissue, such as skin, that results from an overexposure to ultraviolet (UV) radiation, usually from the Sun. Common symptoms in humans and animals include: red or reddish skin that is hot to the touch or painful, general fatigue, and mild dizziness. Other symptoms include blistering, peeling skin, swelling, itching, and nausea. Excessive UV radiation is the leading cause of (primarily) non-malignant skin tumors, and in extreme cases can be life-threatening. Sunburn is an inflammatory response in the tissue triggered by direct DNA damage by UV radiation. When the cell's DNA is overly damaged by UV radiation, type I cell-death is triggered and the tissue is replaced. Sun protective measures including sunscreen and sun protective clothing are widely accepted to prevent sunburn and some types of skin cancer. Special populations, including children, are especially susceptible to sunburn and protective measures should be used to prevent damage.
Signs and symptoms
Typically, there is initial redness, followed by varying degrees of pain, proportional in severity to both the duration and intensity of exposure.
Other symptoms can include blistering, swelling (edema), itching (pruritus), peeling skin, rash, nausea, fever, chills, and fainting (syncope). Also, a small amount of heat is given off from the burn, caused by the concentration of blood in the healing process, giving a warm feeling to the affected area. Sunburns may be classified as superficial, or partial thickness burns. Blistering is a sign of second degree sunburn.
Variations
Minor sunburns typically cause nothing more than slight redness and tenderness to the affected areas. In more serious cases, blistering can occur. Extreme sunburns can be painful to the point of debilitation and may require hospital care.
Duration
Sunburn can occur in less than 15 minutes, and in seconds when exposed to non-shielded welding arcs or other sources of intense ultraviolet light. Nevertheless, the inflicted harm is often not immediately obvious.
After the exposure, skin may turn red in as little as 30 minutes but most often takes 2 to 6 hours. Pain is usually strongest 6 to 48 hours after exposure. The burn continues to develop for 1 to 3 days, occasionally followed by peeling skin in 3 to 8 days. Some peeling and itching may continue for several weeks.
Skin cancer
Ultraviolet radiation causes sunburns and increases the risk of three types of skin cancer: melanoma, basal-cell carcinoma and squamous-cell carcinoma. Of greatest concern is that the melanoma risk increases in a dose-dependent manner with the number of a person's lifetime cumulative episodes of sunburn. It has been estimated that over one-third of melanomas in the United States and Australia could be prevented with regular sunscreen use.
Causes
Sunburn is caused by UV radiation from the sun, but "sunburn" may result from artificial sources, such as tanning lamps, welding arcs, or ultraviolet germicidal irradiation. It is a reaction of the body to direct DNA damage from UVB light. This damage is mainly the formation of a thymine dimer. The damage is recognized by the body, which then triggers several defense mechanisms, including DNA repair to revert the damage, apoptosis and peeling to remove irreparably damaged skin cells, and increased melanin production to prevent future damage.
Melanin readily absorbs UV wavelength light, acting as a photoprotectant. By preventing UV photons from disrupting chemical bonds, melanin inhibits both the direct alteration of DNA and the generation of free radicals, thus indirect DNA damage. However, human melanocytes contain over 2,000 genomic sites that are highly sensitive to UV, and such sites can be up to 170-fold more sensitive to UV induction of cyclobutane pyrimidine dimers than the average site. These sensitive sites often occur at biologically significant locations near genes.
Sunburn causes an inflammatory process, including production of prostanoids and bradykinin. These chemical compounds increase sensitivity to heat by reducing the threshold of heat receptor (TRPV1) activation from 109 °F (43 °C) to 85 °F (29 °C). The pain may be caused by overproduction of a protein called CXCL5, which activates nerve fibers. Skin type determines the ease of sunburn. In general, people with lighter skin tone and limited capacity to develop a tan after UV radiation exposure have a greater risk of sunburn. The Fitzpatrick skin phototype classification describes the normal variations of skin responses to UV radiation. Persons with type I skin have the greatest capacity to sunburn and type VI have the least capacity to burn. However, all skin types can develop sunburn. Fitzpatrick skin phototypes:
Type 0: Albino
Type I: Pale white skin, burns easily, does not tan
Type II: White skin, burns easily, tans with difficulty
Type III: White skin, may burn but tans easily
Type IV: Light brown/olive skin, hardly burns, tans easily
Type V: Brown skin, usually does not burn, tans easily
Type VI: Black skin, very unlikely to burn, becomes darker with UV radiation exposure
Age also affects how skin reacts to sun. Children younger than six and adults older than sixty are more sensitive to sunlight. There are certain genetic conditions, for example xeroderma pigmentosum, that increase a person's susceptibility to sunburn and subsequent skin cancers. These conditions involve defects in DNA repair mechanisms, which in turn decreases the ability to repair DNA that has been damaged by UV radiation.
Medications
The risk of a sunburn can be increased by pharmaceutical products that sensitize users to UV radiation. Certain antibiotics, oral contraceptives, antidepressants, acne medications, and tranquillizers have this effect.
UV intensity
The UV Index indicates the risk of getting a sunburn at a given time and location. Contributing factors include:
The time of day. In most locations, the sun's rays are strongest between approximately 10 am and 4 pm daylight saving time.
Cloud cover. UV is partially blocked by clouds, but even on an overcast day a significant percentage of the sun's damaging UV radiation can pass through.
Proximity to reflective surfaces, such as water, sand, concrete, snow, and ice. All of these reflect the sun's rays and can cause sunburns.
The season of the year. The position of the sun in late spring and early summer can cause a more severe sunburn.
Altitude. At a higher altitude it is easier to become burnt, because there is less of the earth's atmosphere to block the sunlight. UV exposure increases about 4% for every 1000 ft (305 m) gain in elevation (see the sketch after this list).
Proximity to the equator (latitude). Between the polar and tropical regions, the closer to the equator, the more direct sunlight passes through the atmosphere over the course of a year. For example, the southern United States gets fifty percent more sunlight than the northern United States.
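To make the altitude figure concrete, here is a minimal sketch (my own illustration, not from the source) that compounds the roughly 4%-per-1000 ft rule of thumb quoted above; whether the increase compounds or is linear over large elevation gains is an assumption of this illustration.

```python
# Relative UV intensity versus elevation, assuming the ~4% increase per
# 1000 ft (305 m) quoted above compounds with altitude. Real intensity
# also depends on latitude, season, ozone, and cloud cover.

def relative_uv(elevation_ft: float, increase_per_1000ft: float = 0.04) -> float:
    """Return UV intensity relative to sea level (1.0) at a given elevation."""
    return (1 + increase_per_1000ft) ** (elevation_ft / 1000.0)

for ft in (0, 5000, 10000, 14000):
    print(f"{ft:>6} ft: {relative_uv(ft):.2f}x sea-level UV")
# Under this compounding assumption, roughly 1.48x at 10,000 ft.
```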
Because of variations in the intensity of UV radiation passing through the atmosphere, the risk of sunburn increases with proximity to the tropic latitudes, located between 23.5° north and south latitude. All else being equal (e.g., cloud cover, ozone layer, terrain, etc.), over the course of a full year, each location within the tropic or polar regions receives approximately the same amount of UV radiation. In the temperate zones between 23.5° and 66.5°, UV radiation varies substantially by latitude and season. The higher the latitude, the lower the intensity of the UV rays. Intensity in the northern hemisphere is greatest during the months of May, June, and July, and in the southern hemisphere during November, December, and January. On a minute-by-minute basis, the amount of UV radiation is dependent on the angle of the sun. This is easily determined by the height ratio of any object to the size of its shadow (if the height is measured vertically along the earth's gravitational field, the projected shadow is ideally measured on a flat, level surface; furthermore, for objects wider than skulls or poles, the height and length are best measured relative to the same occluding edge). The greatest risk is at solar noon, when shadows are at their minimum and the sun's radiation passes most directly through the atmosphere. Regardless of one's latitude (assuming no other variables), equal shadow lengths mean equal amounts of UV radiation.
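The shadow-length comparison above amounts to simple trigonometry: the sun's elevation angle is the arctangent of object height over shadow length. The sketch below is my own illustration, not from the source; the "shadow shorter than the object" cue (i.e. solar elevation above 45 degrees) is used here as the seek-shade threshold.

```python
import math

def solar_elevation_deg(object_height: float, shadow_length: float) -> float:
    """Estimate the sun's elevation angle from an object and its shadow."""
    return math.degrees(math.atan2(object_height, shadow_length))

height_m, shadow_m = 1.8, 1.2  # a person and their shadow, in metres
angle = solar_elevation_deg(height_m, shadow_m)
print(f"Solar elevation is roughly {angle:.0f} degrees")
if shadow_m < height_m:        # WHO: "Short shadow, seek shade!"
    print("Short shadow -- seek shade!")
```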
The skin and eyes are most sensitive to damage by UV at 265–275 nm wavelength, which is in the lower UVC band that is almost never encountered except from artificial sources like welding arcs. Most sunburn is caused by longer wavelengths, simply because those are more prevalent in sunlight at ground level.
Ozone depletion
In recent decades, the incidence and severity of sunburn have increased worldwide, partly because of chemical damage to the atmosphere's ozone layer. Between the 1970s and the 2000s, average stratospheric ozone decreased by approximately 4%, contributing an approximate 4% increase to the average UV intensity at the earth's surface. Ozone depletion and the seasonal "ozone hole" have led to much larger changes in some locations, especially in the southern hemisphere.
Tanning
Suntans, which naturally develop in some individuals as a protective mechanism against the sun, are viewed by most in the Western world as desirable. This has led to an overall increase in exposure to UV radiation from both the natural sun and tanning lamps. Suntans can provide a modest sun protection factor (SPF) of 3, meaning that tanned skin would tolerate up to three times as much UV exposure as pale skin. Sunburns associated with indoor tanning can be severe. The World Health Organization, American Academy of Dermatology, and the Skin Cancer Foundation recommend avoiding artificial UV sources such as tanning beds, and do not recommend suntans as a form of sun protection.
Diagnosis
Differential diagnosis
The differential diagnosis of sunburn includes other skin pathology induced by UV radiation including photoallergic reactions, phototoxic reactions to topical or systemic medications, and other dermatologic disorders that are aggravated by exposure to sunlight. Considerations for diagnosis include duration and intensity of UV exposure, use of topical or systemic medications, history of dermatologic disease, and nutritional status.
Phototoxic reactions: Non-immunological response to sunlight interacting with certain drugs and chemicals in the skin which resembles an exaggerated sunburn. Common drugs that may cause a phototoxic reaction include amiodarone, dacarbazine, fluoroquinolones, 5-fluorouracil, furosemide, nalidixic acid, phenothiazines, psoralens, retinoids, sulfonamides, sulfonylureas, tetracyclines, thiazides, and vinblastine.
Photoallergic reactions: Uncommon immunological response to sunlight interacting with certain drugs and chemicals in the skin. When excited by UVR, these drugs and chemicals form free radicals that react to form functional antigens and induce a Type IV hypersensitivity reaction. These drugs include 6-methylcoumarin, aminobenzoic acid and esters, chlorpromazine, promethazine, diclofenac, sulfonamides, and sulfonylureas. Unlike phototoxic reactions, which resemble exaggerated sunburns, photoallergic reactions can cause intense itching and can lead to thickening of the skin.
Phytophotodermatitis: UV radiation induces inflammation of the skin after contact with certain plants (including limes, celery, and meadow grass). Causes pain, redness, and blistering of the skin in the distribution of plant exposure.
Polymorphic light eruption: Recurrent abnormal reaction to UVR. It can present in various ways including pink-to-red bumps, blisters, plaques and urticaria.
Solar urticaria: UVR-induced wheals that occur within minutes of exposure and fade within hours.
Other skin diseases exacerbated by sunlight: Several dermatologic conditions can increase in severity with exposure to UVR. These include systemic lupus erythematosus (SLE), dermatomyositis, acne, atopic dermatitis, and rosacea.
Additionally, since sunburn is a type of radiation burn, it can initially hide a severe exposure to radioactivity resulting in acute radiation syndrome or other radiation-induced illnesses, especially if the exposure occurred under sunny conditions. For instance, the difference between the erythema caused by sunburn and other radiation burns is not immediately obvious. Symptoms common to heat illness and the prodromal stage of acute radiation syndrome, like nausea, vomiting, fever, weakness/fatigue, dizziness or seizure, can add to further diagnostic confusion.
Prevention
The most effective way to prevent sunburn is to reduce the amount of UV radiation reaching the skin. The World Health Organization, American Academy of Dermatology, and Skin Cancer Foundation recommend the following measures to prevent excessive UV exposure and skin cancer:
Limiting sun exposure between the hours of 10 am and 4 pm, when UV rays are the strongest
Seeking shade when UV rays are most intense
Wearing sun-protective clothing including a wide brim hat, sunglasses, and tightly-woven, loose-fitting clothing
Using sunscreen
Avoiding tanning beds and artificial UV exposure
UV intensity
The strength of sunlight is published in many locations as a UV Index. Sunlight is generally strongest when the sun is close to the highest point in the sky. Due to time zones and daylight saving time, this is not necessarily at 12 noon, but often one to two hours later. Seeking shade including using umbrellas and canopies can reduce the amount of UV exposure, but does not block all UV rays. The WHO recommends following the shadow rule: "Watch your shadow – Short shadow, seek shade!"
Grapes
A 2022 study suggested that eating about 60 grapes a day may protect against sunburn. The researchers found that people who ate roughly three-quarters of a punnet every day for two weeks showed greater resistance to skin damage from ultraviolet light. Polyphenols, found naturally in the fruit, are thought to be responsible for the effect.
Sunscreen
Commercial preparations are available that block UV light, known as sunscreens or sunblocks. They have a sun protection factor (SPF) rating, based on the sunblock's ability to suppress sunburn: the higher the SPF rating, the lower the amount of direct DNA damage. The stated protection factors are correct only if 2 mg of sunscreen is applied per square cm of exposed skin. This translates into about 28 mL (1 oz) to cover the whole body of an adult male, which is much more than many people use in practice. Sunscreens contain chemicals such as oxybenzone and dioxybenzone (organic sunscreens) or opaque materials such as zinc oxide or titanium dioxide (inorganic sunscreens), both of which mainly absorb UV radiation. Chemical and mineral sunscreens vary in the wavelengths of UV radiation blocked. Broad-spectrum sunscreens contain filters that protect against UVA radiation as well as UVB. Although UVA radiation does not primarily cause sunburn, it does contribute to skin aging and an increased risk of skin cancer.
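The dosing arithmetic above is easy to check. A minimal sketch, assuming the stated 2 mg/cm² dose, a sunscreen density of roughly 1 g/mL, and the ~1.4 m² of skin implied by the 28 mL figure; these assumptions are mine, introduced only for illustration.

```python
DOSE_MG_PER_CM2 = 2.0  # dose at which the labeled SPF is measured

def whole_body_dose_grams(skin_area_m2: float) -> float:
    """Sunscreen mass (g) needed to cover a given skin area at 2 mg/cm^2."""
    area_cm2 = skin_area_m2 * 10_000   # 1 m^2 = 10,000 cm^2
    return area_cm2 * DOSE_MG_PER_CM2 / 1000.0

def fraction_uvb_blocked(spf: float) -> float:
    """SPF n transmits ~1/n of erythemal UVB, so it blocks 1 - 1/n."""
    return 1.0 - 1.0 / spf

print(f"{whole_body_dose_grams(1.4):.0f} g, i.e. ~28 mL at ~1 g/mL")
print(f"SPF 30 blocks ~{fraction_uvb_blocked(30):.1%} of UVB")
```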
Sunscreen is effective and thus recommended for preventing melanoma and squamous cell carcinoma. There is little evidence that it is effective in preventing basal cell carcinoma. Typical use of sunscreen does not usually result in vitamin D deficiency, but extensive usage may.
Recommendations
Research has shown that the best sunscreen protection is achieved by application 15 to 30 minutes before exposure, followed by one reapplication 15 to 30 minutes after exposure begins. Further reapplication is necessary only after activities such as swimming, sweating, and rubbing. This varies based on the indications and protection shown on the label, from as little as 80 minutes in water to a few hours, depending on the product selected (a small scheduling sketch follows the list below). The American Academy of Dermatology recommends the following criteria in selecting a sunscreen:
Broad spectrum: protects against both UVA and UVB rays
SPF 30 or higher
Water resistant: sunscreens are classified as water resistant based on time, either 40 minutes, 80 minutes, or not water resistant
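As promised above, here is a small scheduling sketch of that reapplication guidance. The 20-minute lead time and first-reapplication offset are illustrative picks from the 15-30 minute windows described; in practice the label's water-resistance interval governs.

```python
from datetime import datetime, timedelta

def sunscreen_schedule(exposure_start: datetime,
                       water_resistance_min: int = 80) -> list[tuple[str, datetime]]:
    """Rough application timeline derived from the guidance above."""
    return [
        ("initial application", exposure_start - timedelta(minutes=20)),
        ("first reapplication", exposure_start + timedelta(minutes=20)),
        ("reapply after swimming/sweating, or at the label interval",
         exposure_start + timedelta(minutes=water_resistance_min)),
    ]

start = datetime(2024, 7, 1, 10, 0)  # hypothetical beach morning
for label, when in sunscreen_schedule(start):
    print(f"{when:%H:%M} - {label}")
```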
Eyes
The eyes are also sensitive to sun exposure at about the same UV wavelengths as skin; snow blindness is essentially sunburn of the cornea. Wrap-around sunglasses, or UV-blocking lenses for spectacle wearers, reduce the harmful radiation. UV light has been implicated in the development of age-related macular degeneration, pterygium and cataract. Concentrated clusters of melanin, commonly known as freckles, are often found within the iris.
The tender skin of the eyelids can also become sunburned, and can be especially irritating.
Lips
The lips can become chapped (cheilitis) by sun exposure. Sunscreen on the lips does not have a pleasant taste and might be removed by saliva. Some lip balms (e.g., ChapStick) have SPF ratings and contain sunscreens.
Feet
The skin of the feet is often tender and protected, so sudden prolonged exposure to UV radiation can be particularly painful and damaging to the top of the foot. Protective measures include sunscreen, socks, and swimwear or swimgear that covers the foot.
Diet
Dietary factors influence susceptibility to sunburn, recovery from sunburn, and risk of secondary complications from sunburn. Several dietary antioxidants, including essential vitamins, have been shown to have some effectiveness for protecting against sunburn and skin damage associated with ultraviolet radiation, in both human and animal studies. Supplementation with Vitamin C and Vitamin E was shown in one study to reduce the amount of sunburn after a controlled amount of UV exposure.
A review of scientific literature through 2007 found that beta-carotene (a vitamin A precursor) supplementation had a protective effect against sunburn, but that the effects were only evident in the long term, with studies of supplementation for periods less than 10 weeks in duration failing to show any effects. There is also evidence that common foods may have some protective ability against sunburn if taken for a period before the exposure.
Protecting children
Babies and children are particularly susceptible to UV damage, which increases their risk of both melanoma and non-melanoma skin cancers later in life. Children should not sunburn at any age, and protective measures can reduce their future risk of skin cancer.
Infants 0–6 months: Children under 6 months generally have skin too sensitive for sunscreen, and protective measures should focus on avoiding excessive UV exposure by using window mesh covers, wide-brim hats, loose clothing that covers skin, and reducing UV exposure between the hours of 10 am and 4 pm.
Infants 6–12 months: Sunscreen can safely be used on infants this age. It is recommended to apply a broad-spectrum, water-resistant SPF 30+ sunscreen to exposed areas as well as avoid excessive UV exposure by using wide-brim hats and protective clothing.
Toddlers and preschool-aged children: Apply a broad-spectrum, water-resistant SPF 30+ sunscreen to exposed areas, use wide-brim hats and sunglasses, avoid peak UV intensity hours of 10 am to 4 pm, and seek shade. Sun-protective clothing with an SPF rating can also provide additional protection.
Artificial UV exposure
The WHO recommends that artificial UV exposure including tanning beds should be avoided as no safe dose has been established. When one is exposed to any artificial source of occupational UV, special protective clothing (for example, welding helmets/shields) should be worn </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Swimmer's itch and explain in detail? [/INST] Swimmer's itch, also called cercarial dermatitis or schistosome dermatitis, is a short-term allergic contact dermatitis occurring in the skin of humans that have been infected by water-borne schistosomes, a type of flatworm. It is common in freshwater, brackish and marine habitats worldwide. The incidence of this condition may be increasing, although this may be attributed to better monitoring and reporting. Nevertheless, the condition is considered to be an emerging infectious disease. The main symptom is itchy papules (raised skin) that commonly occur within 2 days of infection. Initially, wheals develop quickly, then turn into maculae in about half an hour. Within 10–12 hours these turn into very itchy papules that reach their worst by the second or third day. The papules disappear in 1–2 weeks, but secondary effects from scratching can continue longer. The intense itching, which peaks after 48–72 hours, is associated with pain and swelling of the affected areas. People repeatedly exposed to cercariae develop heavier symptoms with faster onset. There are no permanent effects to people from this condition. Orally administered hydroxyzine, an antihistamine, is sometimes prescribed to treat swimmer's itch and similar dermal allergic reactions. In addition, bathing in oatmeal, baking soda, or Epsom salts can also provide relief of symptoms.
Cause
Swimmer's itch was known to exist as early as the 19th century, but it was not until 1928 that a biologist found that the dermatitis was caused by the larval stage of a group of flatworm parasites in the family Schistosomatidae. The genera most commonly associated with swimmer's itch in humans are Trichobilharzia and Gigantobilharzia. It can also be caused by schistosome parasites of non-avian vertebrates, such as Schistosomatium douthitti, which infects snails and rodents. Other taxa reported to cause the reaction include Bilharziella polonica and Schistosoma bovis. In marine environments, especially along the coasts, swimmer's itch can occur as well. These parasites use both freshwater snails and vertebrates as hosts in their parasitic life cycles as follows:
Once a schistosome egg is immersed in water, a short-lived, non-feeding, free-living stage known as the miracidium emerges. The miracidium uses cilia to follow chemical and physical cues thought to increase its chances of finding the first intermediate host in its life cycle, a freshwater snail.
After infecting a snail, it develops into a mother sporocyst, which in turn undergoes asexual reproduction, yielding large numbers of daughter sporocysts, which asexually produce another short-lived, free-living stage, the cercaria.
Cercariae use a tail-like appendage (often forked in genera causing swimmer's itch) to swim to the surface of the water, and use various physical and chemical cues in order to locate the next and final (definitive) host in the life cycle, a bird. These larvae can accidentally come into contact with the skin of a swimmer. The cercaria penetrates the skin and dies in the skin immediately. The cercariae cannot infect humans, but they cause an inflammatory immune reaction. This reaction causes initially mildly itchy spots on the skin. Within hours, these spots become raised papules which are intensely itchy. Each papule corresponds to the penetration site of a single parasite.
After locating a bird, the parasite penetrates through the skin (usually the feet), dropping the forked tail in the process. Inside the circulatory system, the immature worms (schistosomula) develop into mature male and female worms, mate and migrate through the host's circulatory system (or nervous system in case of T. regenti) to the final location (veins feeding the gastrointestinal tract) within the host body. There they lay eggs in the small veins in the intestinal mucosa, from which they make their way into the lumen of the gut, and are dumped into the water when the bird defecates. One European species, Trichobilharzia regenti, instead infects the bird host's nasal tissues, and larvae hatch from the eggs directly in the tissue during drinking/feeding of the infected birds.
Risk factors
Humans usually become infected after swimming in slow-moving rivers, lakes or ponds. Some laboratory evidence indicates snails shed cercariae most intensely in the morning and on sunny days, and exposure to water in these conditions may therefore increase risk. Duration of swimming is correlated with increased risk of infection in Europe and North America, and shallow inshore waters may harbour higher densities of cercariae than open waters offshore </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Periodic breathing and explain in detail? [/INST] </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Anaplastic large-cell lymphoma and explain in detail? [/INST] Anaplastic large cell lymphoma (ALCL) refers to a group of non-Hodgkin lymphomas in which aberrant T cells proliferate uncontrollably. Considered as a single entity, ALCL is the most common type of peripheral lymphoma in children and represents ~10% of all peripheral lymphomas in children. The incidence of ALCL is estimated to be 0.25 cases per 100,000 people in the United States of America. There are four distinct types of anaplastic large cell lymphomas that on microscopic examination share certain key histopathological features and tumor marker proteins. However, the four types have very different clinical presentations, gene abnormalities, prognoses, and/or treatments. ALCL is defined based on microscopic histopathological examination of involved tissues which shows the presence of at least some ALCL-defining pleomorphic cells. These "hallmark" cells have abnormal kidney-shaped or horseshoe-shaped nuclei, prominent Golgi, and express the CD30 tumor marker protein on their surface membranes. In 2016, the World Health Organization (WHO) separated ALCL into four types: ALK-positive ALCL (also termed ALK+ ALCL), ALK-negative ALCL (ALK− ALCL), primary cutaneous ALCL (pcALCL), and breast implant-associated ALCL (BIA-ALCL). WHO defined BIA-ALCL as an ALCL type provisionally, i.e. subject to redefinition if future studies should support such a change. ALK-positive and ALK-negative ALCL are aggressive systemic lymphomas. They are differentiated based on their expression of an abnormal ALK protein made by a somatic recombination in the ALK gene. ALK, i.e. anaplastic lymphoma kinase, is a protein product of the ALK gene located on chromosome 2. In ALK-positive ALCL, a portion of the ALK gene has merged with another site on the same or different chromosome to form a chimeric gene consisting of part of the new site and part of the ALK gene coding for ALK's activity. This chimeric gene overproduces a fusion protein with excessive ALK activity. ALK is a tyrosine kinase that activates PI3K/AKT/mTOR, Ras-activated ERKs, Janus kinase-activated STAT proteins, and other cell signaling pathways as well as the expression of various genes by epigenetic mechanisms. Activation of these signaling pathways and genes may stimulate cell growth, proliferation, survival, and/or other behaviors that promote malignancy. ALK-negative ALCL, while not involving ALK translocations, has, in a variable percentage of cases, various translocations, rearrangements, and mutations that may contribute to its development. pcALCL and BIA-ALCL are far less aggressive lymphomas that tend to be localized to one or a very few sites. pcALCL presents as single or, less commonly, multifocal skin papules or tumors that typically are limited to the dermis without infiltrating to the subcutaneous tissues or spreading to other sites.
Its neoplastic cells may contain some gene translocations including, in very rare cases, ones with the ALK gene that are similar to those in ALK-positive ALCL. BIA-ALCL is caused by and develops around a breast implant. It typically presents many years after surgical implantation as a deformation, textural change, and/or pain emanating from the area around the implanted breast. In most cases, the disease is localized to the involved breast. BIA-ALCL is associated with occasional mutations in one or two genes but has not been reported to be associated with products of gene translocations or rearrangements.
ALK-positive anaplastic large cell lymphoma
Signs and symptoms
ALK-positive ALCL occurs mostly but not exclusively in children and young adults and is slightly more common in males. Most individuals present with stage III or IV (i.e. advanced) disease. They show systemic symptoms, including B symptoms such as fever, night sweats, and weight loss, in 75% of cases; lymph node enlargement (90%), including of nodes in the mediastinum (36%); and lymphomatous lesions in the skin (26%), bone (14%), soft tissues (15%), lung (12%), and/or liver (8%). Tumor cells are found in the bone marrow in up to 40% of cases when immunohistochemical analysis is performed. Involvement of the central nervous system or a leukemia-like circulation of malignant cells in the blood occurs only very rarely. Most patients, including up to 90% of young children and adolescents, have circulating autoantibodies directed against the ALK fusion protein expressed by their tumor cells.
Diagnosis
ALK-positive ALCL is diagnosed by histological and immunological examinations of involved tissues, typically lymph nodes. These tissues have lymphoma-like infiltrates that contain variable numbers of ALCL "hallmark" cells, i.e. cells with kidney- or horseshoe-shaped nuclei that strongly express CD30 as detected by immunohistochemistry and an ALK fusion protein as detected by fluorescence in situ hybridization. These cells are scattered throughout the infiltrates. WHO classifies these infiltrates into 5 patterns: a Common Pattern consisting of large variably-shaped cells with large nuclei that typically contain multiple nucleoli (60–70% of cases); a Small Cell Pattern consisting of small to medium-sized neoplastic cells with clear cytoplasm and "hallmark" cells that are concentrated around small blood vessels (5–10% of cases); a Lymphohistiocytic Pattern consisting of small neoplastic cells along with abundant histiocytes (10% of cases); a Hodgkin-like Pattern in which the architecture resembles the nodular sclerosis pattern of Hodgkin lymphoma (3% of cases); and a Composite Pattern consisting of two or more of the just-described patterns (15% of cases). Detection of circulating autoantibodies against ALK supports the diagnosis. Individuals with low levels of these autoantibodies are at an increased risk of relapsing after treatment.
Gene and molecular abnormalities
In 80–85% of cases, the ALK detected in ALK-positive ALCL is an NPM1-ALK fusion protein. It is made by a fusion of the NPM1 gene, which makes nucleophosmin 1, located on the long or "q" arm of chromosome 5 at position 35 (notated as 5q35), with the ALK gene located on the short or "p" arm of chromosome 2 at position 23 (notated as 2p23) to form a chimeric gene notated as t(2;5)(p23;q35). In 13% of cases ALK fuses with the TPM3 gene, or in <1% of cases each with one of the following genes: TFG, ATIC, CLTC, TPM4, MSN, RNF213 (also termed ALO17), MYH9, or TRAF1. All of these fusion proteins are considered to act like NPM1-ALK in possessing high ALK activity that promotes the development and progression of ALK-positive ALCL by activating the cell signaling pathways cited in the Introduction. 15% of individuals with ALK-positive ALCL also have point mutations in the NOTCH1 gene. While most of these abnormalities are thought to be detrimental, not all are. For example, DUSP22 gene rearrangements are associated with favorable outcomes in ALK-positive (as well as ALK-negative) ALCL.
Treatment and prognosis
A recommended induction therapy for ALK-positive ALCL in individuals with lesions containing more than 10 percent CD30-positive cells consists of brentuximab vedotin (a drug consisting of an anti-CD30 antibody attached to a cell-killing agent, monomethyl auristatin E); two chemotherapy drugs, cyclophosphamide and the anthracycline doxorubicin; and the corticosteroid prednisone. This regimen gave a progression-free survival rate of 48.2 months in one study and overall survival rates of 70–90% at five years in other studies. For individuals over 60 years old and medically unfit individuals of any age, the standard CHOP regimen (cyclophosphamide, doxorubicin, prednisone, and the chemotherapeutic agent vincristine) is used. For younger, medically fit individuals, the chemotherapeutic agent etoposide is added to the CHOP regimen (CHOP plus etoposide is termed the "CHOEP" regimen). For patients with lesions that contain <10% CD30-positive neoplastic cells, brentuximab vedotin, which targets these cells, is not used. Rather, patients are treated with an anthracycline-based chemotherapy regimen. Patients over 60 years old or less medically fit are given cyclophosphamide, doxorubicin, vincristine, and prednisone, while patients 60 or younger are given CHOP plus etoposide or one of various other intensive chemotherapy regimens. The intensive chemotherapy regimens give 5-year overall survival rates of 70–93%. The role of radiation therapy for ALK-positive ALCL is unclear but has been used for patients who cannot tolerate or do not achieve complete responses to the drug regimens and for patients with organ-threatening or life-threatening tumorous infiltrates. The role of autologous or, less preferably, allogeneic hematopoietic stem cell transplantation (transplantation using the individual's own bone marrow, i.e. autologous, or a donor's, i.e. allogeneic) after achieving a complete remission following induction therapy is also unclear. Individuals with relapsed or refractory disease are treated with brentuximab vedotin if they did not receive the drug previously or had not received it in the previous 6 months. A small study reported overall response rates, complete response rates, and disease-free survival rates at 24 months of 63%, 45%, and 54%, respectively, using this regimen. Those who attain a complete response on this drug and can tolerate it are then treated with bone marrow transplantation. Finally, patients who fail or relapse on these treatments are given salvage therapy regimens that have been used for relapsed or refractory aggressive B cell malignancies such as GDP (i.e. gemcitabine, dexamethasone, and cisplatin), DHAP, and ICE. Drugs that inhibit ALK activity such as crizotinib and alectinib have been successful in establishing complete and partial remissions in a limited number of patients with advanced, refractory ALK-positive ALCL. These and other drugs are undergoing clinical trials to determine their safety and effectiveness in treating ALK-positive ALCL. (Also see clinical trials that use ALK inhibitors in ALK-positive ALCL and clinical trials that use ALK inhibitors in ALK-positive cancers.)
ALK-negative anaplastic large cell lymphoma
Signs and symptoms
Unlike ALK-positive ALCL, ALK-negative ALCL tends to occur in older adults (median age at diagnosis: 55–60 years) and presents primarily with lymph node involvement; only 20% of patients with ALK-negative ALCL present with extra-nodal disease in sites such as the skin, bone, and soft tissues. Nonetheless, most individuals (~67%) present with advanced stage III or IV disease in which neoplastic infiltrates occur in multiple lymph node locations and/or extra-nodal sites. ALK autoantibodies are not found in this type of ALCL. The prognosis of ALK-negative ALCL is often quoted as being worse than that for ALK-positive ALCL, but this may reflect the older age and advanced stage at which ALK-negative disease presents: studies comparing age- and grade-matched ALK-positive and ALK-negative ALCL patients show little difference in prognosis.
Diagnosis
The histology of ALK-negative ALCL, similar to ALK-positive ALCL, consists of "hallmark" cells that strongly express CD30. Unlike ALK-positive ALCL, however, ALK-negative ALCL does not fall into distinct morphological patterns. The histology of this disease may overlap with, and be difficult to distinguish from, other CD30-positive T-cell lymphomas or the nodular sclerosis form of Hodgkin lymphoma. Cases in which ALK-negative ALCL is not distinguishable from the latter lymphomas are best diagnosed as peripheral T-cell lymphoma, not otherwise specified (PTCL-NOS). The histology of ALK-negative ALCL may also overlap with tumors of non–T-cell lineage such as various carcinomas. The differential diagnosis of ambiguous cases may be helped by examining the tumor cells for the expression of certain marker proteins. For example, expression of CD56, MUC1 (also termed EMA, for epithelial membrane antigen), and clusterin, and strong uniform expression of CD30, support the diagnosis of ALK-negative ALCL over PTCL-NOS, while variable CD30 expression and extensive expression of T-cell receptor proteins favor PTCL-NOS over ALK-negative ALCL. Detection of certain gene abnormalities (see next section) may also help distinguish these diseases.
Gene and molecular abnormalities
ALK-negative ALCL tumor cells show products made by chimeric genes: DUSP22-IRF4 (many of which are fused at a particular site and termed DUSP22-FRA7H) in 30% of cases; TP63-TBL1XR1 in 8% of cases; and NFKB2-ROS1, NCOR2-ROS1, NFKB2-TYK2, or PABPC4-TYK2 in rare cases. They also show mutations in the JAK1 and/or STAT3 genes in 18% of cases, the MSC gene in 15% of cases, and the NOTCH1 gene in 15% of cases. About 24% of cases have a truncated ERBB4 gene. DUSP22 gene rearrangements have been associated with favorable outcomes in ALK-negative ALCL, while TP63 gene rearrangements are often associated with a poorer prognosis in various cancers. ALK-negative ALCL cells overexpress overactive STAT3 in 47% of cases and JAK1 in many cases. Many of these gene abnormalities appear to contribute to the development of ALK-negative ALCL.
Treatment and prognosis
The various treatments of ALK-negative ALCL generally follow those used for ALK-positive ALCL. However, ALK-negative individuals more often present at an advanced stage of disease that requires intensive therapy. The aggressive treatments outlined in the section on ALK-positive ALCL are used, with the exception that patients with more favorable clinical and tumor tissue indicators, as defined by having an International Prognostic Index score above 2 (particularly those under the age of 66), who obtain a complete remission after initial therapy are recommended for follow-up bone marrow transplantation. The International T-Cell Project reported on the treatment of 220 patients with ALK-negative ALCL; 15 received only supportive care, 168 were treated with anthracycline-containing chemotherapeutic regimens, 31 with anthracycline plus etoposide–containing chemotherapeutic regimens, 6 with other regimens, 16 with high-dose chemotherapy plus autologous stem cell bone marrow transplantation, and 4 with radiotherapy alone. Of the 205 patients who received more than supportive therapy, the overall and complete response rates were 77% and 63%, respectively. After a median follow-up of 52 months, the median progression-free and overall survival times were 41 months and 55 months, respectively; 3- and 5-year progression-free rates were 52% and 43%, respectively; and 3- and 5-year overall survival rates were 60% and 49%, respectively. Chemotherapy treatments containing both anthracycline and etoposide were associated with superior overall survival rates (3- and 5-year rates of 76% and 69%, respectively) compared to chemotherapy regimens containing an anthracycline but not etoposide (3- and 5-year overall survival rates of 56% and 44%, respectively). Progression-free survival rates with the latter two types of chemotherapy regimens were not appreciably different.
Primary cutaneous anaplastic large cell lymphoma
Signs and symptoms
pcALCL is the second most common lymphoma in the category of cutaneous T-cell lymphomas, which includes lymphomatoid papulosis, various borderline CD30-positive cutaneous T-cell lymphomas, and mycosis fungoides. The median age at diagnosis for pcALCL is 61 years. The disease has a male predominance and appears to be more common in Caucasian populations. Individuals with pcALCL typically present with reddish masses that initially appear in the skin or, much less frequently, the lymph nodes or various organs. These masses are nodules or tumors that are often ulcerated, greater than 2 cm in diameter, and localized to a single site. In 20% of cases, however, they occur in multiple sites. In about 10% of cases followed for many years, pcALCL presenting as skin lesions progresses to extracutaneous sites, mainly to regional draining lymph nodes. Over these same long periods, however, the disease's lesions partially regress in about 50% of cases but then relapse in about 40% of cases.
Diagnosis
pcALCL lesions exhibit large malignant T cells or null cells (i.e. cells lacking many T-cell receptor proteins) with "hallmark" cell features of anaplasia, pleomorphism, and kidney- and horseshoe-shaped nuclei. These lesions are often limited to the dermis but can extend into the surrounding subcutaneous tissue and/or epidermis. Rarely, the lesions, termed pyogenic variants, are rich in polymorphonuclear neutrophils. The neoplastic cells strongly express CD30 (100% of cases), CD2 (78%), CD4 (54–90% of cases), cytotoxicity marker proteins, and various other marker proteins that help distinguish it from other ALCLs, cutaneous T-cell lymphomas, and cancers. While these cells typically are ALK-negative, they do express ALK-containing fusion proteins in rare cases. The latter cases have a relatively benign course compared to ALK-positive ALCL and are treated as variants of pcALCL rather than ALK-positive ALCL.
Gene and molecular abnormalities
Similar to ALK-negative ALCL, pcALCL has chromosomal rearrangements in the DUSP22 (20–30% of cases) and TP63 (5% of cases) genes and a mutation in the MSC (i.e. musculin) gene (6% of cases). Between 1993 and 2019, the Dutch registry recorded 6 (i.e. 2%) of 319 pcALCL cases that expressed ALK. Three of these cases were due to the NPM1-ALK chimeric gene that predominates in ALK-positive ALCL, while the remaining three were due to TRAF1-ALK, ATIC-ALK, or TPM3-ALK chimeric genes. All 6 patients shared exactly the same breakpoint site, at exon 20 of the ALK gene.
Treatment and prognosis
Most pcALCL individuals present with isolated lesions that are effectively managed with radical surgical excision and/or radiation therapy; this approach is regarded as front-line therapy for localized disease. However, individuals with extensive disease and/or multiple tissue involvement respond poorly to front-line treatment. These, as well as individuals who relapse after front-line therapy, need further treatment. In one study of 65 patients with pcALCL, 95% treated with surgical excision achieved complete remission but 41% of them relapsed within 22 months, while 64% of patients treated with surgery plus radiotherapy developed recurrent disease within 55 months. In another study of 135 patients with this disease, 39% had relapses limited to the skin, 15% developed extracutaneous disease, and 9% ultimately died from pcALCL. Individuals with refractory, recurrent, and/or extensive disease have been treated more aggressively. In various studies, 10 of 13 such patients responded within 4 weeks to treatment with the chemotherapeutic agent methotrexate; 10 of 16 patients obtained complete remissions of their skin lesions when treated with the anti-CD30 monoclonal antibody brentuximab vedotin; and 48 of 53 patients treated with the aggressive chemotherapeutic CHOP regimen had complete responses. However, most of these patients developed recurrent disease within four months. As currently recommended, brentuximab vedotin is used initially to treat widespread systemic disease; a single chemotherapeutic drug, rather than aggressive combination chemotherapy with CHOP or similar regimens, is used to treat patients not responding to brentuximab vedotin; aggressive chemotherapy regimens are used to treat widespread nodal and/or visceral disease and disease that has failed other regimens; and, although there is little data supporting this, allogeneic bone marrow transplantation is employed for patients with multiple relapses that have progressed on systemic therapy. Recurrences, regardless of treatment type, are common. pcALCL has an estimated five-year failure-free survival rate of 55 percent. Nonetheless, the disease has a 10-year overall survival rate of 90%. Typically, leg involvement portends a worse prognosis: it has a five-year disease-specific survival rate of 76 percent compared with 96 percent for disease in other locations. Involvement of local nodes alone in patients with skin lesions does not seem to portend an adverse prognosis.
Breast implant-associated anaplastic large cell lymphoma
Signs and symptoms
BIA-ALCL is a complication of silicone-filled and saline-filled breast implants that develops 9 to 10 years (median times) after surgical implantation. First described in 1997, it is estimated that the prevalence of BIA-ALCL in individuals with implants that have a textured surface is 1 in 30,000, while the risk is 70-fold lower in individuals who have a smooth-surface implant or no implant at all (i.e. in patients who have another type of ALCL). These relations strongly suggest that BIA-ALCL develops primarily if not exclusively in patients with textured implants. In all cases, however, many researchers suspect that BIA-ALCL is an under-recognized, misdiagnosed, and under-reported complication of breast implants. Two-thirds of individuals with BIA-ALCL present with swelling, discomfort, and/or (rarely) pain in the affected breast. This is due to the development of a tumor mass and/or swelling caused by an effusion (i.e. fluid) that accumulates between the breast implant surface and the fibrous capsule that has grown around it. The effusion fluid typically contains white blood cells, tumor cells, and high levels of protein. Besides or in addition to breast swelling, patients present with a breast mass in 30% of cases, enlarged lymph nodes in the armpit or around the chest clavicle bones in 20% of cases, and/or, in a small percentage of cases, lesions in more distant tissues. Rarely, patients have presented with a skin rash or itching on or around the involved breast. Using the Ann Arbor staging system, 83% of patients present with Stage 1 localized disease, while the remaining 10%, 0%, and 7% of patients present with what is normally regarded as more aggressive Stage II, III, or IV disease, respectively. Thus, about 17% of individuals present with a more aggressive disease that has spread from its original breast implant site to nearby lymph nodes, to areas outside of the capsule, or to more distal tissues.
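To unpack the risk figures above: a 70-fold lower risk than 1 in 30,000 implies a prevalence of roughly 1 in 2.1 million. A one-line check, as illustrative arithmetic only:

```python
# Back-of-envelope check of the prevalence figures quoted above.
textured_prevalence = 1 / 30_000
smooth_prevalence = textured_prevalence / 70   # "70-fold lower"
print(f"~1 in {round(1 / smooth_prevalence):,}")  # ~1 in 2,100,000
```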
Diagnosis
In most individuals with BIA-ALCL, the affected breast has a thickened capsule around the implant and effusion fluid between the capsule and implant. Neoplastic cells are located in, and typically limited to, the capsule and effusion. Histological examination of the capsules shows large anaplastic cells, but cells with all the features of ALCL "hallmark" cells are often difficult to detect. In addition to these neoplastic cells, the capsule lesions contain, sometimes in a large excess that makes diagnosis difficult, a variety of non-malignant cells such as small lymphocytes, histiocytes, and granulocytes (the granulocytes are mostly eosinophils). The histology of palpable masses exhibits a different pathological picture: the tumor masses have multinodular areas consisting of necrosis or fibrous tissue interspaced with areas consisting of large neoplastic cells that have abundant cytoplasm and abnormally shaped nuclei within a fibrotic and chronic inflammatory cell background. Again, typical ALCL-defining "hallmark" cells may be difficult to find in these masses. The effusions show abundant, uniform-appearing, non-cohesive large cells with irregularly shaped nuclei, prominent nucleoli, and abundant cytoplasm. The histology and pathological features of diseased lymph nodes and tissues outside of the breast implant are indistinguishable from those seen in ALK-negative ALCL. The neoplastic cells in the capsules, effusions, and tissues strongly and uniformly express CD30, CD4 (75–84% of cases), EMA (48–90%), CD43 (86–95%), and CD45 (44–74%), and in a far smaller percentage of cases various other marker proteins. These cells do not express ALK and often lack the characteristic surface marker proteins of T cells. Identification of the status of these markers helps diagnose the disease.
Gene and molecular abnormalities
No chromosome translocations, chimeric genes, or fusion proteins have been described in BIA-ALCL, although the neoplastic cells in the disease have been described as having gene copy number variations involving gains of gene copies on the p arm of chromosome 19 and losses of gene copies on the p arms of chromosomes 10 and 1. The neoplastic cells in BIA-ALCL show mutations of the STAT3 gene in 64% of cases, and mutations in the JAK1, JAK3, DNMT3A, and TP53 genes have also been reported. It has often been suggested that the development of BIA-ALCL may be, at least in part, a T-cell-induced, inflammation-driven cancer response to the implant.
Treatment and prognosis
The treatment regimens for BIA-ALCL recommended by 1) a multidisciplinary expert review panel, 2) the National Comprehensive Cancer Network, and 3) the French National Cancer Institute (Agence Nationale de Sécurité du Médicament [ANSM]) are very similar, commonly used, and summarized here. BIA-ALCL staging is done to distinguish patients with BIA-ALCL confined to the implant, capsule, and effusion from those with more disseminated disease. The staging preferably employs the TNM system designed to stage solid tumors. This is based on historical data suggesting that BIA-ALCL progresses locally like solid tumors rather than liquid tumors such as other lymphomas </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Whooping cough and explain in detail? [/INST] Whooping cough, also known as pertussis or the 100-day cough, is a highly contagious bacterial disease. Initial symptoms are usually similar to those of the common cold with a runny nose, fever, and mild cough, but these are followed by two or three months of severe coughing fits. Following a fit of coughing, a high-pitched whoop sound or gasp may occur as the person breathes in. The violent coughing may last for 10 or more weeks, hence the phrase "100-day cough". A person may cough so hard that they vomit, break ribs, or become very tired from the effort. Children less than one year old may have little or no cough and instead have periods where they cannot breathe. The time between infection and the onset of symptoms is usually seven to ten days. Disease may occur in those who have been vaccinated, but symptoms are typically milder. Pertussis is caused by the bacterium Bordetella pertussis. It is spread easily through the coughs and sneezes of an infected person. People are infectious from the start of symptoms until about three weeks into the coughing fits. Those treated with antibiotics are no longer infectious after five days. Diagnosis is by collecting a sample from the back of the nose and throat. This sample can then be tested by either culture or by polymerase chain reaction. Prevention is mainly by vaccination with the pertussis vaccine. Initial immunization is recommended between six and eight weeks of age, with four doses to be given in the first two years of life. Protection from pertussis decreases over time, so additional doses of vaccine are often recommended for older children and adults. Vaccination during pregnancy is highly effective at protecting the infant from pertussis during their vulnerable early months of life, and is recommended in many countries. Antibiotics may be used to prevent the disease in those who have been exposed and are at risk of severe disease. In those with the disease, antibiotics are useful if started within three weeks of the initial symptoms, but otherwise have little effect in most people. In pregnant women and children less than one year old, antibiotics are recommended within six weeks of symptom onset. Antibiotics used include erythromycin, azithromycin, clarithromycin, or trimethoprim/sulfamethoxazole. Evidence to support interventions for the cough, other than antibiotics, is poor. About 50% of infected children less than a year old require hospitalization and nearly 0.5% (1 in 200) die. An estimated 16.3 million people worldwide were infected in 2015. Most cases occur in the developing world, and people of all ages may be affected. In 2015, pertussis resulted in 58,700 deaths – down from 138,000 deaths in 1990. Outbreaks of the disease were first described in the 16th century. The bacterium that causes the infection was discovered in 1906. The pertussis vaccine became available in the 1940s.
Signs and symptoms
The classic symptoms of pertussis are a paroxysmal cough, inspiratory whoop, and fainting, or vomiting after coughing. The cough from pertussis has been documented to cause subconjunctival hemorrhages, rib fractures, urinary incontinence, hernias, and vertebral artery dissection. Violent coughing can cause the pleura to rupture, leading to a pneumothorax. Vomiting after a coughing spell or an inspiratory whooping sound on coughing almost doubles the likelihood that the illness is pertussis. The absence of a paroxysmal cough or posttussive emesis, though, makes it almost half as likely. The illness usually starts with mild respiratory symptoms, including mild coughing, sneezing, or a runny nose (known as the catarrhal stage). After one or two weeks, the coughing classically develops into uncontrollable fits, sometimes followed by a high-pitched "whoop" sound, as the person tries to inhale. About 50% of children and adults "whoop" at some point in diagnosed pertussis cases during the paroxysmal stage.
This stage usually lasts two to eight weeks, or sometimes longer. A gradual transition then occurs to the convalescent stage, which usually lasts one to four weeks. This stage is marked by a decrease in paroxysms of coughing, although paroxysms may occur with subsequent respiratory infection for many months after the onset of pertussis. Symptoms of pertussis can be variable, especially between immunized and non-immunized people. Those who are immunized can present with a milder infection; they may have the paroxysmal cough for only a couple of weeks, and it may lack the "whooping" characteristic. Although immunized people have a milder form of the infection, they can spread the disease to others who are not immune.
Incubation period
The time between exposure and the development of symptoms is on average 7–14 days (range 6–20 days), rarely as long as 42 days.
Cause
Pertussis is caused by the bacterium Bordetella pertussis. It is an airborne disease (through droplets) that spreads easily through the coughs and sneezes of an infected person.
Spread from animals
Uncertainty about B. pertussis and whooping cough as a zoonotic disease has existed since around 1910, but in the 1930s it was found that the bacteria lost their virulence when repeatedly spread on agar media. This explained the difficulty of reproducing results from different studies, as the pre-inoculation handling of the bacteria was not standardized among scientists. Today it is established that at least some primate species are highly susceptible to B. pertussis and develop clinical whooping cough in high incidence when exposed to low inoculation doses. The bacteria may be present in wild animal populations, but this is not confirmed by laboratory diagnosis, although whooping cough is known among wild gorillas. Several zoos also have a long-standing custom of vaccinating their primates against whooping cough.
Mechanism
After the bacteria are inhaled, they initially adhere to the ciliated epithelium in the nasopharynx. Surface proteins of B. pertussis, including filamentous hemagglutinin and pertactin, mediate attachment to the epithelium. The bacteria then multiply. In infants, who experience more severe disease, the bacteria spread down to the lungs. The bacteria secrete a number of toxins. Tracheal cytotoxin (TCT), a fragment of peptidoglycan, kills ciliated epithelial cells and thereby inhibits the mucociliary escalator by which mucus and debris are removed. TCT may contribute to the cough characteristic of pertussis. The cough may also be caused by a yet-to-be-identified "cough toxin". Pertussis toxin causes lymphocytosis by an unknown mechanism. The elevated number of white blood cells leads to pulmonary hypertension, a major cause of death by pertussis. In infants who develop encephalopathy, cerebral hemorrhage and cortical atrophy occur, likely due to hypoxia.
Diagnosis
Based on symptoms
A physician's overall impression is most effective in initially making the diagnosis. Single factors are much less useful. In adults with a cough of less than 8 weeks, vomiting after coughing or a "whoop" is supportive. If there are no bouts of coughing or there is a fever, the diagnosis is unlikely. In children who have a cough of less than 4 weeks, vomiting after coughing is somewhat supportive but not definitive.
Lab tests
Methods used in laboratory diagnosis include culturing of nasopharyngeal swabs on a nutrient medium (Bordet–Gengou medium), polymerase chain reaction (PCR), direct fluorescent antibody (DFA), and serological methods (e.g. complement fixation test). The bacteria can be recovered from the person only during the first three weeks of illness, rendering culturing and DFA useless after this period, although PCR may have some limited usefulness for an additional three weeks.
Serology may be used for adults and adolescents who have already been infected for several weeks to determine whether antibody against pertussis toxin or another virulence factor of B. pertussis is present at high levels in the blood of the person.
Differential diagnosis
A similar, milder disease is caused by B. parapertussis.
Prevention
The primary method of prevention for pertussis is vaccination. Evidence is insufficient to determine the effectiveness of antibiotics in those who have been exposed but are without symptoms. Preventive antibiotics, however, are still frequently used in those who have been exposed and are at high risk of severe disease (such as infants).
Vaccine
Pertussis vaccines are effective at preventing illness and are recommended for routine use by the World Health Organization and the United States Centers for Disease Control and Prevention. The vaccine saved an estimated half a million lives in 2002.
The multicomponent acellular pertussis vaccine is 71–85% effective, with greater effectiveness against more severe strains. However, despite widespread vaccination, pertussis has persisted in vaccinated populations and is today "one of the most common vaccine-preventable diseases in Western countries". The 21st-century resurgence in pertussis infections is attributed to a combination of waning immunity and bacterial mutations that elude vaccines.
Immunization does not confer lifelong immunity; a 2011 CDC study indicated that protection may only last three to six years. This covers childhood, which is the time of greatest exposure and greatest risk of death from pertussis.
An effect of widespread immunization on society has been the shift of reported infections from children aged 1–9 years to infants, adolescents, and adults, with adolescents and adults acting as reservoirs for B. pertussis and infecting infants who have had fewer than three doses of vaccine.
Infection induces incomplete natural immunity that wanes over time. A 2005 study said estimates of the duration of infection-acquired immunity range from 7 to 20 years, and that the differing results could reflect differences in levels of circulating B. pertussis, surveillance systems, and case definitions used. The study said protective immunity after vaccination wanes after 4–12 years. One study suggested that the availability of vaccine exemptions increases the number of pertussis cases.
Some studies have suggested that while acellular pertussis vaccines are effective at preventing the disease, they have a limited impact on infection and transmission, meaning that vaccinated people could spread pertussis even though they may have only mild symptoms or none at all. Pertussis infection in these persons may be asymptomatic, or present as illness ranging from a mild cough to classic pertussis with persistent cough (i.e., lasting more than 7 days). Even though the disease may be milder in older persons, those who are infected may transmit the disease to other susceptible persons, including unimmunized or incompletely immunized infants. Older persons are often found to have the first case in a household with multiple pertussis cases, and are often the source of infection for children.
Treatment
The antibiotics erythromycin, clarithromycin, or azithromycin are typically the recommended treatment. Newer macrolides are frequently recommended due to lower rates of side effects. Trimethoprim-sulfamethoxazole (TMP/SMX) may be used in those with allergies to first-line agents or in infants who have a risk of pyloric stenosis from macrolides.
A reasonable guideline is to treat people aged over 1 year within 3 weeks of cough onset, and infants under 1 year of age and pregnant women within 6 weeks of cough onset. If the person is diagnosed late, antibiotics will not alter the course of the illness, and even without antibiotics, they should no longer be spreading pertussis. When used early, antibiotics decrease the duration of infectiousness and thus prevent spread. Short-term antibiotics (azithromycin for 3–5 days) are as effective as long-term treatment (erythromycin for 10–14 days) in eliminating B. pertussis, with fewer and less severe side effects.
People with pertussis are most infectious during the first two weeks following the onset of symptoms.
Effective treatments for the cough associated with this condition have not been developed. The use of over-the-counter cough medications is discouraged and has not been found helpful.
Prognosis
While most healthy older children and adults fully recover, infection in newborns is particularly severe. Pertussis is fatal in an estimated 0.5% of US infants under one year of age. Infants in their first year are also more likely to develop complications, such as apnea (31%), pneumonia (12%), seizures (0.6%) and encephalopathy (0.15%). This may be due to the ability of the bacterium to suppress the immune system.
Epidemiology
Worldwide, whooping cough affects around 16 million people yearly. One estimate for 2013 stated it resulted in about 61,000 deaths – down from 138,000 deaths in 1990. Another estimate put child deaths from the disease worldwide at 195,000 yearly. This is despite generally high coverage with the DTP and DTaP vaccines. Pertussis is one of the leading causes of vaccine-preventable deaths worldwide. About 90% of all cases occur in developing countries.
Before vaccines, an average of 178,171 cases was reported in the U.S., with peaks reported every two to five years; more than 93% of reported cases occurred in children under 10 years of age. The actual incidence was likely much higher. After vaccinations were introduced in the 1940s, pertussis incidence fell dramatically to approximately 1,000 cases by 1976. Incidence rates have increased since 1980. In 2015, 20,762 people were reported to have been infected in the United States.
Pertussis is the only vaccine-preventable disease that is associated with increasing deaths in the U.S. The number of deaths increased from four in 1996 to 17 in 2001, almost all of which were infants under one year. In Canada, the number of pertussis infections has varied between 2,000 and 10,000 reported cases each year over the last ten years, and it is the most common vaccine-preventable illness in Toronto.
In 2009 Australia reported an average of 10,000 cases a year, and the number of cases had increased. In the U.S., pertussis in adults has increased significantly since about 2004.
In 2017, India reported 23,766 pertussis cases, one of the highest national totals of the year. Germany reported 16,183 cases, while Australia and China reported 12,114 and 10,390 cases, respectively.
US outbreaks
In 2010, 10 babies in California died and health authorities declared an epidemic, with 5,978 confirmed, probable and suspected cases. They found that doctors had failed to correctly diagnose the babies' condition during several visits. Statistical analysis identified significant overlap between communities with clusters of nonmedical child exemptions and communities with cases. The number of exemptions varied widely among communities but tended to be highly clustered. In some schools, more than 75% of parents filed for vaccination exemptions. The data suggest vaccine refusal based on nonmedical reasons and personal belief exacerbated the outbreak. Other factors included reduced duration of immunity following the acellular vaccine and the fact that most vaccinated adults and older children had not received a booster shot.
In April and May 2012, pertussis was declared to be at epidemic levels in Washington, with 3,308 cases. In December 2012, Vermont declared an epidemic of 522 cases. Wisconsin had the highest incidence rate, with 3,877 cases, although it did not make an official epidemic declaration.
History
Discovery
B. pertussis was discovered in 1906 by Jules Bordet and Octave Gengou, who also developed the first serology and vaccine. Efforts to develop an inactivated whole-cell vaccine began soon after B. pertussis was cultured that year. In the 1920s, Louis W. Sauer developed a weak vaccine for whooping cough at Evanston Hospital (Evanston, IL). In 1925, Danish physician Thorvald Madsen was the first to test a whole-cell vaccine on a wide scale. Madsen used the vaccine to control outbreaks in the Faroe Islands in the North Atlantic.
Vaccine
In 1932, an outbreak of whooping cough hit Atlanta, Georgia, prompting pediatrician Leila Denmark to begin her study of the disease. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Spondylolysis and explain in detail? [/INST] Spondylolysis is a defect or stress fracture in the pars interarticularis of the vertebral arch. The vast majority of cases occur in the lower lumbar vertebrae (L5), but spondylolysis may also occur in the cervical vertebrae.
Signs and symptoms
In the majority of cases, spondylolysis presents asymptomatically, which can make diagnosis both difficult and incidental. When a patient does present with symptoms, there are general signs and symptoms a clinician will look for:
Clinical signs:
Pain on completion of the stork test (placed in hyperextension and rotation)
Excessive lordotic posture
Unilateral tenderness on palpation
Visible on diagnostic imaging (Scottie dog fracture)
Symptoms:
Unilateral low back pain
Pain that radiates into the buttocks or legs
Onset of pain can be acute or gradual
Pain that can restrict daily activities
Pain that worsens after strenuous activity
Pain aggravated with lumbar hyperextension
Difficulty with spinal movement
Cause
The cause of spondylolysis remains unknown; however, many factors are thought to contribute to its development. The condition is present in up to 6% of the population, the majority of whom are asymptomatic. Research supports that there are hereditary and acquired risk factors that can make one more susceptible to the defect. The disorder is generally more prevalent in males compared to females, and tends to occur earlier in males due to their involvement in more strenuous activities at a younger age. In a young athlete, the spine is still growing, which means there are many ossification centers, leaving points of weakness in the spine. This leaves young athletes at increased risk, particularly when involved in repetitive hyperextension and rotation across the lumbar spine. Spondylolysis is a common cause of low back pain in preadolescent and adolescent athletes, accounting for about 50% of all low back pain in this group. It is believed that both repetitive trauma and an inherent genetic weakness can make an individual more susceptible to spondylolysis.
Risk factors
Sports involving repetitive or forceful hyperextension of the spine, especially when combined with rotation, are the main mechanism of injury for spondylolysis. The stress fracture of the pars interarticularis occurs on the side opposite to the activity. For instance, for a right-handed player, the fracture occurs on the left side of the vertebra.
Spondylolysis has a higher occurrence in the following activities:
Baseball
Military service
Tennis
Diving
Cheerleading
Gymnastics
Gridiron Football
Association Football
Wrestling
Weightlifting
Roller Derby
Cricket
Pole Vault
Rugby
Volleyball
Gym
Ultimate Frisbee (especially during impact from laying out)
Ballet
Muay Thai
Although this condition can be caused by repetitive trauma to the lumbar spine in strenuous sports, other risk factors can also predispose individuals to spondylolysis. Males are more commonly affected by spondylolysis than females. One study of youth athletes found that the mean age of individuals with spondylolisthesis was 20 years. Spondylolysis also runs in families, suggesting a hereditary component such as a predisposition to weaker vertebrae.
Pathophysiology
Spondylolysis is a bony defect or fracture within the pars interarticularis of the vertebral arch in the spinal column. The vast majority of cases occur in the lumbar vertebrae; however, it can also be seen in cervical vertebrae. A lumbar vertebra consists of a body, pedicle, lamina, pars interarticularis, transverse process, spinous process and superior and inferior articular facets, which form joints that link the vertebrae together. When examining the vertebra, the pars interarticularis is the bony segment between the superior and inferior articular facet joints, located anterior to the lamina and posterior to the pedicle. Separation of the pars interarticularis occurs when spondylolysis is present in the spinal column.
Spondylolysis is typically caused by a stress fracture of the bone and is especially common in adolescents who over-train in activities. The pars interarticularis is vulnerable to fracture during spinal hyperextension, especially when combined with rotation, or when experiencing a force during a landing. This stress fracture most commonly occurs where the concave lumbar spine transitions to the convex sacrum (L5-S1). A significant number of individuals with spondylolysis, between 50% and 81%, will develop spondylolisthesis.
Diagnosis
There are several imaging techniques used to diagnose spondylolysis. Common imaging techniques include X-ray, MRI, Bone Scintigraphy (Bone Scan), and Computed Tomography (CT Scan).
X-Ray
X-rays (electromagnetic radiation) are projected through the body to produce an image of its internal structures. The radiation is more attenuated (absorbed) by the denser tissues of the body (i.e. bone) than by the softer tissues (muscles, organs, etc.), creating a picture composed of shades of grey ranging from white to black. A vertebra with a fracture or defect of the pars interarticularis will have a dark mark through this region of bone. Since this is difficult to see on the AP (anteroposterior) x-ray view, an oblique x-ray of the lumbar spine can usually identify the spondylolysis. If results are inconclusive, a CT scan can produce 3-dimensional images to show the defect more clearly, although the exam increases the patient's radiation dose by at least an order of magnitude compared to plain x-rays.
Bone scintigraphy
Also known as a bone scan, bone scintigraphy involves the injection of a small amount of radioactive tracer into the bloodstream. This tracer decays and emits radioactive energy that can be detected by a special camera. The camera produces a black and white image in which areas shown as dark black indicate bone damage of some kind. If there is a black spot in the lumbar vertebrae (e.g. L5), this indicates damage and potentially spondylolysis. If this test is positive, a CT scan is usually ordered to confirm spondylolysis.
Computed tomography
Commonly known as a CT scan or CAT scan, this form of imaging is very similar to x-ray technology but produces many more images than an x-ray does. The multiple images produce cross-sectional views not possible with an x-ray, allowing a physician or radiologist to examine the images from many more angles. For this reason the CT scan is much more accurate in detecting spondylolysis than an x-ray. Bone scintigraphy combined with CT scan is considered the gold standard, meaning it is the best available method for detecting spondylolysis.
MRI
MRI is a newer technique used to diagnose spondylolysis and is favorable for a few reasons: it is much more accurate than the x-ray and does not use radiation. MRI uses powerful magnets and radio frequencies to produce very detailed images of many different densities of tissue, including bone and soft tissues.
Treatment
Conservative management
Treatment for spondylolysis ranges from bracing, activity restriction, extension exercises, and flexion exercises to deep abdominal strengthening, administered through physical therapy. The duration of physical therapy a patient receives varies with the severity of spondylolysis, but typically ranges from three to six months. The goal of physical therapy is to minimize movement at the unstable defect of the pars interarticularis. Once patients complete physical therapy and display no symptoms or inflammation in the lower back, they are cleared to continue with daily or athletic activities. However, a patient may need to maintain a variety of rehabilitation techniques after physical therapy to prevent the recurrence of spondylolysis.
Deep abdominal co-contraction exercises
The aim of deep abdominal co-contraction exercises is to train the muscles surrounding the lumbar spine that provide stability for the spine. Spondylolysis results in spinal instability and disrupts patterns of co-recruitment between muscle synergies. Specifically, the local muscles that attach directly to the spine are affected. The lumbar multifidus and transversus abdominis play a direct role in stabilizing the lumbar spine. In individuals with spondylolysis, these local muscles are vulnerable to dysfunction, which results in abnormal spinal stability, causing chronic low back pain. To compensate, the large, torque-producing global muscles are used to stabilize the spine.
In one study, patients were taught to train the co-contraction of deep abdominal muscles and lumbar multifidus in static postures, functional tasks and aerobic activities. This technique was shown to reduce pain and functional disability when compared to other conservative treatments. These results also had a long-term effect in reducing levels of pain and functional disability, because the motor programming eventually became automatic and conscious control was no longer needed to contract the deep abdominal muscles during activities.
Activity restriction
Activity restriction is advised for a short period once the patient becomes symptomatic, followed by a guided physical therapy program. Once spondylolysis has been diagnosed, treatment often consists of a short rest period of two to three days, followed by a physical therapy program. There should be restriction of heavy lifting, excessive bending and twisting, and avoidance of any work, recreational activities or participation in sport that stresses the lumbar spine. Activity restriction can help eliminate and control a patient's symptoms so they are able to resume their normal activities. It is most commonly used in conjunction with other rehabilitation techniques, including bracing.
Bracing
Acute spondylolysis is most commonly treated through the use of an antilordotic brace (Boston brace) to control and limit spinal movement and reduce stress on the injured spinal segment. Bracing immobilizes the spine in a flexed position for a short period to allow healing of the bony defect in the pars interarticularis. An antilordotic brace commonly utilizes a plastic component that is contoured to closely fit the body. Antilordotic bracing reduces the athlete's symptoms by decreasing the amount of stress on the low back, and allows a prompt return to sport for athletes. Typically, bracing is utilized for 6–12 weeks.
For a brace to be effective, it must be worn every day for the required amount of time. Patients are given a brace schedule, determined by their physical therapist, that explains the amount of time the brace must be worn daily. A brace's effectiveness increases with adherence to the bracing schedule; patients who do not follow their bracing schedule are more likely to have their symptoms progress. Research has demonstrated that when braces are used as prescribed with full compliance, they are successful at preventing spondylolysis progression.
Surgery
Most patients with spondylolysis do not require surgery, but if symptoms are not relieved with non-surgical treatments, or if the condition progresses to high-grade spondylolisthesis, then surgery may be required. There are two main types of surgery for this condition:
Spinal fusion: This procedure is recommended when a set of vertebrae becomes loose or unstable. The surgeon joins two or more vertebrae together through the use of metal rods, screws, and bone grafts. The bone grafts complete their fusion in 4–8 months following the surgery, securing the spine in the correct position. The procedure is also used to treat spinal instability, fractures in the lumbar spine, and severe degenerative disc disease. The process is relatively non-invasive, is performed through small incisions, and has a high success rate.
Laminectomy: Often performed when spinal stenosis occurs in conjunction with spondylolysis. The procedure surgically removes part or all of the lamina from the bony ring of the vertebra to reduce the pressure on the spinal cord. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Poverty and explain in detail? [/INST] Poverty is the state of having few material possessions or little income. Poverty can have diverse social, economic, and political causes and effects. When evaluating poverty in statistics or economics there are two main measures: absolute poverty compares income against the amount needed to meet basic personal needs, such as food, clothing, and shelter; relative poverty measures when a person cannot meet a minimum level of living standards, compared to others in the same time and place. The definition of relative poverty varies from one country to another, or from one society to another.
Statistically, as of 2019, most of the world's population live in poverty: in PPP dollars, 85% of people live on less than $30 per day, two-thirds live on less than $10 per day, and 10% live on less than $1.90 per day (extreme poverty). According to the World Bank Group in 2020, more than 40% of the poor live in conflict-affected countries. Even when countries experience economic development, the poorest citizens of middle-income countries frequently do not gain an adequate share of their countries' increased wealth to leave poverty. Governments and non-governmental organizations have experimented with a number of different policies and programs for poverty alleviation, such as electrification in rural areas or housing first policies in urban areas. The international policy frameworks for poverty alleviation, established by the United Nations in 2015, are summarized in Sustainable Development Goal 1: "No Poverty".
Social forces, such as gender, disability, race and ethnicity, can exacerbate issues of poverty, with women, children and minorities frequently bearing unequal burdens of poverty. Moreover, impoverished individuals are more vulnerable to the effects of other social issues, such as the environmental effects of industry or the impacts of climate change or other natural disasters or extreme weather events. Poverty can also make other social problems worse; economic pressures on impoverished communities frequently play a part in deforestation, biodiversity loss and ethnic conflict. For this reason, the UN's Sustainable Development Goals and other international policy programs, such as the international recovery from COVID-19, emphasize the connection of poverty alleviation with other societal goals.
Definitions and etymology
The word poverty comes from the old (Norman) French word poverté (Modern French: pauvreté), from Latin paupertās, from pauper (poor).
There are several definitions of poverty depending on the context in which it is placed, usually referencing a state or condition in which a person or community lacks the financial resources and essentials for a certain standard of living.
United Nations: Fundamentally, poverty is a denial of choices and opportunities, a violation of human dignity. It means lack of basic capacity to participate effectively in society. It means not having enough to feed and clothe a family, not having a school or clinic to go to, not having the land on which to grow one's food or a job to earn one's living, not having access to credit. It means insecurity, powerlessness and exclusion of individuals, households and communities. It means susceptibility to violence, and it often implies living in marginal or fragile environments, without access to clean water or sanitation.
World Bank: Poverty is pronounced deprivation in well-being, and comprises many dimensions. It includes low incomes and the inability to acquire the basic goods and services necessary for survival with dignity. Poverty also encompasses low levels of health and education, poor access to clean water and sanitation, inadequate physical security, lack of voice, and insufficient capacity and opportunity to better one's life.
European Union (EU): The European Union's definition of poverty is significantly different from definitions in other parts of the world, and consequently policy measures introduced to combat poverty in EU countries also differ from measures in other nations. Poverty is measured in relation to the distribution of income in each member country using relative income poverty lines. Relative-income poverty rates in the EU are compiled by Eurostat, which is in charge of coordinating, gathering, and disseminating member country statistics using European Union Survey of Income and Living Conditions (EU-SILC) surveys.
Measuring poverty
Absolute poverty
Absolute poverty, often synonymous with extreme poverty or abject poverty, refers to a set standard which is consistent over time and between countries. This set standard usually refers to "a condition characterized by severe deprivation of basic human needs, including food, safe drinking water, sanitation facilities, health, shelter, education and information. It depends not only on income but also on access to services." Having an income below the poverty line, defined as the income needed to purchase basic needs, is also referred to as primary poverty.
The "dollar a day" poverty line was first introduced in 1990 as a measure to meet such standards of living. For nations that do not use the US dollar as currency, "dollar a day" does not translate to living a day on the equivalent amount of local currency as determined by the exchange rate. Rather, it is determined by the purchasing power parity (PPP) rate, which considers how much local currency is needed to buy the same things that a dollar could buy in the United States. Usually, this translates to less local currency than the exchange rate would suggest, as the United States is a relatively more expensive country.
From 1993 through 2005, the World Bank defined absolute poverty as $1.08 a day on such a purchasing power parity basis, after adjusting for inflation to the 1993 US dollar; in 2008 the line was updated to $1.25 a day (equivalent to $1.00 a day in 1996 US prices), and in 2015 it was updated to living on less than US$1.90 per day, with moderate poverty defined as less than $2 or $5 a day. Similarly, ultra-poverty is defined by a 2007 report issued by the International Food Policy Research Institute as living on less than 54 cents per day. The poverty line threshold of $1.90 per day, as set by the World Bank, is controversial. Each nation has its own threshold for the absolute poverty line; in the United States, for example, the absolute poverty line was US$15.15 per day in 2010 (US$22,000 per year for a family of four), while in India it was US$1.0 per day and in China US$0.55 per day, each on a PPP basis in 2010. These different poverty lines make data comparison between each nation's official reports qualitatively difficult. Some scholars argue that the World Bank method sets the bar too high, others argue it is too low.
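As a rough illustration of the PPP conversion described above, the following minimal Python sketch expresses an international poverty line in a hypothetical local currency using an assumed PPP conversion factor rather than the market exchange rate. All figures are invented for illustration and are not real World Bank data.

# Minimal sketch, assuming invented figures for a hypothetical country.
line_usd = 1.90       # international poverty line, PPP dollars per day
ppp_factor = 18.0     # assumed: local currency units with the purchasing power of US$1
market_rate = 70.0    # assumed: local currency units per US$1 at the market exchange rate

# The line in local currency is set with the PPP factor...
line_local_ppp = line_usd * ppp_factor   # 34.2 local units per day
# ...whereas a naive market-rate conversion would overstate it.
line_local_fx = line_usd * market_rate   # 133.0 local units per day

print(line_local_ppp, line_local_fx)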
There is disagreement among experts as to what would be considered a realistic poverty rate, with one considering it "an inaccurately measured and arbitrary cut off". Some contend that a higher poverty line is needed, such as a minimum of $7.40 or even $10 to $15 a day. They argue that these levels would better reflect the cost of basic needs and normal life expectancy.
One estimate places the true scale of poverty much higher than the World Bank's, with an estimated 4.3 billion people (59% of the world's population) living on less than $5 a day and unable to meet basic needs adequately. Philip Alston, a UN special rapporteur on extreme poverty and human rights, stated that the World Bank's international poverty line of $1.90 a day is fundamentally flawed and has allowed for "self congratulatory" triumphalism in the fight against extreme global poverty, which he asserts is "completely off track"; he notes that nearly half of the global population, or 3.4 billion, lives on less than $5.50 a day, and this number has barely moved since 1990. Still others suggest that the poverty line misleads, as it measures everyone below it the same, when in reality someone living on $1.20 per day is in a different state of poverty than someone living on $0.20 per day.
Other measures of absolute poverty without using a certain dollar amount include the standard defined as receiving less than 80% of minimum caloric intake whilst spending more than 80% of income on food, sometimes called ultra-poverty.
Relative poverty
Relative poverty views poverty as socially defined and dependent on social context. It is argued that the needs considered fundamental are not an objective measure and could change with the customs of society. For example, a person who cannot afford housing better than a small tent in an open field would be said to live in relative poverty if almost everyone else in that area lives in modern brick homes, but not if everyone else also lives in small tents in open fields (for example, in a nomadic tribe). Since richer nations have lower levels of absolute poverty, relative poverty is considered the "most useful measure for ascertaining poverty rates in wealthy developed nations" and is the "most prominent and most-quoted of the EU social inclusion indicators".
Usually, relative poverty is measured as the percentage of the population with income less than some fixed proportion of median income, i.e. the percentage of people whose household income falls below the poverty line. The main poverty line used in the OECD and the European Union is based on "economic distance", a level of income set at 60% of the median household income. The United States federal government instead typically sets its line at three times the cost of an adequate meal.
There are several other income inequality metrics, for example the Gini coefficient or the Theil index.
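To make the measures above concrete, the following Python sketch computes a relative poverty rate (the share of people below 60% of the median income) and a Gini coefficient from a list of incomes. The income figures are invented for illustration, not real survey data.

# Minimal sketch with made-up incomes; not real survey data.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def relative_poverty_rate(incomes, fraction=0.60):
    # Share of people whose income falls below 60% of the median.
    line = fraction * median(incomes)
    return sum(1 for x in incomes if x < line) / len(incomes)

def gini(incomes):
    # Mean absolute difference form: G = sum|xi - xj| / (2 * n^2 * mean).
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(a - b) for a in incomes for b in incomes)
    return diff_sum / (2 * n * n * mean)

incomes = [4000, 9000, 15000, 22000, 30000, 31000, 40000, 55000, 80000, 140000]
print(relative_poverty_rate(incomes))  # 0.3: three of ten fall below 60% of the median
print(round(gini(incomes), 3))         # 0.464 for this skewed distribution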
Other aspects
Rather than by income, poverty is also measured through individual basic needs at a given time. Life expectancy has greatly increased in the developing world since World War II and is starting to close the gap with the developed world. Child mortality has decreased in every developing region of the world. The proportion of the world's population living in countries where the daily per-capita supply of food energy is less than 9,200 kilojoules (2,200 kilocalories) decreased from 56% in the mid-1960s to below 10% by the 1990s. Similar trends can be observed for literacy, access to clean water and electricity, and basic consumer items.
Poverty may also be understood as an aspect of unequal social status and inequitable social relationships, experienced as social exclusion, dependency, and diminished capacity to participate, or to develop meaningful connections with other people in society. Such social exclusion can be minimized through strengthened connections with the mainstream, such as through the provision of relational care to those who are experiencing poverty. The World Bank's "Voices of the Poor", based on research with over 20,000 poor people in 23 countries, identifies a range of factors which poor people identify as part of poverty. These include abuse by those in power, dis-empowering institutions, excluded locations, gender relationships, lack of security, limited capabilities, physical limitations, precarious livelihoods, problems in social relationships, weak community organizations and discrimination. Analysis of social aspects of poverty links conditions of scarcity to aspects of the distribution of resources and power in a society and recognizes that poverty may be a function of the diminished "capability" of people to live the kinds of lives they value. The social aspects of poverty may include lack of access to information, education, health care, social capital or political power.
Relational poverty is the idea that societal poverty exists if there is a lack of human relationships. Relational poverty can be the result of a lost contact number, lack of phone ownership, isolation, or deliberate severing of ties with an individual or community. Relational poverty is also understood "by the social institutions that organize those relationships...poverty is importantly the result of the different terms and conditions on which people are included in social life".
In the United Kingdom, the second Cameron ministry came under attack for its redefinition of poverty; poverty is no longer classified by a family's income, but by whether a family is in work or not. Considering that two-thirds of people who found work were accepting wages below the living wage (according to the Joseph Rowntree Foundation), this has been criticised by anti-poverty campaigners as an unrealistic view of poverty in the United Kingdom.
Secondary poverty
Secondary poverty refers to those who earn enough income not to be impoverished, but who spend it on unnecessary pleasures, such as alcoholic beverages, thus placing them below the poverty line in practice. In 18th- and 19th-century Great Britain, the practice of temperance among Methodists, as well as their rejection of gambling, allowed them to eliminate secondary poverty and accumulate capital. Factors that contribute to secondary poverty include, but are not limited to, alcohol, gambling, tobacco and drugs.
Variability
Poverty levels are snapshot pictures in time that omit the transitional dynamics between levels. Mobility statistics supply additional information about the fraction who leave the poverty level. For example, one study finds that in a sixteen-year period (1975 to 1991 in the US) only 5% of those in the lower fifth of the income level were still at that level, while 95% transitioned to a higher income category. Poverty levels can remain the same while those who rise out of poverty are replaced by others. The transient poor and chronic poor differ in each society. In a nine-year period ending in 2005 for the US, 50% of the poorest quintile transitioned to a higher quintile.
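As a toy illustration of the mobility statistics described above, the Python sketch below takes hypothetical panel data (each person's income quintile at the start and end of a period, 1 being the poorest) and computes the fraction of the initial bottom quintile that moved up. The data are invented for illustration only.

# Toy panel of (start_quintile, end_quintile) pairs per person; invented data.
panel = [(1, 1), (1, 2), (1, 3), (1, 2), (2, 2), (3, 4), (1, 4), (5, 5)]

bottom_start = [p for p in panel if p[0] == 1]   # people who began in the bottom quintile
moved_up = [p for p in bottom_start if p[1] > 1] # of those, the ones who ended higher
print(len(moved_up) / len(bottom_start))         # 0.8: four of five left the bottom quintile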
Global prevalence
According to Chen and Ravallion, about 1.76 billion people in the developing world lived above $1.25 per day and 1.9 billion people lived below $1.25 per day in 1981. In 2005, about 4.09 billion people in the developing world lived above $1.25 per day and 1.4 billion people lived below $1.25 per day (both 1981 and 2005 data are on an inflation-adjusted basis). The share of the world's population living in absolute poverty fell from 43% in 1981 to 14% in 2011. The absolute number of people in poverty fell from 1.95 billion in 1981 to 1.01 billion in 2011. The economist Max Roser estimates that the number of people in poverty is therefore roughly the same as 200 years ago, since the world population was just over 1 billion in 1820 and the majority (84% to 94%) of the world population was then living in poverty. According to one study, the number of people worldwide living in absolute poverty fell from 1.18 billion in 1950 to 1.04 billion in 1977. According to another study, the number of people worldwide estimated to be starving fell from almost 920 million in 1971 to below 797 million in 1997. The proportion of the developing world's population living in extreme economic poverty fell from 28% in 1990 to 21% in 2001. Most of this improvement has occurred in East and South Asia.
In 2012 it was estimated that, using a poverty line of $1.25 a day, 1.2 billion people lived in poverty. Given the current economic model, built on GDP, it would take 100 years to bring the world's poorest up to the poverty line of $1.25 a day. UNICEF estimates half the world's children (or 1.1 billion) live in poverty. The World Bank forecast in 2015 that 702.1 million people were living in extreme poverty, down from 1.75 billion in 1990. Extreme poverty is observed in all parts of the world, including developed economies. Of the 2015 figure, about 347.1 million people (35.2%) lived in Sub-Saharan Africa and 231.3 million (13.5%) lived in South Asia. According to the World Bank, between 1990 and 2015 the percentage of the world's population living in extreme poverty fell from 37.1% to 9.6%, falling below 10% for the first time. During the 2013 to 2015 period, the World Bank reported that extreme poverty fell from 11% to 10%; however, it also noted that the rate of decline had slowed by nearly half from the 25-year average, with parts of sub-Saharan Africa returning to early-2000s levels. The World Bank attributed this slowdown to increasing violence following the Arab Spring, population increases in Sub-Saharan Africa, and general African inflationary pressures and economic malaise. Many wealthy nations have seen an increase in relative poverty rates ever since the Great Recession, in particular among children from impoverished families who often reside in substandard housing and find educational opportunities out of reach. It has been argued by some academics that the neoliberal policies promoted by global financial institutions such as the IMF and the World Bank are actually exacerbating both inequality and poverty.
In East Asia the World Bank reported that "The poverty headcount rate at the $2-a-day level is estimated to have fallen to about 27 percent [in 2007], down from 29.5 percent in 2006 and 69 percent in 1990." The People's Republic of China accounts for over three quarters of global poverty reduction from 1990 to 2005, which according to the World Bank is "historically unprecedented".
China accounted for nearly half of all extreme poverty in 1990.
In Sub-Saharan Africa extreme poverty went up from 41% in 1981 to 46% in 2001, which combined with a growing population increased the number of people living in extreme poverty from 231 million to 318 million. Statistics from 2018 show that the population living in extreme conditions declined by more than 1 billion over the preceding 25 years. According to a report published by the World Bank on 19 September 2018, the number of people in extreme poverty worldwide had fallen below 750 million.
In the early 1990s some of the transition economies of Central and Eastern Europe and Central Asia experienced a sharp drop in income. The collapse of the Soviet Union resulted in large declines in GDP per capita, of about 30 to 35% between 1990 and the trough year of 1998 (when it was at its minimum). As a result, poverty rates tripled, excess mortality increased, and life expectancy declined. By 1999, during the peak of the poverty crisis, 191 million people were living on less than $5.50 a day. In subsequent years, as per capita incomes recovered, the poverty rate dropped from 31.4% of the population to 19.6%. The average post-communist country had returned to 1989 levels of per-capita GDP by 2005, although as of 2015 some are still far behind that. According to the World Bank, in 2014 around 80 million people were still living on less than $5.00 a day.
World Bank data shows that the percentage of the population living in households with consumption or income per person below the poverty line has decreased in each region of the world except the Middle East and North Africa since 1990:
Characteristics
The effects of poverty may also be causes, as listed above, thus creating a "poverty cycle" operating across multiple levels: individual, local, national and global.
Health
One-third of deaths around the world (some 18 million people a year, or 50,000 per day) are due to poverty-related causes. People living in developing nations, among them women and children, are over-represented among the global poor and these effects of severe poverty. Those living in poverty suffer disproportionately from hunger or even starvation and disease, as well as lower life expectancy. According to the World Health Organization, hunger and malnutrition are the single gravest threats to the world's public health, and malnutrition is by far the biggest contributor to child mortality, present in half of all cases.
Almost 90% of maternal deaths during childbirth occur in Asia and sub-Saharan Africa, compared to less than 1% in the developed world. Those who live in poverty have also been shown to have a far greater likelihood of having or incurring a disability within their lifetime. Infectious diseases such as malaria and tuberculosis can perpetuate poverty by diverting health and economic resources from investment and productivity; malaria decreases GDP growth by up to 1.3% in some developing nations, and AIDS decreases African growth by 0.3–1.5% annually.
Studies have shown that poverty impedes cognitive function, although some of these findings could not be replicated in follow-up studies. One hypothesised mechanism is that financial worries put a severe burden on one's mental resources so that they are no longer fully available for solving complicated problems. The reduced capability for problem solving can lead to suboptimal decisions and further perpetuate poverty. Many other pathways from poverty to compromised cognitive capacities have been noted, from poor nutrition and environmental toxins to the effects of stress on parenting behavior, all of which lead to suboptimal psychological development. Neuroscientists have documented the impact of poverty on brain structure and function throughout the lifespan.
Infectious diseases continue to blight the lives of the poor across the world. 36.8 million people are living with HIV/AIDS, with 954,492 deaths in 2017.
Poor people are often more prone to severe diseases due to the lack of health care and to living in non-optimal conditions. Among the poor, girls tend to suffer even more due to gender discrimination. Economic stability is paramount in a poor household; otherwise its members go into an endless loop of negative income while trying to treat diseases. Often when a person in a poor household falls ill, it is up to the family members to take care of them, due to limited access to health care and lack of health insurance. The household members often have to give up their income or stop seeking further education to tend to the sick member. There is a greater opportunity cost imposed on the poor to tend to someone compared to someone with better financial stability.
Substance abuse is a further burden: the poor typically spend about 2% of their income educating their children but larger percentages on alcohol and tobacco (for example, 6% in Indonesia and 8% in Mexico).
Hunger
Rises in the cost of living make poor people less able to afford items. Poor people spend a greater portion of their budgets on food than wealthy people. As a result, poor households and those near the poverty threshold can be particularly vulnerable to increases in food prices. For example, in late 2007 increases in the price of grains led to food riots in some countries. The World Bank warned that 100 million people were at risk of sinking deeper into poverty. Threats to the supply of food may also be caused by drought and the water crisis. Intensive farming often leads to a vicious cycle of exhaustion of soil fertility and decline of agricultural yields. Approximately 40% of the world's agricultural land is seriously degraded. In Africa, if current trends of soil degradation continue, the continent might be able to feed just 25% of its population by 2025, according to the United Nations University's Ghana-based Institute for Natural Resources in Africa. Every year nearly 11 million children living in poverty die before their fifth birthday. 1.02 billion people go to bed hungry every night. According to the Global Hunger Index, Sub-Saharan Africa had the highest child malnutrition rate of the world's regions over the 2001–2006 period.
Mental health
A psychological study was conducted by four scientists during the inaugural Convention of Psychological Science. The results indicated that people who struggle with financial stability or fall under low socioeconomic status (SES) tend to perform worse cognitively due to the external pressure imposed upon them. The research found that stressors such as low income, inadequate health care, discrimination, and exposure to criminal activity all contribute to mental disorders. This study also found that children exposed to poverty-stricken environments have slower cognitive processing. Children are seen to perform better under the care of their parents, and tend to adopt spoken language at a younger age. Since poverty in childhood is more harmful than it is in adulthood, children in poor households tend to fall behind in certain cognitive abilities compared to children from average families.
For a child to grow up emotionally healthy, children under three need "A strong, reliable primary caregiver who provides consistent and unconditional love, guidance, and support. Safe, predictable, stable environments. Ten to 20 hours each week of harmonious, reciprocal interactions. This process, known as attunement, is most crucial during the first 6–24 months of infants' lives and helps them develop a wider range of healthy emotions, including gratitude, forgiveness, and empathy. Enrichment through personalized, increasingly complex activities". In one survey, 67% of children from disadvantaged inner cities said they had witnessed a serious assault, and 33% reported witnessing a homicide. 51% of fifth graders from New Orleans (median income for a household: $27,133) have been found to be victims of violence, compared to 32% in Washington, DC (mean income for a household: $40,127).
Studies have shown that poverty changes the personalities of children who live in it. The Great Smoky Mountains Study was a ten-year study that demonstrated this. During the study, about one-quarter of the families saw a dramatic and unexpected increase in income. The study showed that among these children, instances of behavioral and emotional disorders decreased, and conscientiousness and agreeableness increased.
Education
Research has found that there is a high risk of educational underachievement for children who are from low-income housing circumstances. This is often a process that begins in primary school. Instruction in the US educational system, as well as in most other countries, tends to be geared towards those students who come from more advantaged backgrounds. As a result, children in poverty are at a higher risk than advantaged children for retention in their grade, deleterious special placements during school hours, and not completing their high school education. Advantage breeds advantage.
There are many explanations for why students tend to drop out of school. One is the conditions in which they attend school. Schools in poverty-stricken areas have conditions that hinder children from learning in a safe environment. Researchers have developed a name for areas like this: an urban war zone is a poor, crime-laden district in which deteriorated, violent, even warlike conditions and underfunded, largely ineffective schools promote inferior academic performance, including irregular attendance and disruptive or non-compliant classroom behavior. Because of poverty, "Students from low-income families are 2.4 times more likely to drop out than middle-income kids, and over 10 times more likely than high-income peers to drop out."
For children with low resources, the risk factors are similar to others such as juvenile delinquency rates, higher levels of teenage pregnancy, and economic dependency upon their low-income parent or parents.
Families and societies that invest little in the education and development of less fortunate children end up with less favorable results for those children, who face a life of reduced employment prospects and low wages. Higher rates of early childbearing, with all the connected risks to family, health and well-being, are major issues to address, since education from preschool to high school is identifiably meaningful in a life.
Poverty often drastically affects children's success in school. A child's "home activities, preferences, mannerisms" must align with the world, and when they do not, students are at a disadvantage in the school and, most importantly, the classroom. Therefore, it is safe to state that children who live at or below the poverty level will have far less success educationally than children who live above the poverty line. Poor children have a great deal less healthcare, and this ultimately results in many absences from school. Additionally, poor children are much more likely to suffer from hunger, fatigue, irritability, headaches, ear infections, flu, and colds. These illnesses could potentially restrict a student's focus and concentration.
In general, the interaction of gender with poverty or location tends to work to the disadvantage of girls in poorer countries, with low completion rates and social expectations that they marry early, and to the disadvantage of boys in richer countries, with high completion rates but social expectations that they enter the labour force early. At the primary education level, most countries with a completion rate below 60% exhibit gender disparity at girls' expense, particularly for poor and rural girls. In Mauritania, the adjusted gender parity index is 0.86 on average, but only 0.63 for the poorest 20%, while there is parity among the richest 20%. In countries with completion rates between 60% and 80%, gender disparity is generally smaller, but disparity at the expense of poor girls is especially marked in Cameroon, Nigeria and Yemen. Exceptions in the opposite direction are observed in countries with pastoralist economies that rely on boys' labour, such as the Kingdom of Eswatini, Lesotho and Namibia.
Shelter
The geographic concentration of poverty is argued to be a factor in entrenching poverty. William J. Wilson's "concentration and isolation" hypothesis states that the economic difficulties of the very poorest African Americans are compounded by the fact that as the better-off African Americans move out, the poorest are more and more concentrated, having only other very poor people as neighbors. This concentration causes social isolation, Wilson suggests, because the very poor are now isolated from access to the job networks, role models, institutions, and other connections that might help them escape poverty. Gentrification means converting an aging neighborhood into a more affluent one, as by remodeling homes. Landlords then increase rent on newly renovated real estate; poor people cannot afford to pay high rent and may need to leave their neighborhood to find affordable housing. The poor also get more access to income and services, and studies suggest poor residents living in gentrifying neighbourhoods are actually less likely to move than poor residents of non-gentrifying areas.
Poverty increases the risk of homelessness. Slum-dwellers, who make up a third of the world's urban population, live in poverty no better, if not worse, than rural people, who are the traditional focus of poverty in the developing world, according to a report by the United Nations.
There are over 100 million street children worldwide. Most of the children living in institutions around the world have a surviving parent or close relative, and they most commonly entered orphanages because of poverty. It is speculated that, flush with money, for-profit orphanages are increasing and push for children to join even though demographic data show that even the poorest extended families usually take in children whose parents have died. Many child advocates maintain that this can harm children's development by separating them from their families, and that it would be more effective and cheaper to aid close relatives who want to take in the orphans.
Utilities
Water and sanitation
As of 2012, 2.5 billion people lack access to sanitation services and 15% practice open defecation. The most noteworthy example is Bangladesh, which had half the GDP per capita of India but has a lower mortality from diarrhea than India or the world average, with diarrhea deaths declining by 90% since the 1990s. While providing latrines is a challenge, people often do not use them even when available. By strategically providing pit latrines to the poorest, charities in Bangladesh sparked a cultural change, as those better off came to perceive not using one as an issue of status. The vast majority of the latrines were then built not by charities but by villagers themselves.
Water utility subsidies tend to subsidize water consumption by those connected to the supply grid, which is typically skewed towards the richer and urban segment of the population and those outside informal housing. As a result of heavy consumption subsidies, the price of water decreases to the extent that, on average, only 30% of the supply costs in developing countries is covered.
This results in a lack of incentive to maintain delivery systems, leading to annual losses from leaks that would be enough to supply 200 million people.
This also leads to a lack of incentive to invest in expanding the network, leaving much of the poor population unconnected. Instead, the poor buy water from water vendors for, on average, about 5 to 16 times the metered price. However, subsidies for laying new connections to the network, rather than for consumption, have shown more promise for the poor.
Energy
Prejudice and exploitation
Cultural factors such as discrimination of various kinds can negatively affect productivity; these include age discrimination, stereotyping, discrimination against people with physical disability, gender discrimination, racial discrimination, and caste discrimination. Children are more than twice as likely to live in poverty as adults. Women are the group suffering from the highest rate of poverty after children, in what is referred to as the feminization of poverty. In addition, the fact that women are more likely to be caregivers, regardless of income level, to either the generations before or after them, exacerbates the burdens of their poverty. Those in poverty have increased chances of incurring a disability, which leads to a cycle where disability and poverty are mutually reinforcing.
Max Weber and some schools of modernization theory suggest that cultural values could affect economic success. However, researchers have gathered evidence suggesting that values are not as deeply ingrained and that changing economic opportunities explain most of the movement into and out of poverty, as opposed to shifts in values. A 2018 report on poverty in the United States by UN special rapporteur Philip Alston asserts that caricatured narratives about the rich and the poor (that "the rich are industrious, entrepreneurial, patriotic and the drivers of economic success" while "the poor are wasters, losers and scammers") are largely inaccurate, as "the poor are overwhelmingly those born into poverty, or those thrust there by circumstances largely beyond their control, such as physical or mental disabilities, divorce, family breakdown, illness, old age, unlivable wages or discrimination in the job market." Societal perception of people experiencing economic difficulty has historically appeared as a conceptual dichotomy: the "good" poor (people who are physically impaired, disabled, the "ill and incurable," the elderly, pregnant women, children) vs. the "bad" poor (able-bodied, "valid" adults, most often male).
According to experts, many women become victims of trafficking, the most common form of which is prostitution, as a means of survival and out of economic desperation. Deterioration of living conditions can often compel children to abandon school to contribute to the family income, putting them at risk of being exploited. For example, in Zimbabwe, a number of girls are turning to sex in return for food to survive because of increasing poverty. According to studies, as poverty decreases there will be fewer and fewer instances of violence.
Poverty reduction
Various poverty reduction strategies are broadly categorized based on whether they make more of the basic human needs available or whether they increase the disposable income needed to purchase those needs. Some strategies, such as building roads, can do both: they bring access to basic needs such as fertilizer or healthcare from urban areas, and they increase incomes by giving better access to urban markets.
In 2015 all UN Member States adopted the 17 Sustainable Development Goals as part of the 2030 Agenda for Sustainable Development. Goal 1 is to "end poverty in all its forms everywhere". It aims to eliminate extreme poverty, currently measured as living on less than $1.25 a day, for all people, and to reduce by at least half the total number of men, women, and children living in poverty. In addition, social protection systems must be established at the national level and equal access to economic resources must be ensured. Strategies have to be developed at the national, regional and international levels to support the eradication of poverty.
Increasing the supply of basic needs
Food and other goods
Agricultural technologies such as nitrogen fertilizers, pesticides, new seed varieties and new irrigation methods have dramatically reduced food shortages in modern times by boosting yields past previous constraints. Goal 2 of the Sustainable Development Goals is the elimination of hunger and undernutrition by 2030.
Before the Industrial Revolution, poverty had been mostly accepted as inevitable, as economies produced little, making wealth scarce. Geoffrey Parker wrote that "In Antwerp and Lyon, two of the largest cities in western Europe, by 1600 three-quarters of the total population were too poor to pay taxes, and therefore likely to need relief in times of crisis." The initial industrial revolution led to high economic growth and eliminated mass absolute poverty in what is now considered the developed world. Mass production of goods in places such as rapidly industrializing China has made what were once considered luxuries, such as vehicles and computers, inexpensive and thus accessible to many who were otherwise too poor to afford them.
Even with new products, such as better seeds, or greater volumes of them, such as industrial production, the poor still require access to these products. Improving road and transportation infrastructure helps solve this major bottleneck. In Africa, it costs more to move fertilizer from an African seaport 100 kilometres (60 mi) inland than to ship it from the United States to Africa because of sparse, low-quality roads, leading to fertilizer costs two to six times the world average. Microfranchising models, such as door-to-door distributors who earn commission-based income or Coca-Cola's successful distribution system, are used to disseminate basic needs to remote areas at below-market prices.
Health care and education
Nations do not necessarily need wealth to gain health. For example, Sri Lanka had a maternal mortality rate of 2% in the 1930s, higher than any nation today. It reduced it to 0.5–0.6% in the 1950s and to 0.06% today, while spending less each year on maternal health, because it learned what worked and what did not. Knowledge on the cost-effectiveness of healthcare interventions can be elusive, and educational measures have been made to disseminate what works, such as the Copenhagen Consensus. Cheap water filters and promoting hand washing are some of the most cost-effective health interventions and can cut deaths from diarrhea and pneumonia.
Strategies to provide education cost-effectively include deworming children, which costs about 50 cents per child per year and reduces non-attendance from anemia, illness and malnutrition, while being only one twenty-fifth as expensive as increasing school attendance by constructing schools. Schoolgirl absenteeism could be cut in half by simply providing free sanitary towels. Fortification with micronutrients was ranked the most cost-effective aid strategy by the Copenhagen Consensus. For example, iodised salt costs 2 to 3 cents per person a year, while even moderate iodine deficiency in pregnancy shaves off 10 to 15 IQ points. Paying for school meals is argued to be an efficient strategy for increasing school enrollment, reducing absenteeism and increasing student attention.
Desirable actions such as enrolling children in school or receiving vaccinations can be encouraged by a form of aid known as conditional cash transfers. In Mexico, for example, dropout rates of 16- to 19-year-olds in rural areas dropped by 20% and children gained half an inch in height. Initial fears that the program would encourage families to stay at home rather than work to collect benefits have proven to be unfounded. Instead, there is less excuse for neglectful behavior as, for example, children stopped begging on the streets instead of going to school because it could result in suspension from the program.
Housing
The right to housing is a human right. Policy incentives such as Housing First emphasize that other basic needs are easier to meet when housing is guaranteed first.
Removing constraints on government services
Government revenue can be diverted away from basic services by corruption. Funds from aid and natural resources are often diverted by government officials into overseas banks that insist on bank secrecy, where they are laundered instead of being spent on the poor. A Global Witness report asked for more action from Western banks, as they have proved capable of stanching the flow of funds linked to terrorism.
Illicit capital flight, such as corporate tax avoidance, from the developing world is estimated at ten times the size of the aid it receives and twice the debt service it pays, with one estimate that most of Africa would be developed if the taxes owed were paid. About 60 per cent of illicit capital flight from Africa is from transfer mispricing, where a subsidiary in a developing nation sells to another subsidiary or shell company in a tax haven at an artificially low price to pay less tax. An African Union report estimates that about 30% of sub-Saharan Africa's GDP has been moved to tax havens. Solutions include corporate "country-by-country reporting", where corporations disclose activities in each country and thereby prohibit the use of tax havens where no effective economic activity occurs.
Developing countries' debt service to banks and governments from richer countries can constrain government spending on the poor. For example, in 1997 Zambia spent 40% of its total budget to repay foreign debt and only 7% on basic state services. One of the proposed ways to help poor countries has been debt relief. Zambia began offering services such as free health care, even while overwhelming the health care infrastructure, because of savings that resulted from a 2005 round of debt relief. Since that round of debt relief, private creditors have accounted for an increasing share of poor countries' debt service obligations. This complicated efforts to renegotiate easier terms for borrowers during crises such as the COVID-19 pandemic, because the multiple private creditors involved say they have a fiduciary obligation to their clients, such as pension funds.
The World Bank and the International Monetary Fund, as primary holders of developing countries' debt, attach structural adjustment conditionalities to loans which are generally geared toward loan repayment, with austerity measures such as the elimination of state subsidies and the privatization of state services. For example, the World Bank presses poor nations to eliminate subsidies for fertilizer even while many farmers cannot afford them at market prices. In Malawi, almost 5 million of its 13 million people used to need emergency food aid, but after the government changed policy and subsidies for fertilizer and seed were introduced, farmers produced record-breaking corn harvests in 2006 and 2007 as Malawi became a major food exporter. A major proportion of aid from donor nations is tied, mandating that a receiving nation spend on products and expertise originating only from the donor country. US law requires food aid be spent on buying food at home, instead of where the hungry live, and, as a result, half of what is spent is used on transport.
Distressed securities funds, also known as vulture funds, buy up the debt of poor nations cheaply and then sue countries for the full value of the debt plus interest, which can be ten or 100 times what they paid. They may pursue any companies which do business with their target country to force them to pay the fund instead. Considerable resources are diverted to costly court cases.
For example, a court in Jersey ordered the Democratic Republic of the Congo to pay an American speculator $100 million in 2010. The UK, the Isle of Man and Jersey have since banned such payments.
Reversing brain drain
The loss of basic needs providers emigrating from impoverished countries has a damaging effect. As of 2004, there were more Ethiopia-trained doctors living in Chicago than in Ethiopia. Proposals to mitigate the problem include compulsory government service for graduates of public medical and nursing schools and promoting medical tourism so that health care personnel have more incentive to practice in their home countries. In Uganda, where it is very easy for doctors to emigrate, only 69% of health care jobs were filled, as many doctors sought jobs in other countries, leaving behind too few adequately skilled doctors.
Preventing overpopulation
Poverty and lack of access to birth control can lead to population increases that put pressure on local economies and access to resources, amplifying economic inequality and increasing poverty. Better education for both men and women, and more control over their lives, reduces population growth by enabling family planning. According to the United Nations Population Fund (UNFPA), those who receive better education can earn a living, thereby strengthening their economic security.
Increasing personal income
The following are strategies used or proposed to increase personal incomes among the poor. Raising farm incomes is described as the core of the antipoverty effort, as three-quarters of the poor today are farmers. Estimates show that growth in the agricultural productivity of small farmers is, on average, at least twice as effective in benefiting the poorest half of a country's population as growth generated in nonagricultural sectors.
Income grants
A guaranteed minimum income ensures that every citizen will be able to purchase a desired level of basic needs. A basic income (or negative income tax) is a system of social security that periodically provides each citizen, rich or poor, with a sum of money that is sufficient to live on. Studies of large cash-transfer programs in Ethiopia, Kenya, and Malawi show that the programs can be effective in increasing consumption, schooling, and nutrition, whether they are tied to conditions or not. Proponents argue that a basic income is more economically efficient than a minimum wage and unemployment benefits, as the minimum wage effectively imposes a high marginal tax on employers, causing losses in efficiency. In 1968, Paul Samuelson, John Kenneth Galbraith and another 1,200 economists signed a document calling for the US Congress to introduce a system of income guarantees. Winners of the Nobel Prize in Economics, with often diverse political convictions, who support a basic income include Herbert A. Simon, Friedrich Hayek, Robert Solow, Milton Friedman, Jan Tinbergen, James Tobin and James Meade.
Income grants are argued to be vastly more efficient in extending basic needs to the poor than subsidizing supplies, whose effectiveness in poverty alleviation is diluted by the non-poor who enjoy the same subsidized prices. With cars and other appliances, the wealthiest 20% of Egypt uses about 93% of the country's fuel subsidies. In some countries, fuel subsidies are a larger part of the budget than health and education. A 2008 study concluded that the money spent on in-kind transfers in India in a year could lift all of India's poor out of poverty for that year if transferred directly. The primary obstacle argued against direct cash transfers is the impracticality, for poor countries, of such large and direct transfers. In practice, payments determined by complex iris scanning are used by the war-torn Democratic Republic of the Congo and Afghanistan, while India is phasing out its fuel subsidies in favor of direct transfers. Additionally, the famine relief model increasingly used by aid groups calls for giving cash or cash vouchers to the hungry to pay local farmers, instead of buying food from donor countries as is often required by law, which wastes money on transport costs.
Economic freedoms
Corruption often leads to many civil services being treated by governments as employment agencies for loyal supporters, and so it could mean going through 20 procedures, paying $2,696 in fees, and waiting 82 business days to start a business in Bolivia, while in Canada it takes two days, two registration procedures, and $280 to do the same. Such costly barriers favor big firms at the expense of small enterprises, where most jobs are created. Often, businesses have to bribe government officials even for routine activities, which is, in effect, a tax on business. Noted reductions in poverty in recent decades have occurred in China and India, mostly as a result of the abandonment of collective farming in China and the ending of the central planning model known as the License Raj in India.
The World Bank concludes that governments and feudal elites extending to the poor the right to the land that they live on and use is key to reducing poverty, citing that land rights greatly increase poor people's wealth, in some cases doubling it. Although approaches varied, the World Bank said the key issues were security of tenure and ensuring that land transaction costs were low.
Greater access to markets brings more income to the poor. Road infrastructure has a direct impact on poverty. Additionally, migration from poorer countries resulted in $328 billion sent from richer to poorer countries in 2010, more than double the $120 billion in official aid flows from OECD members. In 2011, India got $52 billion from its diaspora, more than it took in foreign direct investment.
Financial services
Microloans, made famous by the Grameen Bank, are small amounts of money loaned to farmers or villagers, mostly women, who can then obtain physical capital to increase their economic rewards. However, microlending has been criticized, even by its founder, Muhammad Yunus, for making hyperprofits off the poor, and in India Arundhati Roy asserts that some 250,000 debt-ridden farmers have been driven to suicide.
Those in poverty place overwhelming importance on having a safe place to save money, much more so than on receiving loans. Additionally, a large part of microfinance loans are spent not on investments but on products that would usually be paid for by a checking or savings account. Microsavings are designed to make savings products available to the poor, who make small deposits. Mobile banking uses the wide availability of mobile phones to address the problem of the heavy regulation and costly maintenance of saving accounts. It usually involves a network of agents, mostly shopkeepers, who, instead of bank branches, take deposits in cash and translate these onto a virtual account on customers' phones. Cash transfers can be done between phones and issued back in cash with a small commission, making remittances safer.
Reversing wealth concentration
Oxfam, among others, has called for an international movement to end extreme wealth concentration, arguing that the concentration of resources in the hands of the top 1% depresses economic activity and makes life harder for everyone else, particularly those at the bottom of the economic ladder. Oxfam says that the gains of the world's billionaires in 2017, which amounted to $762 billion, were enough to end extreme global poverty seven times over.
Perspectives
Economic theories
The cause of poverty is a highly ideologically charged subject, as different causes point to different remedies. Very broadly speaking, the socialist tradition locates the roots of poverty in problems of distribution and in the use of the means of production as capital benefiting individuals, and calls for redistribution of wealth as the solution, whereas the neoliberal school of thought holds that creating conditions for profitable private investment is the solution. Neoliberal think tanks have received extensive funding, along with the ability to apply many of their ideas in highly indebted countries in the global South as a condition for receiving emergency loans from the International Monetary Fund.
The existence of inequality is in part due to a set of self-reinforcing behaviors that all together constitute one aspect of the cycle of poverty. These behaviors, in addition to unfavorable external circumstances, also explain the existence of the Matthew effect, which not only exacerbates existing inequality but is more likely to make it multigenerational. Widespread, multigenerational poverty is an important contributor to civil unrest and political instability. For example, Raghuram G. Rajan, former governor of the Reserve Bank of India and former chief economist at the International Monetary Fund, has blamed the ever-widening gulf between the rich and the poor, especially in the US, as one of the main fault lines which caused financial institutions to pump money into subprime mortgages, on political behest, as a palliative and not a remedy for poverty, causing the financial crisis of 2007–2009. In Rajan's view, the main cause of the increasing gap between high-income and low-income earners was lack of equal access to higher education for the latter.
Empirical research studying the impact of dynastic politics on provincial poverty levels found a positive correlation between dynastic politics and poverty: a higher proportion of dynastic politicians in power in a province leads to a higher poverty rate. There is significant evidence that these political dynasties use their political dominance over their respective regions to enrich themselves, using methods such as graft or outright bribery of legislators.
Many scholars and public intellectuals argue that, throughout most of human history, extreme poverty was the norm for roughly 90% of the population, with only the emergence of industrial capitalism in the 19th century lifting masses of people out of it. This narrative is advanced by, among others, Martin Ravallion, Nicholas Kristof, and Steven Pinker. Some academics, including Dylan Sullivan and Jason Hickel, have challenged this contemporary mainstream narrative on poverty, arguing that extreme poverty was not the norm throughout human history but emerged during "periods of severe social and economic dislocation," including high European feudalism and the apex of the Roman Empire, and that it expanded significantly after 1500 with the emergence of colonialism and the beginnings of capitalism, stating that "the expansion of the capitalist world-system caused a dramatic and prolonged process of impoverishment on a scale unparalleled in recorded history." Sullivan and Hickel assert that only with the rise of anti-colonial and socialist political movements in the 20th century did human welfare begin to see significant improvement.
Environmentalism
Important studies such as the Brundtland Report concluded that poverty causes environmental degradation, while other theories like environmentalism of the poor conclude that the global poor may be the most important force for sustainability. Either way, the poor suffer most from environmental degradation caused by reckless exploitation of natural resources by the rich. This unfair distribution of environmental burdens and benefits has generated the global environmental justice movement.
A report published in 2013 by the World Bank, with support from the Climate & Development Knowledge Network, found that climate change was likely to hinder future attempts to reduce poverty. The report presented the likely impacts of present day, 2 °C and 4 °C warming on agricultural production, water resources, coastal ecosystems and cities across Sub-Saharan Africa, South Asia and South East Asia. The impacts of a temperature rise of 2 °C included: regular food shortages in Sub-Saharan Africa; shifting rain patterns in South Asia leaving some parts under water and others without enough water for power generation, irrigation or drinking; degradation and loss of reefs in South East Asia, resulting in reduced fish stocks; and coastal communities and cities more vulnerable to increasingly violent storms </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is 5 and explain in detail? [/INST] 5 (five) is a number, numeral and digit. It is the natural number, and cardinal number, following 4 and preceding 6, and is a prime number. It has attained significance throughout history in part because typical humans have five digits on each hand.
In mathematics
5 is the third smallest prime number, and the second super-prime. It is the first safe prime, the first good prime, the first balanced prime, and the first of three known Wilson primes. Five is the second Fermat prime and the third Mersenne prime exponent, as well as the third Catalan number, and the third Sophie Germain prime. Notably, 5 is equal to the sum of the only consecutive primes, 2 + 3, and is the only number that is part of more than one pair of twin primes, (3, 5) and (5, 7). It is also a sexy prime with the fifth prime number and first prime repunit, 11. Five is the third factorial prime, an alternating factorial, and an Eisenstein prime with no imaginary part and real part of the form 3p − 1. In particular, five is the first congruent number, since it is the length of the hypotenuse of the smallest integer-sided right triangle.
Five is the second Fermat prime of the form 2^(2^n) + 1, and more generally the second Sierpiński number of the first kind, n^n + 1. There are a total of five known Fermat primes, which also include 3, 17, 257, and 65537. The sum of the first 3 Fermat primes, 3, 5 and 17, yields 25 or 5^2, while 257 is the 55th prime number. Combinations from these 5 Fermat primes generate 31 polygons with an odd number of sides that can be constructed purely with a compass and straight-edge, which includes the five-sided regular pentagon. Apropos, 31 is also equal to the sum of the maximum number of areas inside a circle that are formed from the sides and diagonals of the first five n-sided polygons, and equal to the maximum number of areas formed by a six-sided polygon, per Moser's circle problem.
The number 5 is the fifth Fibonacci number, being 2 plus 3. It is the only Fibonacci number that is equal to its position aside from 1, which is both the first and second Fibonacci numbers. Five is also a Pell number and a Markov number, appearing in solutions to the Markov Diophantine equation: (1, 2, 5), (1, 5, 13), (2, 5, 29), (5, 13, 194), (5, 29, 433), ... (OEIS: A030452 lists Markov numbers that appear in solutions where one of the other two terms is 5). Whereas 5 is unique in the Fibonacci sequence, in the Perrin sequence 5 is both the fifth and sixth Perrin numbers.
5 is the third Mersenne prime exponent of the form 2^n − 1, which yields 31: the prime index of the third Mersenne prime and second double Mersenne prime 127, as well as the third double Mersenne prime exponent for the number 2,147,483,647, which is the largest value that a signed 32-bit integer field can hold. There are only four known double Mersenne prime numbers, with a fifth candidate double Mersenne prime M_(M_61) = 2^(23058...93951) − 1 too large to compute with current computers. In a related sequence, the first 5 terms in the sequence of Catalan–Mersenne numbers M_(c_n) are the only known prime terms, with a sixth possible candidate on the order of 10^1037.7094. These prime sequences are conjectured to be prime up to a certain limit.
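These primality facts are easy to spot-check computationally. The following is a minimal illustrative Python sketch (not part of the source article), using plain trial division:
def is_prime(n):
    # Trial division; adequate for the small numbers tested here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True
# The five known Fermat primes F_n = 2**(2**n) + 1 for n = 0..4.
fermat = [2 ** (2 ** n) + 1 for n in range(5)]
print(fermat)                            # [3, 5, 17, 257, 65537]
print(all(is_prime(f) for f in fermat))  # True
# Mersenne prime exponents below 20; 5 is the third, giving 2**5 - 1 = 31.
print([p for p in range(2, 20) if is_prime(p) and is_prime(2 ** p - 1)])
# [2, 3, 5, 7, 13, 17, 19]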
Every odd number greater than 1 is the sum of at most five prime numbers, and every odd number greater than 5 is conjectured to be expressible as the sum of three prime numbers. Helfgott has provided a proof of the latter, also known as the odd Goldbach conjecture, which is already widely acknowledged by mathematicians even as it still undergoes peer review.
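The odd Goldbach statement can be brute-force checked for small numbers; a short illustrative sketch (a spot check, not a proof):
def is_prime(n):
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))
primes = [p for p in range(2, 100) if is_prime(p)]
ok = all(
    any(is_prime(n - p - q) for p in primes for q in primes if p + q < n)
    for n in range(7, 100, 2)
)
print(ok)  # True: every odd number from 7 to 99 is a sum of three primes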
The sums of the first five non-primes greater than zero, 1 + 4 + 6 + 8 + 9, and of the first five prime numbers, 2 + 3 + 5 + 7 + 11, both equal 28: the 7th triangular number and, like 6, a perfect number. The perfect numbers also include 496, the 31st triangular number, a perfect number of the form 2^(p−1)(2^p − 1) with a p of 5, by the Euclid–Euler theorem.
There are a total of five known unitary perfect numbers, which are numbers that are the sums of their positive proper unitary divisors. A sixth unitary perfect number, if discovered, would have at least nine odd prime factors.
Five is conjectured to be the only odd untouchable number, and if this is the case then five will be the only odd prime number that is not the base of an aliquot tree.
In figurate numbers, 5 is a pentagonal number, with the sequence of pentagonal numbers starting: 1, 5, 12, 22, 35, ...
5 is a centered tetrahedral number: 1, 5, 15, 35, 69, ... Every centered tetrahedral number with an index of 2, 3 or 4 modulo 5 is divisible by 5.
5 is a square pyramidal number: 1, 5, 14, 30, 55, ... The sum of the first four members is 50 while the fifth indexed member in the sequence is 55.
5 is a centered square number: 1, 5, 13, 25, 41, ... The fifth square number, 5^2 = 25, features in the proportions of the two smallest primitive Pythagorean triples, (3, 4, 5) and (5, 12, 13).
The factorial of five, 5! = 120, is the sum of the first fifteen non-zero positive integers, i.e. the 15th triangular number; fifteen in turn is the sum of the first five non-zero positive integers, the 5th triangular number. 35, which is the fourth or fifth pentagonal and tetrahedral number, is equal to the sum of the first five triangular numbers: 1, 3, 6, 10, 15.
5 is the value of the central cell of the only non-trivial normal magic square, also called the Lo Shu square. Its 3 × 3 array of squares has a magic constant M of 15, where the sums of its rows, columns, and diagonals are all equal to fifteen. 5 is also the value of the central cell of the only non-trivial order-3 normal magic hexagon, which is made of nineteen cells.
Polynomial equations of degree 4 and below can be solved with radicals, while quintic equations of degree 5 and higher cannot generally be so solved. This is the Abel–Ruffini theorem, and it is related to the fact that the symmetric group S_n is a solvable group for n ⩽ 4 and not solvable for n ⩾ 5.
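Two earlier claims lend themselves to quick verification: the perfect numbers of Euclid–Euler form quoted above and the Lo Shu magic constant (an illustrative sketch, not from the source):
def proper_divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)
for p in (2, 3, 5):
    n = 2 ** (p - 1) * (2 ** p - 1)
    print(p, n, proper_divisor_sum(n) == n)  # 6, 28 and 496 are all perfect
lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]
sums = ([sum(r) for r in lo_shu]
        + [sum(c) for c in zip(*lo_shu)]
        + [sum(lo_shu[i][i] for i in range(3)),
           sum(lo_shu[i][2 - i] for i in range(3))])
print(sums, lo_shu[1][1])  # all eight sums are 15; 5 sits in the central cell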
Euler's identity, e^(iπ) + 1 = 0, contains five essential numbers used widely in mathematics: Archimedes' constant π, Euler's number e, the imaginary number i, unity 1, and zero 0, which makes this formula a renowned example of beauty in mathematics.
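The identity can be confirmed numerically to floating-point precision; a small illustrative check:
import cmath
value = cmath.exp(1j * cmath.pi) + 1
print(value, abs(value) < 1e-12)  # about 1.22e-16j, True (zero up to rounding)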
In geometry
A pentagram, or five-pointed polygram, is the first proper star polygon, constructed from the diagonals of a regular pentagon as self-intersecting edges that are proportioned in the golden ratio, φ. Its internal geometry appears prominently in Penrose tilings, and it is a facet inside Kepler-Poinsot star polyhedra and Schläfli–Hess star polychora, represented by its Schläfli symbol {5/2}. A similar figure to the pentagram is a five-pointed simple isotoxal star ☆ without self-intersecting edges. Generally, star polytopes that are regular only exist in dimensions 2 ⩽ n < 5.
In graph theory, all graphs with 4 or fewer vertices are planar; however, there is a graph with 5 vertices that is not: K5, the complete graph with 5 vertices, where every pair of distinct vertices in a pentagon is joined by unique edges belonging to a pentagram. By Kuratowski's theorem, a finite graph is planar if and only if it does not contain a subgraph that is a subdivision of K5, or of the complete bipartite utility graph K3,3. A similar graph is the Petersen graph, which is strongly connected and also nonplanar. It is most easily described as the graph of a pentagram embedded inside a pentagon, with a total of 5 crossings, a girth of 5, and a Thue number of 5. The Petersen graph, which is also a distance-regular graph, is one of only 5 known connected vertex-transitive graphs with no Hamiltonian cycles. The automorphism group of the Petersen graph is the symmetric group S5 of order 120 = 5!.
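Several of these Petersen graph properties can be confirmed programmatically. A sketch assuming the third-party networkx library is installed; the girth helper is a standard BFS construction, not from the source:
import networkx as nx
G = nx.petersen_graph()
print(G.number_of_nodes(), G.number_of_edges())  # 10 15
print(nx.check_planarity(G)[0])                  # False: the graph is nonplanar
def girth(graph):
    # Exact girth of an undirected graph via BFS from every vertex.
    best = float("inf")
    for s in graph:
        dist, parent, queue = {s: 0}, {s: None}, [s]
        for u in queue:
            for v in graph[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    queue.append(v)
                elif parent[u] != v:
                    best = min(best, dist[u] + dist[v] + 1)
    return best
print(girth(G))  # 5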
The chromatic number of the plane is at least five, depending on the choice of set-theoretical axioms: the minimum number of colors required to color the plane such that no pair of points at a distance of 1 has the same color. Whereas the hexagonal Golomb graph and the regular hexagonal tiling generate chromatic numbers of 4 and 7, respectively, a chromatic coloring of 5 can be attained under a more complicated graph where multiple four-coloring Moser spindles are linked so that no monochromatic triples exist in any coloring of the overall graph, as that would generate an equilateral arrangement that tends toward a purely hexagonal structure.
The plane contains a total of five Bravais lattices, or arrays of points defined by discrete translation operations: hexagonal, oblique, rectangular, centered rectangular, and square lattices. The plane can also be tiled monohedrally with convex pentagons in fifteen different ways, three of which have Laves tilings as special cases.
Five points are needed to determine a conic section, in the same way that two points are needed to determine a line. A Veronese surface in the projective plane P^5 of a conic generalizes a linear condition for a point to be contained inside a conic.
There are 5 Platonic solids in three-dimensional space: the tetrahedron, cube, octahedron, dodecahedron, and icosahedron. The dodecahedron in particular contains pentagonal faces, while the icosahedron, its dual polyhedron, has a vertex figure that is a regular pentagon. There are also 5:
☆ Regular polyhedron compounds: the stella octangula, compound of five tetrahedra, compound of five cubes, compound of five octahedra, and compound of ten tetrahedra. Icosahedral symmetry I_h, of order 120, has a rotation subgroup isomorphic to the alternating group on 5 letters, A5, realized by actions on these uniform polyhedron compounds.
☆ Space-filling convex polyhedra: the triangular prism, hexagonal prism, cube, truncated octahedron, and gyrobifastigium. While the cube is the only Platonic solid that can tessellate space on its own, the truncated octahedron and the gyrobifastigium are the only Archimedean and Johnson solids, respectively, that can also tessellate space with their own copies.
☆ Cell-transitive parallelohedra: any parallelepiped, as well as the rhombic dodecahedron and elongated dodecahedron, and the hexagonal prism and truncated octahedron. The cube is a special case of a parallelepiped, with the rhombic dodecahedron the only Catalan solid to tessellate space on its own.
☆ Regular abstract polyhedra, which include the excavated dodecahedron and the dodecadodecahedron. They have combinatorial symmetries transitive on flags of their elements, with topologies equivalent to that of toroids and the ability to tile the hyperbolic plane.
The 5-cell, or pentatope, is the self-dual fourth-dimensional analogue of the tetrahedron, with Coxeter group symmetry A4 of order 120 = 5! and S5 group structure. Made of five tetrahedra, its Petrie polygon is a regular pentagon and its orthographic projection is equivalent to the complete graph K5. It is one of six regular 4-polytopes, made of thirty-one elements: five vertices, ten edges, ten faces, five tetrahedral cells and one 4-face.
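Those element counts can be sanity-checked against the Euler relation V − E + F − C = 0 that every convex 4-polytope satisfies (an illustrative arithmetic check, not from the source):
V, E, F, C = 5, 10, 10, 5  # vertices, edges, faces, cells of the 5-cell
print(V - E + F - C)       # 0, as the Euler relation requires
print(V + E + F + C + 1)   # 31 elements in total, counting the single 4-face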
A regular 120-cell, the dual polychoron to the regular 600-cell, can fit one hundred and twenty 5-cells. Also, five 24-cells fit inside a small stellated 120-cell, the first stellation of the 120-cell.
A subset of the vertices of the small stellated 120-cell are matched by the great duoantiprism star, which is the only uniform nonconvex duoantiprismatic solution in the fourth dimension, constructed from the polytope cartesian product {5}⊗{5/3} and made of fifty tetrahedra, ten pentagrammic crossed antiprisms, ten pentagonal antiprisms, and fifty vertices.
The grand antiprism, which is the only known non-Wythoffian construction of a uniform polychoron, is made of twenty pentagonal antiprisms and three hundred tetrahedra, with a total of one hundred vertices and five hundred edges.
The abstract four-dimensional 57-cell is made of fifty-seven hemi-icosahedral cells, in which five surround each edge. The 11-cell, another abstract 4-polytope with eleven vertices and fifty-five edges, is made of eleven hemi-dodecahedral cells, each with fifteen dodecahedra. The skeleton of the hemi-dodecahedron is the Petersen graph.
Overall, the fourth dimension contains five Weyl groups that form a finite number of uniform polychora: A4, B4, D4, F4, and H4, with four of these Coxeter groups capable of generating the same finite forms without D4; accompanied by a fifth or sixth general group of unique 4-prisms of Platonic and Archimedean solids. There are also a total of five Coxeter groups that generate non-prismatic Euclidean honeycombs in 4-space, alongside five compact hyperbolic Coxeter groups that generate five regular compact hyperbolic honeycombs with finite facets, as with the order-5 5-cell honeycomb and the order-5 120-cell honeycomb, both of which have five cells around each face. Compact hyperbolic honeycombs only exist through the fourth dimension, or rank 5, with paracompact hyperbolic solutions existing through rank 10. Likewise, analogues of three-dimensional icosahedral symmetry I_h or four-dimensional H4 symmetry do not exist in dimensions n ⩾ 5; however, there is the uniform prismatic group H4 × A1 in the fifth dimension, which contains prisms of regular and uniform 4-polytopes that have H4 symmetry.
The 5-simplex is the five-dimensional analogue of the 5-cell, or 4-simplex; the fifth iteration of n-simplexes in any n dimensions. The 5-simplex has the Coxeter group A5 as its symmetry group, of order 720 = 6!, whose group structure is represented by the symmetric group S6, the only finite symmetric group which has an outer automorphism. The 5-cube, made of ten tesseracts, with the 5-cell as its vertex figure, is also regular and one of thirty-one uniform 5-polytopes under the Coxeter B5 hypercubic group. The demipenteract, with one hundred and twenty cells, is the only fifth-dimensional semiregular polytope, and has the rectified 5-cell as its vertex figure, which is one of only three semiregular 4-polytopes alongside the rectified 600-cell and the snub 24-cell. In the fifth dimension, there are five regular paracompact honeycombs, all with infinite facets and vertex figures. There are exclusively twelve complex aperiotopes in C^n complex spaces of dimensions n ⩾ 5, with fifteen in C^4 and sixteen in C^3; alongside complex polytopes in C^5 and higher under simplex, hypercubic and orthoplex groups, the latter with van Oss polytopes.
There are five exceptional Lie groups: g2, f4, e6, e7, and e8. The smallest of these, g2, can be represented in five-dimensional complex space and projected in the same number of dimensions as a ball rolling on top of another ball, whose motion is described in two-dimensional space. e8, the largest of all five exceptional groups, also contains the other four as subgroups and is constructed with one hundred and twenty quaternionic unit icosians that make up the vertices of the 600-cell. There are also five solvable groups that are excluded from finite simple groups of Lie type.
The five Mathieu groups constitute the first generation in the happy family of sporadic groups. These are also the first five sporadic groups to have been described, defined as M_n multiply transitive permutation groups on n objects, with n ∈ {11, 12, 22, 23, 24}. In particular, M11, the smallest of all sporadic groups, has a rank 3 action on fifty-five points from an induced action on unordered pairs, as well as two five-dimensional faithful complex irreducible representations over the field with three elements, the lowest-dimensional irreducible representation of any sporadic group over its respective field with n elements. Of precisely five different conjugacy classes of maximal subgroups of M11, one is the almost simple symmetric group S5 (of order 5!), and another is M10, also almost simple, that functions as a point stabilizer which has 5 as its largest prime factor in its group order: 2^4·3^2·5 = 2·3·4·5·6 = 8·9·10 = 720. On the other hand, whereas M11 is sharply 4-transitive, M12 is sharply 5-transitive and M24 is 5-transitive, and as such they are the only two 5-transitive groups that are not symmetric groups or alternating groups.
M22 has the first five prime numbers as its distinct prime factors in its order of 2^7·3^2·5·7·11, and is the smallest of five sporadic groups with five distinct prime factors in their order. All Mathieu groups are subgroups of M24, from which, under the Witt design W24 of Steiner system S(5, 8, 24), emerges a construction of the extended binary Golay code B24 that has M24 as its automorphism group. W24 generates octads from code words of Hamming weight 8 from the extended binary Golay code, one of five different Hamming weights the extended binary Golay code uses: 0, 8, 12, 16, and 24. The Witt design and the extended binary Golay code in turn can be used to generate a faithful construction of the 24-dimensional Leech lattice Λ24, which is the subject of the second generation of seven sporadic groups that are subquotients of the automorphism group of the Leech lattice, Conway group Co0.
There are five non-supersingular primes: 37, 43, 53, 61, and 67, all smaller than the largest of fifteen supersingular prime divisors of the friendly giant, 71.
List of basic calculations
In decimal
5 is the only prime number to end in the digit 5 in decimal because all other numbers written with a 5 in the ones place are multiples of five, which makes it a 1-automorphic number.
All multiples of 5 will end in either 5 or 0, and vulgar fractions with 5 or 2 in the denominator do not yield infinite decimal expansions because they are prime factors of 10, the base.
In the powers of 5, every power ends with the number five, and from 5^3 onward, if the exponent is odd, then the hundreds digit is 1, and if it is even, the hundreds digit is 6 (both digit patterns are checked in the sketch after this list).
A number n raised to the fifth power always ends in the same digit as n.
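Both decimal facts are easy to confirm; a brief illustrative sketch (last digits repeat with period 10, so checking n = 0..9 settles the fifth-power claim for all integers):
# Hundreds digit of 5**k for k >= 3: 1 when k is odd, 6 when k is even.
for k in range(3, 11):
    print(k, 5 ** k, (5 ** k // 100) % 10)
# n**5 always ends in the same digit as n.
print(all((n ** 5) % 10 == n % 10 for n in range(10)))  # True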
Evolution of the Arabic digit
The evolution of the modern Western digit for the numeral 5 cannot be traced back to the Indian system, as it can for the digits 1 to 4. The Kushana and Gupta empires in what is now India had among themselves several different forms that bear no resemblance to the modern digit. The Nagari and Punjabi took these digits and came up with forms similar to a lowercase "h" rotated 180°. The Ghubar Arabs transformed the digit in several different ways, producing forms that were more similar to the digits 4 or 3 than to 5. It was from those digits that Europeans finally came up with the modern 5.
While the shape of the character for the digit 5 has an ascender in most modern typefaces, in typefaces with text figures the glyph usually has a descender.
On the seven-segment display of a calculator, it is represented by five segments at four successive turns from top to bottom, rotating counterclockwise first, then clockwise, and vice-versa.
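One common way to encode this in software uses the conventional segment names a through g; the rendering below is an illustrative sketch, not taken from the source:
# Segments: a=top, b=top-right, c=bottom-right, d=bottom, e=bottom-left,
# f=top-left, g=middle. The digit 5 lights exactly five of the seven.
ON = {"a", "f", "g", "c", "d"}
rows = [
    " _ " if "a" in ON else "   ",
    ("|" if "f" in ON else " ") + ("_" if "g" in ON else " ") + ("|" if "b" in ON else " "),
    ("|" if "e" in ON else " ") + ("_" if "d" in ON else " ") + ("|" if "c" in ON else " "),
]
print("\n".join(rows))  # renders a small "5"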
Science
The atomic number of boron.
The number of appendages on most starfish, which exhibit pentamerism.
The most destructive known hurricanes rate as Category 5 on the Saffir–Simpson hurricane wind scale.
The most destructive known tornadoes rate an F-5 on the Fujita scale or EF-5 on the Enhanced Fujita scale.
Astronomy
Messier object M5, a magnitude 7.0 globular cluster in the constellation Serpens.
The New General Catalogue object NGC 5, a magnitude 13 spiral galaxy in the constellation Andromeda.
The Roman numeral V stands for dwarfs (main sequence stars) in the Yerkes spectral classification scheme.
The Roman numeral V (usually) stands for the fifth-discovered satellite of a planet or minor planet (e.g. Jupiter V).
There are five Lagrangian points in a two-body system.
Biology
There are generally considered to be five senses.
The five basic tastes are sweet, salty, sour, bitter, and umami.
Almost all amphibians, reptiles, and mammals which have fingers or toes have five of them on each extremity.
Computing
5 is the ASCII code of the Enquiry character, which is abbreviated to ENQ.
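A one-liner makes the correspondence concrete (an illustrative check):
enq = chr(5)      # the ASCII Enquiry (ENQ) control character
print(ord(enq))   # 5
print(repr(enq))  # '\x05'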
Religion and culture
Hinduism
The god Shiva has five faces and his mantra is also called panchakshari (five-worded) mantra.
The goddess Saraswati, goddess of knowledge and intellect, is associated with panchami or the number 5.
There are five elements in the universe according to Hindu cosmology: dharti, agni, jal, vayu evam akash (earth, fire, water, air and space respectively).
The most sacred tree in Hinduism has five leaflets on every leaf stalk.
Many flowers have five petals.
The epic Mahabharata revolves around the battle between Duryodhana and his 99 brothers and the 5 Pandava princes: Dharma, Arjuna, Bhima, Nakula and Sahadeva.
Christianity
There are traditionally five wounds of Jesus Christ in Christianity: the Scourging at the Pillar, the Crowning with Thorns, the wounds in Christ's hands, the wounds in Christ's feet, and the Side Wound of Christ.
Gnosticism
The number five was an important symbolic number in Manichaeism, with heavenly beings, concepts, and others often grouped in sets of five.
Five Seals in Sethianism
Five Trees in the Gospel of Thomas
Islam
The Five Pillars of Islam
Muslims pray to Allah five times a day
According to Shia Muslims, the Panjetan or the Five Holy Purified Ones are the members of Muhammad's family: Muhammad, Ali, Fatimah, Hasan, and Husayn, and are often symbolically represented by an image of the Khamsa.
Judaism
The Torah contains five books—Genesis, Exodus, Leviticus, Numbers, and Deuteronomy—which are collectively called the Five Books of Moses, the Pentateuch (Greek for "five containers", referring to the scroll cases in which the books were kept), or Humash (חומש, Hebrew for "fifth").
The book of Psalms is arranged into five books, paralleling the Five Books of Moses.
The Khamsa, an ancient symbol shaped like a hand with four fingers and one thumb, is used as a protective amulet by Jews; that same symbol is also very popular in Arabic culture, known to protect from envy and the evil eye.
Sikhism
The five sacred Sikh symbols prescribed by Guru Gobind Singh are commonly known as panj kakars or the "Five Ks" because they start with the letter K, representing kakka (ਕ) in the Punjabi language's Gurmukhi script. They are: kesh (unshorn hair), kangha (the comb), kara (the steel bracelet), kachhehra (the soldier's shorts), and kirpan (the sword) (in Gurmukhi: ਕੇਸ, ਕੰਘਾ, ਕੜਾ, ਕਛਹਰਾ, ਕਿਰਪਾਨ). Also, there are five deadly evils: kam (lust), krodh (anger), moh (attachment), lobh (greed), and ahankar (ego).
Daoism
5 Elements
5 Emperors
Other religions and cultures
According to ancient Greek philosophers such as Aristotle, the universe is made up of five classical elements: water, earth, air, fire, and ether. This concept was later adopted by medieval alchemists and more recently by practitioners of Neo-Pagan religions such as Wicca.
The pentagram, or five-pointed star, bears religious significance in various faiths including Baháʼí, Christianity, Freemasonry, Satanism, Taoism, Thelema, and Wicca.
In Cantonese, "five" sounds like the word "not" (character: 唔). When five appears in front of a lucky number, e.g. "58", the result is considered unlucky.
In East Asian tradition, there are five elements: (water, fire, earth, wood, and metal). The Japanese names for the days of the week, Tuesday through Saturday, come from these elements via the identification of the elements with the five planets visible with the naked eye. Also, the traditional Japanese calendar has a five-day weekly cycle that can be still observed in printed mixed calendars combining Western, Chinese-Buddhist, and Japanese names for each weekday.
In numerology, 5 or a series of 555, is often associated with change, evolution, love and abundance.
Members of The Nation of Gods and Earths, a primarily African American religious organization, call themselves the "Five-Percenters" because they believe that only 5% of mankind is truly enlightened.
Art, entertainment, and media
Fictional entities
James the Red Engine, a fictional character numbered 5.
Johnny 5 is the lead character in the film Short Circuit (1986)
Number Five is a character in Lorien Legacies
Sankara Stones, five magical rocks in Indiana Jones and the Temple of Doom that are sought by the Thuggees for evil purposes
The Mach Five (マッハ号, Mahha-gō), the racing car Speed Racer (Go Mifune in the Japanese version) drives in the anime series of the same name (known as "Mach Go! Go! Go!" in Japan)
In the works of J. R. R. Tolkien, five wizards (Saruman, Gandalf, Radagast, Alatar and Pallando) are sent to Middle-earth to aid against the threat of the Dark Lord Sauron
In the A Song of Ice and Fire series, the War of the Five Kings is fought between different claimants to the Iron Throne of Westeros, as well as to the thrones of the individual regions of Westeros (Joffrey Baratheon, Stannis Baratheon, Renly Baratheon, Robb Stark and Balon Greyjoy)
In The Wheel of Time series, the "Emond's Field Five" are a group of five of the series' main characters who all come from the village of Emond's Field (Rand al'Thor, Matrim Cauthon, Perrin Aybara, Egwene al'Vere and Nynaeve al'Meara)
Myst uses the number 5 as a unique base counting system. In The Myst Reader series, it is further explained that the number 5 is considered a holy number in the fictional D'ni society.
Number Five is also a character in The Umbrella Academy comic book and TV series adaptation
Films
Towards the end of the film Monty Python and the Holy Grail (1975), the character of King Arthur repeatedly confuses the number five with the number three.
Five Go Mad in Dorset (1982) was the first of the long-running series of The Comic Strip Presents... television comedy films
The Fifth Element (1997), a science fiction film
Fast Five (2011), the fifth installment of the Fast and Furious film series.
V for Vendetta (2005), produced by Warner Bros., directed by James McTeigue, and adapted from Alan Moore's graphic novel V for Vendetta, prominently features the number 5 and the Roman numeral V; the story is based on the historical event in which a group of men attempted to destroy Parliament on November 5, 1605
Music
Groups
Five (group), a UK boy band
The Five (composers), 19th-century Russian composers
5 Seconds of Summer, pop band that originated in Sydney, Australia
Five Americans, American rock band active 1965–1969
Five Finger Death Punch, American heavy metal band from Las Vegas, Nevada. Active 2005–present
Five Man Electrical Band, Canadian rock group billed (and active) as the Five Man Electrical Band, 1969–1975
Maroon 5, American pop rock band that originated in Los Angeles, California
MC5, American punk rock band
Pentatonix, a Grammy-winning a cappella group that originated in Arlington, Texas
The 5th Dimension, American pop vocal group, active 1977–present
The Dave Clark Five, a.k.a. DC5, an English pop rock group comprising Dave Clark, Lenny Davidson, Rick Huxley, Denis Payton, and Mike Smith; active 1958–1970
The Jackson 5, American pop rock group featuring various members of the Jackson family; they were billed (and active) as The Jackson 5, 1966–1975
Hi-5, an Australian children's pop group with several international adaptations and many members throughout the history of the band; it was also a TV show.
We Five: American folk rock group active 1965–1967 and 1968–1977
Grandmaster Flash and the Furious Five: American rap group, 1970–80s
Fifth Harmony, an American girl group.
Ben Folds Five, an American alternative rock trio, 1993–2000, 2008 and 2011–2013
R5 (band), an American pop and alternative rock group, 2009–2018
Other uses
A perfect fifth is the most consonant harmony, and is the basis for most western tuning systems.
Modern musical notation uses a musical staff made of five horizontal lines.
In harmonics – the fifth partial (or 4th overtone) of a fundamental has a frequency ratio of 5:1 to the frequency of that fundamental. This ratio corresponds to the interval of 2 octaves plus a pure major third. Thus, the interval of 5:4 is the interval of the pure third. A major triad chord, when played in just intonation (most often the case in a cappella vocal ensemble singing), will contain such a pure major third; the arithmetic is worked in the sketch after this list.
The number of completed, numbered piano concertos of Ludwig van Beethoven, Sergei Prokofiev, and Camille Saint-Saëns.
Using the Latin root, five musicians are called a quintet.
A scale with five notes per octave is called a pentatonic scale.
Five is the lowest possible number that can be the top number of a time signature with an asymmetric meter.
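To make the partial and interval arithmetic above concrete (an illustrative calculation with an assumed 100 Hz fundamental, not from the source):
f0 = 100.0                                 # assumed fundamental frequency in Hz
fifth_partial = 5 * f0                     # 500 Hz: frequency ratio 5:1
two_octaves_plus_third = f0 * 4 * (5 / 4)  # 2 octaves (x4) plus a pure major third (x5/4)
print(fifth_partial, two_octaves_plus_third)  # 500.0 500.0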
Television
Stations
Channel 5 (UK), a television channel that broadcasts in the United Kingdom
5 (TV channel) (formerly known as ABC 5 and TV5) (DWET-TV channel 5 in Metro Manila), a television network in the Philippines.
Series
Babylon 5, a science fiction television series
The number 5 features in the television series Battlestar Galactica in regards to the Final Five cylons and the Temple of Five
Hi-5 (Australian TV series), a television series from Australia
Hi-5 (UK TV series), a television show from the United Kingdom
Hi-5 Philippines a television show from the Philippines
Odyssey 5, a 2002 science fiction television series
Tillbaka till Vintergatan, a Swedish children's television series featuring a character named "Femman" (meaning five), who can only utter the word five.
The Five (talk show): Fox News Channel roundtable current events television show, premiered 2011, so-named for its panel of five commentators.
Yes! PreCure 5 is a 2007 anime series which follows the adventures of Nozomi and her friends. It is also followed by the 2008 sequel Yes! Pretty Cure 5 GoGo!
The Quintessential Quintuplets is a 2019 slice of life romance anime series which follows the everyday life of five identical quintuplets and their interactions with their tutor. It has two seasons, and a final movie is scheduled in summer 2022.
Hawaii Five-0, an American CBS TV series.
Literature
The Famous Five is a series of children's books by British writer Enid Blyton
The Power of Five is a series of children's books by British writer and screenwriter Anthony Horowitz
The Fall of Five is a book written under the collective pseudonym Pittacus Lore in the series Lorien Legacies
The Book of Five Rings is a text on kenjutsu and the martial arts in general, written by the swordsman Miyamoto Musashi circa 1645
Slaughterhouse-Five is a book by Kurt Vonnegut about World War II
Sports
The Olympic Games have five interlocked rings as their symbol, representing the number of inhabited continents represented by the Olympians (Europe, Asia, Africa, Australia and Oceania, and the Americas).
In AFL Women's, the top level of women's Australian rules football, each team is allowed 5 "interchanges" (substitute players), who can be freely substituted at any time.
In baseball scorekeeping, the number 5 represents the third baseman's position.
In basketball:
The number 5 is used to represent the position of center.
Each team has five players on the court at a given time. Thus, the phrase "five on five" is commonly used to describe standard competitive basketball.
The "5-second rule" refers to several related rules designed to promote continuous play. In all cases, violation of the rule results in a turnover.
Under the FIBA (used for all international play, and most non-US leagues) and NCAA women's rule sets, a team begins shooting bonus free throws once its opponent has committed five personal fouls in a quarter.
Under the FIBA rules, a player fouls out and must leave the game after committing five fouls
Five-a-side football is a variation of association football in which each team fields five players.
In ice hockey:
A major penalty lasts five minutes.
There are five different ways that a player can score a goal (teams at even strength, team on the power play, team playing shorthanded, penalty shot, and empty net).
The area between the goaltender's legs is known as the five-hole.
In most rugby league competitions, the starting left wing wears this number </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Streptococcus agalactiae and explain in detail? [/INST] Streptococcus agalactiae (also known as group B streptococcus or GBS) is a gram-positive coccus (round bacterium) with a tendency to form chains (as reflected by the genus name Streptococcus). It is a beta-hemolytic, catalase-negative, and facultative anaerobe.S. agalactiae is the most common human pathogen of streptococci belonging to group B of the Rebecca Lancefield classification of streptococci. GBS are surrounded by a bacterial capsule composed of polysaccharides (exopolysacharide). The species is subclassified into ten serotypes (Ia, Ib, II–IX) depending on the immunologic reactivity of their polysaccharide capsule.The plural term group B streptococci (referring to the serotypes) and the singular term group B streptococcus (referring to the single species) are both commonly encountered (even though S. halichoeri and S. pseudoporcinus are also group B Streptococci).
In general, GBS is a harmless commensal bacterium that is part of the human microbiota, colonizing the gastrointestinal and genitourinary tract of up to 30% of healthy human adults (asymptomatic carriers). Nevertheless, GBS can cause severe invasive infections, especially in newborns, the elderly, and people with compromised immune systems.
S. agalactiae is also a common veterinary pathogen, because it can cause bovine mastitis (inflammation of the udder) in dairy cows. The species name agalactiae, meaning "of no milk", alludes to this.
Laboratory identification
GBS grows readily on blood agar plates as colonies surrounded by a narrow zone of β-hemolysis. GBS is characterized by the presence in the cell wall of the antigen group B of the Lancefield classification (Lancefield grouping), which can be detected directly in intact bacteria using latex agglutination tests. The CAMP test is another important test for identification of GBS. The CAMP factor produced by GBS acts synergistically with the staphylococcal β-hemolysin, inducing enhanced hemolysis of sheep or bovine erythrocytes. GBS is also able to hydrolyze hippurate, and this test can also be used to presumptively identify GBS.
Hemolytic GBS strains produce an orange-brick-red non-isoprenoid polyene (ornithine rhamnolipid) pigment (granadaene) when cultivated on Granada medium, which allows their straightforward identification.
GBS can also be identified using MALDI-TOF (Matrix Assisted Laser Desorption/Ionization-Time of Flight) instruments.
GBS colonies can additionally be identified tentatively by their appearance on chromogenic agar media; nevertheless, GBS-like colonies that develop in chromogenic media should be confirmed as GBS using additional reliable tests (e.g. latex agglutination or the CAMP test) to avoid potential misidentification.
A summary of the laboratory techniques for GBS identification is depicted in Ref 7.
GBS colonization
GBS is a normal component of the intestinal and vaginal microbiota in some women; it is an asymptomatic (presenting no symptoms) colonizer of the gastrointestinal tract and vagina in up to 30% of otherwise healthy adults, including pregnant women. GBS colonization may be permanent, intermittent or temporary. In different studies, GBS vaginal colonization rates range from 0% to 36%, with most studies reporting colonization rates in sexually active women over 20%. It has been estimated that maternal GBS colonization worldwide is 18%, with regional variation from 11% to 35%.
These variations in the reported prevalence of asymptomatic GBS colonization could be related to the detection methods used, and differences in populations sampled.
Virulence
Like other virulent bacteria, GBS harbors a number of virulence factors (molecules produced by bacteria that boost their capacity to infect and damage human tissues), the most important being the capsular polysaccharide (rich in sialic acid)
and a pore-forming toxin, β-hemolysin.
Today it is considered that GBS pigment and hemolysin are identical or closely related molecules.
GBS infection in newborns
GBS colonization usually does not cause problems in healthy women; nevertheless, during pregnancy it can sometimes cause serious illness for the mother and the newborn. GBS is the leading cause of bacterial neonatal infection during gestation and after delivery, with significant mortality rates in premature infants. GBS infections in the mother can infrequently cause chorioamnionitis (a severe infection of the placental tissues) and postpartum infections (after birth), and colonization has been linked to prematurity and fetal death. GBS urinary tract infections (UTIs) may also induce labor and cause premature delivery.
In the western world, GBS (in the absence of effective prevention measures) is the major cause of several bacterial infections of the newborn: neonatal sepsis, pneumonia, and meningitis, which can lead to death or long-term sequelae. GBS neonatal infection typically originates in the lower reproductive tract of infected mothers.
GBS infections in newborns are separated into two clinical syndromes, early-onset disease (EOD) and late-onset disease (LOD).
EOD manifests in the first 7 days of life, with most cases apparent within 24 h of birth. The most common clinical syndromes of EOD are sepsis without apparent focus, pneumonia, and, less frequently, meningitis. EOD is acquired vertically (vertical transmission), through exposure of the fetus or the baby to GBS from the vagina of a colonized woman, either in utero or during birth after rupture of membranes. Infants can be infected during passage through the birth canal; nevertheless, newborns that acquire GBS through this route may become merely colonized, and these colonized infants habitually do not develop EOD. Roughly 50% of newborns born to GBS-colonized mothers are also GBS colonized, and (without prevention measures) 1–2% of these newborns will develop EOD. In the past, the incidence of EOD ranged from 0.7 to 3.7 per thousand live births in the US and from 0.2 to 3.25 per thousand in Europe. In 2008, after widespread use of antenatal screening and intrapartum antibiotic prophylaxis (IAP), the CDC reported an incidence of 0.28 cases of EOD per thousand live births in the US; multistate surveillance from 2006 to 2015 showed a further decline in EOD from 0.37 to 0.23 per thousand live births, while LOD remained steady at 0.31 per thousand. It has been indicated that where there was a policy of providing IAP for GBS-colonized mothers, the overall risk of early-onset GBS disease was 0.3%. Though maternal GBS colonization is the key determinant for EOD, other factors also increase the risk. These factors include onset of labor before 37 weeks of gestation (premature birth), prolonged rupture of membranes (≥18 h before delivery), intrapartum fever (>38 °C, >100.4 °F), amniotic infections (chorioamnionitis), young maternal age, and low levels of GBS anticapsular polysaccharide antibodies in the mother. Nevertheless, most babies who develop EOD are born to GBS-colonized mothers without any additional risk factor. A previous sibling with EOD is also an important risk factor for development of the infection in subsequent deliveries, probably reflecting a lack of protective GBS polysaccharide antibodies in the mother.
Heavy GBS vaginal colonization is also associated with a higher risk for EOD.
Overall, case-fatality rates from EOD have declined, from the 50% observed in studies from the 1970s to 2–10% in recent years, mainly as a consequence of improvements in therapy and management. Fatal neonatal infections by GBS are more frequent among premature infants. GBS LOD affects infants from 7 days to 3 months of age and is more likely to cause bacteremia or meningitis. LOD can be acquired from the mother or from environmental sources. Hearing loss and mental impairment can be long-term sequelae of GBS meningitis. In contrast to EOD, the incidence of LOD has remained unchanged at 0.26 per 1000 live births in the US. S. agalactiae neonatal meningitis does not present with the hallmark sign of adult meningitis, a stiff neck; rather, it presents with nonspecific symptoms, such as fever, vomiting and irritability, and can consequently lead to a late diagnosis.
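The incidence arithmetic above can be made concrete with a minimal sketch, assuming the rates quoted earlier (18% maternal colonization, roughly 50% vertical colonization, and 1–2% of colonized newborns developing EOD without prophylaxis); the function name and cohort size are hypothetical, chosen only for illustration.

```python
# Back-of-the-envelope sketch of the EOD rates quoted above (illustrative only).
def expected_eod_cases(live_births,
                       maternal_colonization=0.18,  # worldwide estimate cited earlier
                       vertical_transmission=0.50,  # ~50% of newborns of colonized mothers
                       eod_given_colonized=0.015):  # midpoint of the quoted 1-2%
    """Expected early-onset disease cases in a cohort, under the stated rates."""
    colonized_newborns = live_births * maternal_colonization * vertical_transmission
    return colonized_newborns * eod_given_colonized

# Per 1000 live births this yields about 1.35 cases, within the historical
# (pre-prophylaxis) range of 0.7-3.7 per thousand quoted above.
print(expected_eod_cases(1000))
```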
Prevention of neonatal infection
The only reliable way to prevent EOD currently is intrapartum antibiotic prophylaxis (IAP), that is to say, administration of antibiotics during delivery. It has been shown that intravenous penicillin or ampicillin administered for at least 4 hours before delivery to GBS-colonized women is very effective at preventing vertical transmission of GBS from mother to baby and EOD. Intravenous penicillin remains the agent of choice for IAP, with intravenous ampicillin as an acceptable alternative. For penicillin-allergic women, the laboratory requisitions for ordering antepartum GBS screening cultures should clearly indicate the presence of penicillin allergy. Cefazolin, clindamycin, and vancomycin are used to prevent EOD in infants born to penicillin-allergic mothers.
Intravenous vancomycin is recommended for IAP in women colonized with a clindamycin-resistant group B streptococcus strain who have a severe penicillin allergy. There are two ways to identify candidates to receive intrapartum antibiotic prophylaxis: a risk-based approach or a culture-based screening approach. The culture-based screening approach identifies candidates to receive IAP using lower vaginal and rectal cultures obtained between 36 and 37 weeks of gestation (32–34 weeks of gestation for women with twins), and IAP is administered to all GBS-colonized women. The risk-based strategy identifies candidates to receive IAP by the aforementioned risk factors known to increase the probability of EOD, without considering whether the mother is a GBS carrier. IAP is also recommended for women with intrapartum risk factors if their GBS carrier status is not known at the time of delivery, for women with GBS bacteriuria during their pregnancy, and for women who have had an infant with EOD previously.
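As a simplified reading of the criteria just described, the following sketch combines the culture-based approach with the risk-based fallback; the function and parameter names are hypothetical, and the logic deliberately omits many guideline nuances, so it is an illustration rather than a clinical rule.

```python
# Hypothetical, simplified sketch of IAP candidacy per the criteria above.
def needs_iap(gbs_culture,                 # "positive", "negative", or None (unknown)
              gbs_bacteriuria=False,        # GBS bacteriuria during this pregnancy
              prior_infant_with_eod=False,  # previous infant with early-onset disease
              gestation_weeks=40.0,
              membranes_ruptured_hours=0.0,
              intrapartum_temp_c=37.0):
    """Return True if intrapartum antibiotic prophylaxis would be indicated."""
    if gbs_bacteriuria or prior_infant_with_eod:
        return True
    if gbs_culture == "positive":
        return True
    if gbs_culture is None:
        # Carrier status unknown at delivery: fall back to intrapartum risk factors.
        return (gestation_weeks < 37
                or membranes_ruptured_hours >= 18
                or intrapartum_temp_c > 38.0)
    return False

print(needs_iap(gbs_culture=None, gestation_weeks=35))  # True: preterm, status unknown
```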
The risk-based approach for IAP is in general less effective than the culture-based approach, because in most cases EOD develops among newborns born to mothers without risk factors. In 2010, the Centers for Disease Control and Prevention (CDC), in collaboration with several professional groups, issued its revised GBS prevention guidelines. In 2018, the task of revising and updating the GBS prophylaxis guidelines was transferred from the CDC to ACOG (the American College of Obstetricians and Gynecologists), the American Academy of Pediatrics, and the American Society for Microbiology. The ACOG committee issued an update document on Prevention of Group B Streptococcal Early-Onset Disease in Newborns in 2019. This document does not introduce important changes from the CDC guidelines. The key measures necessary for preventing neonatal GBS early-onset disease continue to be universal prenatal screening by culture of GBS from swabs collected from the lower vagina and rectum, correct collection and microbiological processing of the samples, and proper implementation of intrapartum antibiotic prophylaxis. The ACOG now recommends performing universal GBS screening between 36 and 37 weeks of gestation.
This new recommendation provides a five-week window for valid culture results that includes births that occur up to a gestational age of at least 41 weeks.
The culture-based screening approach is followed in most developed countries such as the United States, France, Spain, Belgium, Canada, Argentina, and Australia. The risk-based strategy is followed in the United Kingdom and the Netherlands.
Screening for GBS colonization
Though the GBS colonization status of women can change during pregnancy, cultures to detect GBS carried out ≤5 weeks before delivery predict quite accurately the GBS carrier status at delivery.
In contrast, if the prenatal culture is performed more than five weeks before delivery, it is unreliable for accurately predicting the GBS carrier status at delivery. The clinical specimens recommended for GBS culture at 36–37 weeks of gestation (32–34 weeks of gestation for women with twins) are swabs collected from the lower vagina (near the introitus) and then from the rectum (through the anal sphincter) without use of a speculum; this timing provides a five-week window for valid culture results that includes births that occur up to a gestational age of at least 41 weeks. Vaginal-rectal samples should preferably be collected using a flocked swab, since flocked swabs release samples and microorganisms more effectively than fiber swabs. Following the recommendations of the United States Centers for Disease Control and Prevention (CDC), these swabs should be placed into a non-nutritive transport medium and later inoculated into a selective enrichment broth, Todd Hewitt broth with selective antibiotics (enrichment culture). After incubation, the enrichment broth is subcultured to blood agar plates, and GBS-like colonies are identified by the CAMP test or using latex agglutination with GBS antisera. The enrichment broth can also be subcultured to Granada medium agar, where GBS grows as pink-red colonies, or to chromogenic agars, where GBS grows as colored colonies. GBS-like colonies that develop in chromogenic media should be confirmed as GBS using additional reliable tests to avoid misidentification. Nucleic acid amplification tests (NAAT) such as polymerase chain reaction (PCR) and DNA hybridization probes have been developed for identifying GBS directly from recto-vaginal samples, but they still cannot replace antenatal culture for the most accurate detection of GBS carriers.
Intrapartum NAAT without enrichment has a high false-negative rate, and its use to rule out the need for IAP is therefore not recommended.
Vaccination
Though IAP for EOD prevention is associated with a large decline in the incidence of the disease, there is no effective strategy for preventing late-onset neonatal GBS disease. Vaccination is considered an ideal solution to prevent not only EOD and LOD but also GBS infections in adults at risk. Nevertheless, though research and clinical trials for the development of an effective vaccine to prevent GBS infections are underway, no vaccine was available in 2020.
The capsular polysaccharide of GBS is not only an important GBS virulence factor but also an excellent candidate for the development of an effective vaccine. Protein-based vaccines are also in development.
GBS infection in adults
GBS is also an important infectious agent able to cause invasive infections in adults </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Protrusio acetabuli and explain in detail? [/INST] Protrusio acetabuli is an uncommon defect of the acetabulum, the socket that receives the femoral head to make the hip joint. The hip bone of the pelvic bone/girdle is composed of three bones, the ilium, the ischium and the pubis </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Opisthorchiasis and explain in detail? [/INST] Opisthorchiasis is a parasitic disease caused by certain species of genus Opisthorchis (specifically, Opisthorchis viverrini and Opisthorchis felineus). Chronic infection may lead to cholangiocarcinoma, a cancer of the bile ducts.
Medical care and loss of wages caused by Opisthorchis viverrini in Laos and in Thailand cost about $120 million annually. In Asia, infection by Opisthorchis viverrini and other liver flukes affects the poorest people. Along with other foodborne trematode infections such as clonorchiasis, fascioliasis and paragonimiasis, opisthorchiasis is on the World Health Organization's list of neglected tropical diseases.
Signs and symptoms
Symptoms of opisthorchiasis are indistinguishable from those of clonorchiasis. About 80% of infected people have no symptoms, though they can have eosinophilia. Asymptomatic infection can occur when there are fewer than 1000 eggs in one gram of feces. Infection is considered heavy when there are 10,000–30,000 eggs in one gram of feces. Symptoms of heavier infections may include diarrhea, epigastric and right upper quadrant pain, lack of appetite, fatigue, yellowing of the eyes and skin, and mild fever. These parasites are long-lived and cause heavy chronic infections that may lead to accumulation of fluid in the legs (edema) and in the peritoneal cavity (ascites), an enlarged non-functional gallbladder, and also ascending cholangitis, which can lead to periductal fibrosis, cholecystitis and cholelithiasis, obstructive jaundice, hepatomegaly and/or portal hypertension.
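The egg-count thresholds above amount to a simple intensity scale; a minimal sketch, with a hypothetical function name and an assumed label for the middle band (the source names only the light and heavy bands), might look as follows.

```python
# Hypothetical encoding of the eggs-per-gram (EPG) cut-offs quoted above:
# fewer than 1000 EPG is often asymptomatic; 10,000-30,000 EPG is considered heavy.
def infection_intensity(eggs_per_gram):
    """Classify Opisthorchis infection intensity from a stool egg count."""
    if eggs_per_gram < 1000:
        return "light (often asymptomatic)"
    elif eggs_per_gram < 10000:
        return "moderate"  # band between the two quoted thresholds (assumed label)
    else:
        return "heavy"

print(infection_intensity(500))    # light (often asymptomatic)
print(infection_intensity(15000))  # heavy
```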
Chronic opisthorchiasis and cholangiocarcinoma
Both experimental and epidemiological evidence strongly implicates Opisthorchis viverrini infections in the etiology of a malignant cancer of the bile ducts (cholangiocarcinoma) in humans, which has a very poor prognosis. Clonorchis sinensis and Opisthorchis viverrini are both categorized by the International Agency for Research on Cancer (IARC) as Group 1 carcinogens. In humans, the onset of cholangiocarcinoma occurs with chronic opisthorchiasis, associated with hepatobiliary damage, inflammation, periductal fibrosis and/or cellular responses to antigens from the infecting fluke. These conditions predispose to cholangiocarcinoma, possibly through an enhanced susceptibility of DNA to damage by carcinogens. Chronic hepatobiliary damage is reported to be multi-factorial and considered to arise from a continued mechanical irritation of the epithelium by the flukes present, particularly via their suckers, metabolites and excreted/secreted antigens, as well as immunopathological processes. In silico analyses using techniques of genomics and bioinformatics are unraveling information on molecular mechanisms that may be relevant to the development of cholangiocarcinoma. In regions where Opisthorchis viverrini is highly endemic, the incidence of cholangiocarcinoma is unprecedented. For instance, cholangiocarcinoma represents 15% of primary liver cancer worldwide, but in Thailand's Khon Kaen province, this figure escalates to 90%, the highest recorded incidence of this cancer in the world. Of all cancers recorded worldwide in 2002, 0.02% were cholangiocarcinoma caused by Opisthorchis viverrini. Cancer of the bile ducts caused by opisthorchiasis occurs between the ages of 25 and 44 years in Thailand. A few cases have appeared in later life among U.S. veterans of the Vietnam War, who consumed poorly cooked fish from streams in endemic areas near the border of Laos and Vietnam.
Diagnosis
The medical diagnosis is usually established by finding eggs of Opisthorchis viverrini in feces using the Kato technique.
Alternatively, an antigen of Opisthorchis viverrini can be detected by ELISA test. A polymerase chain reaction test that can be performed on faeces has been developed and evaluated in a rural community in central Thailand.
Prevention
Cholangiocarcinoma is typically incurable at diagnosis. Because of this, intervention strategies are focused on the prevention or treatment of liver fluke infection. Prevention can be accomplished through education (by persuading people not to consume raw or undercooked fish), but the ancient cultural custom of consuming raw, undercooked or freshly pickled fish persists in endemic areas. One community health program, known as the Integrated Opisthorchiasis Control Program, has achieved success in the Lawa Lakes region south of Khon Kaen. Cooking or deep-freezing (−20 °C for 7 days) fish dishes is an effective method of prevention. Methods for prevention of Opisthorchis viverrini in aquaculture fish ponds have also been proposed.
Treatment
Treatment of opisthorchiasis is usually accomplished with praziquantel. A single dose of praziquantel of 40 mg/kg is effective against opisthorchiasis (and also against schistosomiasis). Despite the efficacy of this compound, the lack of acquired immunity to infection predisposes humans to reinfection in endemic regions. In addition, under experimental conditions, the treatment of Opisthorchis viverrini-infected hamsters with praziquantel induced a dispersion of parasite antigens, resulting in adverse immunopathological changes following re-infection with Opisthorchis viverrini, a process which has been proposed to initiate and/or promote the development of cholangiocarcinoma in humans. Albendazole can be used as an alternative. A randomized controlled trial published in 2011 showed that tribendimidine, a broad-spectrum anthelmintic, appears to be at least as efficacious as praziquantel
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Low milk supply and explain in detail? [/INST] In breastfeeding women, low milk supply, also known as lactation insufficiency, insufficient milk syndrome, agalactia, agalactorrhea, hypogalactia or hypogalactorrhea, is the production of breast milk in daily volumes that do not fully meet the nutritional needs of her infant.
Breast milk supply increases in response to the baby's demand for milk, and decreases when milk is allowed to remain in the breasts. Low milk supply is usually caused by allowing milk to remain in the breasts for long periods of time, or by insufficiently draining the breasts during feeds. It is usually preventable, unless caused by medical conditions that have been estimated to affect five to fifteen percent of women. Several common misconceptions often lead mothers to believe they have insufficient milk when they are in fact producing enough. Actual low milk supply is likely if the baby is latching and swallowing well at the breast, is nevertheless not growing well or is showing signs of dehydration or malnutrition, and does not have a medical condition that would explain the lack of growth. The main method for increasing milk supply is improved breastfeeding practices and/or expressing milk through pumping or hand expression. The medication domperidone increases milk supply for some women. For mothers who cannot breastfeed exclusively, breastfeeding as much as possible, with supplementary formula feeding as necessary, offers many benefits over formula alone.
Causes
Low milk supply can be either primary (caused by medical conditions or anatomical issues in the mother), secondary (caused by not thoroughly and regularly removing milk from the breasts) or both. Secondary causes are far more common than primary ones. One study found that 15% of healthy first-time mothers had low milk supply 2–3 weeks after birth, with secondary causes accounting for at least two-thirds of those cases. Breastfeeding management issues that can interfere with regular milk drainage from the breast include poor latch, unnecessary use of supplemental formula, timed or scheduled feedings (as opposed to on-demand feedings), and overuse of pacifiers. Medical conditions in the infant that result in weak or uncoordinated sucking can cause low milk supply by inhibiting the transfer of milk to the baby. These conditions include tongue-tie, congenital heart defects, prematurity, and Down syndrome. Primary causes of low milk supply include:
insufficient glandular tissue, also known as primary lactation failure. Breasts with insufficient glandular tissue often have a tubular shape, with a large space between the breasts, and often do not grow during pregnancy.
retained placenta
Sheehan's syndrome
prior breast surgery (especially breast reduction)
prior nipple piercing
polycystic ovary syndrome (PCOS)
hypothyroidism
hypoprolactinemia
theca lutein cysts
hypertension
pregnancy
Smoking more than 15 cigarettes per day is associated with reduced milk production. Many medications are known to significantly suppress milk production, including pseudoephedrine, diuretics, and contraceptives that contain estrogen. It is suspected that some herbs, particularly sage and peppermint, suppress milk production.
Mechanism
The physiological mechanisms that regulate breast milk supply are poorly understood. High levels of prolactin are necessary for lactation; however, there is no direct correlation between baseline levels of prolactin and the quantity of milk production. One aspect of supply regulation that has been identified is that breast milk contains a peptide called feedback inhibitor of lactation (FIL). When milk is present in the breast, FIL inhibits the secretion of milk.
After a mother's milk comes in, a reduction in supply is inevitable unless milk is removed regularly and thoroughly from the breasts. Surgery or injury to the breast can decrease supply by disrupting milk ducts that carry milk from the alveoli, where milk is produced, to the nipple. Retained placenta results in high levels of progesterone, a hormone that inhibits lactation.
Diagnosis
A woman's belief that her milk supply is insufficient is one of the most common reasons for discontinuing breastfeeding. In many of those cases, the woman's milk supply is actually normal. After a few weeks or months of breastfeeding, changes that are commonly mistaken for signs of low milk supply include breasts feeling softer (this is normal after 1–3 months), more frequent demands by the infant to feed, feeds becoming shorter over time, baby colic, the perception that the baby is more satisfied after being fed infant formula, and a slowdown in growth after three months. To evaluate whether milk supply is actually insufficient, qualitative parameters such as the following should be used:
By 3–5 days of age, the infant should be stooling 3–4 times per day and urinating 3–5 times per day. By 5–7 days of age, there should be 3–6 stools per day and 4–6 urines.
The infant should be alert, have good muscle tone, and show no signs of dehydration.
The infant should be consistently gaining weight and growing. In newborns, a loss of more than 5 to 7 percent of birth weight warrants investigation. The use of IV fluids in labour tends to artificially increase the birth weight of the baby, and subsequently inflates the baby's apparent weight loss. Newborns should regain their birth weight by two weeks of age, and gain at least 150 g per week (these thresholds are sketched numerically after the list below). If an infant is not showing these signs of growing well, other possible causes include:
Poor milk transfer due to tongue-tie or cleft palate.
Medical conditions in the infant, such as malabsorption of nutrients.
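A minimal sketch of the weight thresholds referenced above, assuming a hypothetical function name and the simplification that birth weight is the baseline for weekly gain after two weeks:

```python
# Hypothetical sketch of the newborn growth checks listed above: >5-7% loss of
# birth weight warrants investigation, birth weight should be regained by two
# weeks, and gain should be at least 150 g per week thereafter.
def growth_flags(birth_weight_g, current_weight_g, age_days):
    """Return warning flags based on the thresholds quoted above."""
    flags = []
    loss_pct = 100 * (birth_weight_g - current_weight_g) / birth_weight_g
    if loss_pct > 7:
        flags.append(f"weight loss {loss_pct:.1f}% exceeds the 5-7% threshold")
    if age_days >= 14 and current_weight_g < birth_weight_g:
        flags.append("birth weight not regained by two weeks")
    if age_days > 14 and current_weight_g >= birth_weight_g:
        # Simplifying assumption: birth weight regained exactly at day 14.
        weekly_gain = (current_weight_g - birth_weight_g) / ((age_days - 14) / 7)
        if weekly_gain < 150:
            flags.append(f"gain of {weekly_gain:.0f} g/week is below 150 g/week")
    return flags

print(growth_flags(3500, 3200, 10))  # flags the ~8.6% weight loss
```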
Definition
A mother is considered to have low milk supply if she produces less breast milk than her infant requires. The term is used only after a mother's milk "comes in", which usually occurs around 30–40 hours after delivery of a full-term infant. Low milk supply is distinct from the scenario in which the mother's milk comes in later than normal but is thereafter produced in sufficient quantities; this is known as delayed onset of lactation (delayed lactogenesis II).
Prevention
The first week after birth is a critical window for establishing milk supply. The Ten Steps of the Baby Friendly Hospital Initiative describe some maternity hospital practices that promote the development of a good milk supply, such as rooming in (allowing mothers and infants to remain together 24 hours a day) and having trained staff available to help mothers with breastfeeding. Supplemental formula or donor milk should be given only when medically necessary.
Increasing evidence suggests that early skin-to-skin contact between mother and baby stimulates breastfeeding behavior in the baby. Newborns who are immediately placed on their mother's skin have a natural instinct to latch on to the breast and start nursing, typically within one hour of birth. Immediate skin-to-skin contact may provide a form of imprinting that makes subsequent feeding significantly easier. Interrupting the process, such as removing the baby to weigh him/her, may complicate subsequent feeding. Frequent suckling in the first days of life is correlated with higher milk production. Infants should be fed when they show signs of hunger. It is common for breastfed babies to feed during the night for the first six months. Usually feedings last 10–15 minutes in the early days. If the infant is unable to breastfeed, it is recommended that pumping or hand expression of milk begin within two hours of delivery, and be done at least eight times every 24 hours.
Management
Attempts to increase milk supply should begin promptly, as the longer low supply continues, the more difficult it is to reverse. The primary method for increasing milk supply is to increase the frequency and the thoroughness of milk drainage from the breasts, and to increase breast stimulation. For mothers of healthy term infants who are able to nurse, very strong evidence supports improving breastfeeding practices as a primary treatment. Increasing skin-to-skin contact between mother and baby, and allowing unrestricted breastfeeding, aid in stimulating the milk ejection reflex and promote frequent feeds. If the baby's latch is not optimal, improving the latch will help the baby drink more milk. Making the mother comfortable, particularly by resolving nipple pain, is essential. Breast self-massage is recommended to stimulate the milk ejection reflex and to physically promote the flow of milk. Some experts recommend using a breast pump after each breastfeeding session. If the baby cannot nurse effectively, frequent drainage of milk from the breasts through hand expression and/or an electric double breast pump is recommended. The mother's use of medications and herbs should be evaluated, as some substances suppress lactation. The Academy of Breastfeeding Medicine protocol for low milk supply recommends that the mother be evaluated for medical causes of the problem; however, health professionals and breastfeeding counsellors often do not do this.
Medications and herbs
After the above treatment methods have been attempted, many breastfeeding specialists and lactation consultants recommend medications or herbs that are believed to increase milk supply (galactagogues). In 2010, a randomized, double-blinded, placebo-controlled study demonstrated that domperidone increased milk production in mothers of preterm infants. Another very small study of domperidone found that some women respond to it and others do not. As the effects of domperidone stop when use of the drug is stopped, it is sometimes used for months </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Endometrial hyperplasia and explain in detail? [/INST] Endometrial hyperplasia is a condition of excessive proliferation of the cells of the endometrium, or inner lining of the uterus.
Most cases of endometrial hyperplasia result from high levels of estrogens, combined with insufficient levels of the progesterone-like hormones which ordinarily counteract estrogen's proliferative effects on this tissue. This may occur in a number of settings, including obesity, polycystic ovary syndrome, estrogen-producing tumours (e.g. granulosa cell tumour) and certain formulations of estrogen replacement therapy.
Endometrial hyperplasia with atypia is a significant risk factor for the development or even co-existence of endometrial cancer, so careful monitoring and treatment of women with this disorder are essential.
Classification
Like other hyperplastic disorders, endometrial hyperplasia initially represents a physiological response of endometrial tissue to the growth-promoting actions of estrogen. However, the gland-forming cells of a hyperplastic endometrium may also undergo changes over time which predispose them to cancerous transformation </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Vesicointestinal fistula and explain in detail? [/INST] </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Healing and explain in detail? [/INST] With physical trauma or disease suffered by an organism, healing involves the repairing of damaged tissue(s), organs and the biological system as a whole, and resumption of (normal) functioning. Medicine includes the process by which the cells in the body regenerate and repair to reduce the size of a damaged or necrotic area and replace it with new living tissue. The replacement can happen in two ways: by regeneration, in which the necrotic cells are replaced by new cells that form "like" tissue as was originally there; or by repair, in which injured tissue is replaced with scar tissue. Most organs will heal using a mixture of both mechanisms. Within surgery, healing is more often referred to as recovery, and postoperative recovery has historically been viewed simply as restitution of function and readiness for discharge. More recently, it has been described as an energy-requiring process to decrease physical symptoms, reach a level of emotional well-being, regain functions, and re-establish activities. Healing is also referred to in the context of the grieving process. In psychiatry and psychology, healing is the process by which neuroses and psychoses are resolved to the degree that the client is able to lead a normal or fulfilling existence without being overwhelmed by psychopathological phenomena. This process may involve psychotherapy, pharmaceutical treatment or alternative approaches such as traditional spiritual healing.
Regeneration
In order for an injury to be healed by regeneration, the cell type that was destroyed must be able to replicate. Cells also need a collagen framework along which to grow. Alongside most cells there is either a basement membrane or a collagenous network made by fibroblasts that will guide the cells' growth. Since ischaemia and most toxins do not destroy collagen, it will continue to exist even when the cells around it are dead.
Example
Acute tubular necrosis (ATN) in the kidney is a case in which cells heal completely by regeneration. ATN occurs when the epithelial cells that line the kidney are destroyed either by a lack of oxygen (such as in hypovolemic shock, when blood supply to the kidneys is dramatically reduced) or by toxins (such as some antibiotics, heavy metals or carbon tetrachloride). Although many of these epithelial cells are dead, there is typically patchy necrosis, meaning that there are patches of epithelial cells still alive. In addition, the collagen framework of the tubules remains completely intact. The existing epithelial cells can replicate, and, using the basement membrane as a guide, eventually bring the kidney back to normal. After regeneration is complete, the damage is undetectable, even microscopically. Healing must happen by repair in the case of injury to cells that are unable to regenerate (e.g. neurons). Also, damage to the collagen network (e.g. by enzymes or physical destruction), or its total collapse (as can happen in an infarct), causes healing to take place by repair.
Genetics
Many genes play a role in healing. For instance, in wound healing, P21 has been found to allow mammals to heal spontaneously. It even allows some mammals (like mice) to heal wounds without scars. The LIN28 gene also plays a role in wound healing. It is dormant in most mammals. Also, the proteins MG53 and TGF beta 1 play important roles in wound healing.
Wound healing
In response to an incision or wound, a wound healing cascade is unleashed. This cascade takes place in four phases: clot formation, inflammation, proliferation, and maturation.
Clotting phase
Healing of a wound begins with clot formation to stop bleeding and to reduce infection by bacteria, viruses and fungi. Clotting is followed by neutrophil invasion three to 24 hours after the wound has been incurred, with mitoses beginning in epithelial cells after 24 to 48 hours.
Inflammation phase
In the inflammatory phase, macrophages and other phagocytic cells kill bacteria, debride damaged tissue and release chemical factors such as growth hormones that encourage fibroblasts, epithelial cells and endothelial cells (which make new capillaries) to migrate to the area and divide.
Proliferative phase
In the proliferative phase, immature granulation tissue containing plump, active fibroblasts forms. Fibroblasts quickly produce abundant type III collagen, which fills the defect left by an open wound. Granulation tissue moves, as a wave, from the border of the injury towards the center. As granulation tissue matures, the fibroblasts produce less collagen and become more spindly in appearance. They begin to produce the much stronger type I collagen. Some of the fibroblasts mature into myofibroblasts, which contain the same type of actin found in smooth muscle, which enables them to contract and reduce the size of the wound.
Maturation phase
During the maturation phase of wound healing, unnecessary vessels formed in granulation tissue are removed by apoptosis, and type III collagen is largely replaced by type I. Collagen which was originally disorganized is cross-linked and aligned along tension lines. This phase can last a year or longer. Ultimately a scar made of collagen, containing a small number of fibroblasts is left.
Tissue damaged by inflammation
After inflammation has damaged tissue (for example, when combating a bacterial infection) and pro-inflammatory eicosanoids have completed their function, healing proceeds in 4 phases.
Recall phase
In the recall phase the adrenal glands increase production of cortisol which shuts down eicosanoid production and inflammation.
Resolution phase
In the Resolution phase, pathogens and damaged tissue are removed by macrophages (white blood cells). Red blood cells are also removed from the damaged tissue by macrophages. Failure to remove all of the damaged cells and pathogens may retrigger inflammation. The two macrophage subsets, M1 and M2, play a crucial role in this phase: M1 macrophages are pro-inflammatory, while M2 macrophages are regenerative, and the plasticity between the two subsets determines whether the tissue becomes inflamed or is repaired.
Regeneration phase
In the Regeneration phase, blood vessels are repaired and new cells form in the damaged site similar to the cells that were damaged and removed. Some cells such as neurons and muscle cells (especially in the heart) are slow to recover.
Repair phase
In the Repair phase, new tissue is generated which requires a balance of anti-inflammatory and pro-inflammatory eicosanoids. Anti-inflammatory eicosanoids include lipoxins, epi-lipoxins, and resolvins, which cause release of growth hormones.
See also
Scar free healing
Health
References
External links
How wounds heal and tumors form: With this simple Flash demonstration, Harvard professor Donald Ingber explains how wounds heal, why scars form, and how tumors develop. Presented by Children's Hospital Boston.
Wound Healing and Repair
Lorenz H.P. and Longaker M.T </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Carcinoid syndrome and explain in detail? [/INST] Carcinoid syndrome is a paraneoplastic syndrome comprising the signs and symptoms that occur secondary to carcinoid tumors. The syndrome includes flushing and diarrhea, and less frequently, heart failure, vomiting and bronchoconstriction. It is caused by endogenous secretion of mainly serotonin and kallikrein.
Signs and symptoms
The carcinoid syndrome occurs in approximately 5% of carcinoid tumors and becomes manifest when vasoactive substances from the tumors enter the systemic circulation, escaping hepatic degradation. If the primary tumor is from the gastrointestinal tract (hence releasing serotonin into the hepatic portal circulation), carcinoid syndrome generally does not occur until the disease is so advanced that it overwhelms the liver's ability to metabolize the released serotonin.
Flushing: The most important clinical finding is flushing of the skin, usually of the head and the upper part of thorax.
Diarrhea: It may be associated with abdominal cramping. This symptom is due to the action of serotonin, histamine, and gastrin.
Abdominal pain: It is due to hepatomegaly, small bowel obstruction or lack of oxygen to the small bowel.
Bronchoconstriction, which may be histamine or serotonin-induced, affects about 15% of those having carcinoid syndrome and often accompanies flushing, sneezing, and shortness of breath.
Carcinoid heart disease: About 19% to 60% of those affected by carcinoid syndrome develop carcinoid heart disease. Serotonin induces fibrosis in the valves of the right heart, especially the tricuspid valve.
Pathophysiology
Carcinoid tumors produce several vasoactive substances, most prominently serotonin. It is commonly thought that serotonin is the cause of the flushing, but this is only partially correct. The flushing also results from secretion of kallikrein, the enzyme that catalyzes the conversion of kininogen to lysyl-bradykinin. The latter is further converted to bradykinin, one of the most powerful vasodilators known.
Other components of the carcinoid syndrome are diarrhea (probably caused by the increased serotonin, which greatly increases peristalsis, leaving less time for fluid absorption), a pellagra-like syndrome (probably caused by diversion of large amounts of tryptophan from synthesis of the vitamin B3 niacin, which is needed for NAD production, to the synthesis of serotonin and other 5-hydroxyindoles), fibrotic lesions of the endocardium, particularly on the right side of the heart resulting in insufficiency of the tricuspid valve and, less frequently, the pulmonary valve and, uncommonly, bronchoconstriction. The pathogenesis of the cardiac lesions and the bronchoconstriction is unknown, but the former probably involves activation of serotonin 5-HT2B receptors by serotonin. When the primary tumor is in the gastrointestinal tract, as it is in the great majority of cases, the serotonin and kallikrein are inactivated in the liver; manifestations of carcinoid syndrome do not occur until there are metastases to the liver or when the cancer is accompanied by liver failure (cirrhosis). Carcinoid tumors arising in the bronchi may be associated with manifestations of carcinoid syndrome without liver metastases because their biologically active products reach the systemic circulation before passing through the liver and being metabolized. In most patients, there is an increased urinary excretion of 5-HIAA (5-hydroxyindoleacetic acid), a degradation product of serotonin. The biology of these tumors differs from many other tumor types. Ongoing research on the biology of these tumors may reveal new mechanisms for tumor development.
Diagnosis
With a certain degree of clinical suspicion, the most useful initial test is the 24-hour urine levels of 5-HIAA (5-hydroxyindoleacetic acid), the end product of serotonin metabolism. Patients with carcinoid syndrome usually excrete more than 25 mg of 5-HIAA per day.
Imaging
For localization of both primary lesions and metastases, the initial imaging method is Octreoscan, in which indium-111 labelled somatostatin analogues (octreotide) are used in scintigraphy for detecting tumors expressing somatostatin receptors. Median detection rates with Octreoscan are about 89%, in contrast to other imaging techniques such as CT scan and MRI, with detection rates of about 80%. Imaging with gallium-68 labelled somatostatin analogues such as 68Ga-DOTA-Octreotate (DOTATATE), performed on a PET/CT scanner, is superior to conventional Octreoscan.
Usually, on a CT scan, a spider-like/crab-like change is visible in the mesentery due to the fibrosis from the release of serotonin. 18F-FDG PET/CT, which evaluates for increased metabolism of glucose, may also aid in localizing the carcinoid lesion or evaluating for metastases. Chromogranin A and platelet serotonin are increased. Transthoracic echocardiography shows valve thickening and reduction in mobility of the valves. Cardiac MRI is useful in showing ventricular anatomy and function.
Localization of tumour
Tumour localization may be extremely difficult </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Hepatopulmonary syndrome and explain in detail? [/INST] In medicine, hepatopulmonary syndrome is a syndrome of shortness of breath and hypoxemia (low oxygen levels in the blood of the arteries) caused by vasodilation (broadening of the blood vessels) in the lungs of patients with liver disease. Dyspnea and hypoxemia are worse in the upright position (which is called platypnea and orthodeoxia, respectively).
Pathophysiology
The hepatopulmonary syndrome results from the formation of microscopic intrapulmonary arteriovenous dilatations in patients with both chronic and, far less commonly, acute liver failure. The mechanism is unknown but is thought to be due to increased liver production or decreased liver clearance of vasodilators, possibly involving nitric oxide. The dilation of these blood vessels causes overperfusion relative to ventilation, leading to ventilation-perfusion mismatch and hypoxemia. There is an increased gradient between the partial pressure of oxygen in the alveoli of the lung and adjacent arteries (alveolar-arterial [A-a] gradient) while breathing room air. Patients with HPS have platypnea-orthodeoxia syndrome (POS); that is, because intrapulmonary vascular dilations (IPVDs) predominate in the bases of the lungs, standing worsens hypoxemia (orthodeoxia) and dyspnea (platypnea), and the supine position improves oxygenation as blood is redistributed from the bases to the apices. Additionally, late in cirrhosis, it is common to develop high-output failure, which would lead to less time in the capillaries per red blood cell, exacerbating the hypoxemia.
Diagnosis
The hepatopulmonary syndrome is suspected in any patient with known liver disease who reports dyspnea (particularly platypnea). Patients with clinically significant symptoms should undergo pulse oximetry. If the syndrome is advanced, arterial blood gases should be measured on room air. Hepatopulmonary syndrome (HPS) consists of the triad of liver dysfunction, otherwise unexplained hypoxemia, and intrapulmonary vascular dilation (IPVD). A useful diagnostic test is contrast echocardiography. Intravenous microbubbles (>10 micrometers in diameter) from agitated normal saline that are normally obstructed by pulmonary capillaries (normally <8 to 15 micrometers) rapidly transit the lung and appear in the left atrium of the heart within 7 heart beats </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Immune system and explain in detail? [/INST] The immune system is a network of biological processes that protects an organism from diseases. It detects and responds to a wide variety of pathogens, from viruses to parasitic worms, as well as cancer cells and objects such as wood splinters, distinguishing them from the organism's own healthy tissue. Many species have two major subsystems of the immune system. The innate immune system provides a preconfigured response to broad groups of situations and stimuli. The adaptive immune system provides a tailored response to each stimulus by learning to recognize molecules it has previously encountered. Both use molecules and cells to perform their functions.
Nearly all organisms have some kind of immune system. Bacteria have a rudimentary immune system in the form of enzymes that protect against virus infections. Other basic immune mechanisms evolved in ancient plants and animals and remain in their modern descendants. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms, including the ability to adapt to recognize pathogens more efficiently. Adaptive (or acquired) immunity creates an immunological memory leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination.
Dysfunction of the immune system can cause autoimmune diseases, inflammatory diseases and cancer. Immunodeficiency occurs when the immune system is less active than normal, resulting in recurring and life-threatening infections. In humans, immunodeficiency can be the result of a genetic disease such as severe combined immunodeficiency, acquired conditions such as HIV/AIDS, or the use of immunosuppressive medication. Autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign organisms. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Immunology covers the study of all aspects of the immune system.
Layered defense
The immune system protects its host from infection with layered defenses of increasing specificity. Physical barriers prevent pathogens such as bacteria and viruses from entering the organism. If a pathogen breaches these barriers, the innate immune system provides an immediate, but non-specific response. Innate immune systems are found in all animals. If pathogens successfully evade the innate response, vertebrates possess a second layer of protection, the adaptive immune system, which is activated by the innate response. Here, the immune system adapts its response during an infection to improve its recognition of the pathogen. This improved response is then retained after the pathogen has been eliminated, in the form of an immunological memory, and allows the adaptive immune system to mount faster and stronger attacks each time this pathogen is encountered.
Both innate and adaptive immunity depend on the ability of the immune system to distinguish between self and non-self molecules. In immunology, self molecules are components of an organism's body that can be distinguished from foreign substances by the immune system. Conversely, non-self molecules are those recognized as foreign molecules. One class of non-self molecules is called antigens (originally named for being antibody generators); these are defined as substances that bind to specific immune receptors and elicit an immune response.
Surface barriers
Several barriers protect organisms from infection, including mechanical, chemical, and biological barriers. The waxy cuticle of most leaves, the exoskeleton of insects, the shells and membranes of externally deposited eggs, and skin are examples of mechanical barriers that are the first line of defense against infection. Organisms cannot be completely sealed from their environments, so systems act to protect body openings such as the lungs, intestines, and the genitourinary tract. In the lungs, coughing and sneezing mechanically eject pathogens and other irritants from the respiratory tract. The flushing action of tears and urine also mechanically expels pathogens, while mucus secreted by the respiratory and gastrointestinal tract serves to trap and entangle microorganisms. Chemical barriers also protect against infection. The skin and respiratory tract secrete antimicrobial peptides such as the β-defensins. Enzymes such as lysozyme and phospholipase A2 in saliva, tears, and breast milk are also antibacterials. Vaginal secretions serve as a chemical barrier following menarche, when they become slightly acidic, while semen contains defensins and zinc to kill pathogens. In the stomach, gastric acid serves as a chemical defense against ingested pathogens. Within the genitourinary and gastrointestinal tracts, commensal flora serve as biological barriers by competing with pathogenic bacteria for food and space and, in some cases, changing the conditions in their environment, such as pH or available iron. As a result, the probability that pathogens will reach sufficient numbers to cause illness is reduced.
Innate immune system
Microorganisms or toxins that successfully enter an organism encounter the cells and mechanisms of the innate immune system. The innate response is usually triggered when microbes are identified by pattern recognition receptors, which recognize components that are conserved among broad groups of microorganisms, or when damaged, injured or stressed cells send out alarm signals, many of which are recognized by the same receptors as those that recognize pathogens. Innate immune defenses are non-specific, meaning these systems respond to pathogens in a generic way. This system does not confer long-lasting immunity against a pathogen. The innate immune system is the dominant system of host defense in most organisms, and the only one in plants.
Immune sensing
Cells in the innate immune system use pattern recognition receptors to recognize molecular structures that are produced by pathogens. These receptors are proteins expressed mainly by cells of the innate immune system, such as dendritic cells, macrophages, monocytes, neutrophils and epithelial cells, and they identify two classes of molecules: pathogen-associated molecular patterns (PAMPs), which are associated with microbial pathogens, and damage-associated molecular patterns (DAMPs), which are associated with components of the host's cells that are released during cell damage or cell death. Recognition of extracellular or endosomal PAMPs is mediated by transmembrane proteins known as toll-like receptors (TLRs). TLRs share a typical structural motif, the leucine-rich repeats (LRR), which give them a curved shape. Toll-like receptors were first discovered in Drosophila and trigger the synthesis and secretion of cytokines and activation of other host defense programs that are necessary for both innate and adaptive immune responses. Ten toll-like receptors have been described in humans. Cells in the innate immune system also have pattern recognition receptors inside the cell, which detect infection or cell damage. Three major classes of these "cytosolic" receptors are NOD-like receptors, RIG (retinoic acid-inducible gene)-like receptors, and cytosolic DNA sensors.
Innate immune cells
Some leukocytes (white blood cells) act like independent, single-celled organisms and are the second arm of the innate immune system. The innate leukocytes include the "professional" phagocytes (macrophages, neutrophils, and dendritic cells). These cells identify and eliminate pathogens, either by attacking larger pathogens through contact or by engulfing and then killing microorganisms. The other cells involved in the innate response include innate lymphoid cells, mast cells, eosinophils, basophils, and natural killer cells.
Phagocytosis is an important feature of cellular innate immunity, performed by cells called phagocytes that engulf pathogens or particles. Phagocytes generally patrol the body searching for pathogens, but can be called to specific locations by cytokines. Once a pathogen has been engulfed by a phagocyte, it becomes trapped in an intracellular vesicle called a phagosome, which subsequently fuses with another vesicle called a lysosome to form a phagolysosome. The pathogen is killed by the activity of digestive enzymes or following a respiratory burst that releases free radicals into the phagolysosome. Phagocytosis evolved as a means of acquiring nutrients, but this role was extended in phagocytes to include engulfment of pathogens as a defense mechanism. Phagocytosis probably represents the oldest form of host defense, as phagocytes have been identified in both vertebrate and invertebrate animals.
Neutrophils and macrophages are phagocytes that travel throughout the body in pursuit of invading pathogens. Neutrophils are normally found in the bloodstream and are the most abundant type of phagocyte, representing 50% to 60% of total circulating leukocytes. During the acute phase of inflammation, neutrophils migrate toward the site of inflammation in a process called chemotaxis, and are usually the first cells to arrive at the scene of infection. Macrophages are versatile cells that reside within tissues and produce an array of chemicals including enzymes, complement proteins, and cytokines; they can also act as scavengers that rid the body of worn-out cells and other debris, and as antigen-presenting cells (APCs) that activate the adaptive immune system.
Dendritic cells are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, as both have many spine-like projections. Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system.
Granulocytes are leukocytes that have granules in their cytoplasm. In this category are neutrophils, mast cells, basophils, and eosinophils. Mast cells reside in connective tissues and mucous membranes, and regulate the inflammatory response. They are most often associated with allergy and anaphylaxis. Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma.
Innate lymphoid cells (ILCs) are a group of innate immune cells derived from the common lymphoid progenitor and belonging to the lymphoid lineage. These cells are defined by the absence of an antigen-specific B or T cell receptor (TCR), owing to the lack of the recombination-activating gene. ILCs do not express myeloid or dendritic cell markers.
Natural killer (NK) cells are lymphocytes and a component of the innate immune system that does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self." This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex)—a situation that can arise in viral infections of host cells. Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin-like receptors, which essentially put the brakes on NK cells.
Inflammation
Inflammation is one of the first responses of the immune system to infection. The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens. The pattern-recognition receptors called inflammasomes are multiprotein complexes (consisting of an NLR, the adaptor protein ASC, and the effector molecule pro-caspase-1) that form in response to cytosolic PAMPs and DAMPs, whose function is to generate active forms of the inflammatory cytokines IL-1β and IL-18.
Humoral defenses
The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response. Many species have complement systems, including non-mammalian species such as plants, fish, and some invertebrates. In humans, this response is activated by complement binding to antibodies that have attached to these microbes or by the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response. The speed of the response is a result of signal amplification that occurs after sequential proteolytic activation of complement molecules, which are also proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback. The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane.
Adaptive immune system
The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen. The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it.
Recognition of antigen
The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow. B cells are involved in the humoral immune response, whereas T cells are involved in the cell-mediated immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype are the γδ T cells, which recognize intact antigens that are not bound to MHC receptors. Double-positive T cells are exposed to a wide variety of self-antigens in the thymus; iodine is necessary for thymus development and activity. In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface that recognizes native (unprocessed) antigen without any need for antigen processing. Such antigens may be large molecules found on the surfaces of pathogens, but can also be small haptens (such as penicillin) attached to a carrier molecule. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represents all the antibodies that the body can manufacture. When B or T cells encounter their related antigens, they multiply and many "clones" of the cells are produced that target the same antigen. This is called clonal selection.
Antigen presentation to T lymphocytes
Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule.
Cell mediated immunity
There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition, there are regulatory T cells, which have a role in modulating the immune response.
Killer T cells
Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen. Killer T cells are activated when their T-cell receptor binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of other granule toxins, the granzymes (proteases), induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below).
Helper T cells
Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen. These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks.
Helper T cells express T cell receptors that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (such as Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires a longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells.
Gamma delta T cells
Gamma delta T cells (γδ T cells) possess an alternative T-cell receptor (TCR) as opposed to CD4+ and CD8+ (αβ) T cells and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other unconventional T cell subsets bearing invariant TCRs, such as CD1d-restricted natural killer T cells, γδ T cells straddle the border between innate and adaptive immunity. On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells.
Humoral immune response
A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen. This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell. As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells.
Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly through the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother. Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies. This is passive immunity because the fetus does not actually make any memory cells or antibodies—it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another.
Immunological memory
When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory.
Physiological regulation
The immune system is involved in many aspects of physiological regulation in the body. The immune system interacts intimately with other systems, such as the endocrine and the nervous systems. The immune system also plays a crucial role in embryogenesis (development of the embryo), as well as in tissue repair and regeneration.
Hormones
Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D.
Vitamin D
Although cellular studies indicate that vitamin D has receptors and probable functions in the immune system, there is no clinical evidence to prove that vitamin D deficiency increases the risk for immune diseases or that vitamin D supplementation lowers immune disease risk. A 2011 United States Institute of Medicine report stated that "outcomes related to ... immune functioning and autoimmune disorders, and infections ... could not be linked reliably with calcium or vitamin D intake and were often conflicting."
Sleep and rest
The immune system is affected by sleep and rest, and sleep deprivation is detrimental to immune function. Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (NREM) sleep. Thus the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep.
In people with sleep deprivation, active immunizations may have a diminished effect and may result in lower antibody production, and a lower immune response, than would be noted in a well-rested individual. Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and circadian rhythms, can be affected by the disturbance of natural light and dark cycles caused by sleep deprivation. These disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma.
In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both innate and adaptive immunity. First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine causes increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. During this time of a slowly evolving adaptive immune response, there is a peak in undifferentiated or less differentiated cells, like naïve and central memory T cells. In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T cells, a shift of the Th1/Th2 cytokine balance towards one that supports Th1, an increase in overall Th cell proliferation, and naïve T cell migration to lymph nodes. This is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses.
During wake periods, differentiated effector cells, such as cytotoxic natural killer cells and cytotoxic T lymphocytes, peak to elicit an effective response against any intruding pathogens. Anti-inflammatory molecules, such as cortisol and catecholamines, also peak during awake active times. Inflammation would cause serious cognitive and physical impairment if it were to occur during waking hours; instead, inflammation tends to occur during sleep, when melatonin is present. Inflammation causes a great deal of oxidative stress, and the presence of melatonin during sleep could actively counteract free radical production during this time.
Physical exercise
Physical exercise has a positive effect on the immune system and, depending on the frequency and intensity, moderates the pathogenic effects of diseases caused by bacteria and viruses. Immediately after intense exercise there is a transient immunodepression, in which the number of circulating lymphocytes decreases and antibody production declines. This may give rise to a window of opportunity for infection and for reactivation of latent virus infections, but the evidence is inconclusive.
Changes at the cellular level
During exercise there is an increase in circulating white blood cells of all types. This is caused by the frictional force of blood flowing over the endothelial cell surface and by catecholamines acting on β-adrenergic receptors (βARs). The number of neutrophils in the blood increases and remains raised for up to six hours, and immature forms are present. Although the increase in neutrophils ("neutrophilia") is similar to that seen during bacterial infections, after exercise the cell population returns to normal by around 24 hours.
The number of circulating lymphocytes (mainly natural killer cells) decreases during intense exercise but returns to normal after 4 to 6 hours. Although up to 2% of the cells die, most migrate from the blood to the tissues, mainly the intestines and lungs, where pathogens are most likely to be encountered.
Some monocytes leave the blood circulation and migrate to the muscles, where they differentiate and become macrophages. These cells differentiate into two types: proliferative macrophages, which are responsible for increasing the number of stem cells, and restorative macrophages, which are involved in their maturation into muscle cells.
Repair and regeneration
The immune system, particularly the innate component, plays a decisive role in tissue repair after an insult. Key actors include macrophages and neutrophils, but other cellular actors, including γδ T cells, innate lymphoid cells (ILCs), and regulatory T cells (Tregs), are also important. The plasticity of immune cells and the balance between pro-inflammatory and anti-inflammatory signals are crucial aspects of efficient tissue repair. Immune components and pathways are involved in regeneration as well, for example in amphibians, as in axolotl limb regeneration. According to one hypothesis, organisms that can regenerate (e.g., axolotls) could be less immunocompetent than organisms that cannot regenerate.
Disorders of human immunity
Failures of host defense occur and fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities.
Immunodeficiencies
Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence. In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function, while malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can also be inherited or acquired. Severe combined immunodeficiency is a rare genetic disorder characterized by the disturbed development of functional T cells and B cells caused by numerous genetic mutations. Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency.
Autoimmunity
Overactive immune responses form the other end of immune dysfunction, particularly the autoimmune disorders. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus.
Hypersensitivity
Hypersensitivity is an immune response that damages the body's own tissues. It is divided into four classes (Types I–IV) based on the mechanisms involved and the time course of the hypersensitive reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen.
Type II hypersensitivity occurs when antibodies bind to antigens on the individuals own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or delayed type hypersensitivity) usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also involve contact dermatitis. These reactions are mediated by T cells, monocytes, and macrophages.
Idiopathic inflammation
Inflammation is one of the first responses of the immune system to infection, but it can appear without known cause.
Manipulation in medicine
The immune response can be manipulated to suppress unwanted responses resulting from autoimmunity, allergy, and transplant rejection, and to stimulate protective responses against pathogens that largely elude the immune system (see immunization) or cancer.
Immunosuppression
Immunosuppressive drugs are used to control autoimmune disorders or inflammation when excessive tissue damage occurs, and to prevent rejection after an organ transplant.
Anti-inflammatory drugs are often used to control the effects of inflammation. Glucocorticoids are the most powerful of these drugs and can have many undesirable side effects, such as central obesity, hyperglycemia, and osteoporosis. Their use is tightly controlled. Lower doses of anti-inflammatory drugs are often used in conjunction with cytotoxic or immunosuppressive drugs such as methotrexate or azathioprine.
Cytotoxic drugs inhibit the immune response by killing dividing cells such as activated T cells. This killing is indiscriminate, and other constantly dividing cells and the organs that contain them are also affected, which causes toxic side effects. Immunosuppressive drugs such as cyclosporin prevent T cells from responding to signals correctly by inhibiting signal transduction pathways.
Immunostimulation
Claims made by marketers of various products and by alternative health providers, such as chiropractors, homeopaths, and acupuncturists, that they can stimulate or "boost" the immune system generally lack meaningful explanation and evidence of effectiveness.
Vaccination
Long-term active memory is acquired following infection by activation of B and T cells. Active immunity can also be generated artificially, through vaccination. The principle behind vaccination (also called immunization) is to introduce an antigen from a pathogen to stimulate the immune system and develop specific immunity against that particular pathogen without causing disease associated with that organism. This deliberate induction of an immune response is successful because it exploits the natural specificity of the immune system, as well as its inducibility. With infectious disease remaining one of the leading causes of death in the human population, vaccination represents the most effective manipulation of the immune system mankind has developed.
Many vaccines are based on acellular components of micro-organisms, including harmless toxin components. Since many antigens derived from acellular vaccines do not strongly induce the adaptive response, most bacterial vaccines are provided with additional adjuvants that activate the antigen-presenting cells of the innate immune system and maximize immunogenicity.
Tumor immunology
Another important role of the immune system is to identify and eliminate tumors. This is called immune surveillance. The transformed cells of tumors express antigens that are not found on normal cells. To the immune system, these antigens appear foreign, and their presence causes immune cells to attack the transformed tumor cells. The antigens expressed by tumors have several sources; some are derived from oncogenic viruses like human papillomavirus, which causes cancer of the cervix, vulva, vagina, penis, anus, mouth, and throat, while others are the organism's own proteins that occur at low levels in normal cells but reach high levels in tumor cells. One example is an enzyme called tyrosinase that, when expressed at high levels, transforms certain skin cells (for example, melanocytes) into tumors called melanomas. A third possible source of tumor antigens is proteins normally important for regulating cell growth and survival that commonly mutate into cancer-inducing molecules called oncogenes.
The main response of the immune system to tumors is to destroy the abnormal cells using killer T cells, sometimes with the assistance of helper T cells. Tumor antigens are presented on MHC class I molecules in a similar way to viral antigens. This allows killer T cells to recognize the tumor cell as abnormal. NK cells also kill tumor cells in a similar way, especially if the tumor cells have fewer MHC class I molecules on their surface than normal; this is a common phenomenon with tumors. Sometimes antibodies are generated against tumor cells, allowing for their destruction by the complement system.
Some tumors evade the immune system and go on to become cancers. Tumor cells often have a reduced number of MHC class I molecules on their surface, thus avoiding detection by killer T cells. Some tumor cells also release products that inhibit the immune response; for example, by secreting the cytokine TGF-β, which suppresses the activity of macrophages and lymphocytes. In addition, immunological tolerance may develop against tumor antigens, so the immune system no longer attacks the tumor cells.
Paradoxically, macrophages can promote tumor growth when tumor cells send out cytokines that attract macrophages, which then generate cytokines and growth factors such as tumor-necrosis factor alpha that nurture tumor development or promote stem-cell-like plasticity. In addition, a combination of hypoxia in the tumor and a cytokine produced by macrophages induces tumor cells to decrease production of a protein that blocks metastasis, thereby assisting the spread of cancer cells. Anti-tumor M1 macrophages are recruited in the early phases of tumor development but are progressively differentiated to M2 macrophages with pro-tumor effects, an immunosuppressive switch. The hypoxia reduces cytokine production for the anti-tumor response, and macrophages progressively acquire pro-tumor M2 functions driven by the tumor microenvironment, including IL-4 and IL-10. Cancer immunotherapy covers the medical ways to stimulate the immune system to attack cancer tumors.
Predicting immunogenicity
Some drugs can cause a neutralizing immune response, meaning that the immune system produces neutralizing antibodies that counteract the action of the drugs, particularly if the drugs are administered repeatedly, or in larger doses. This limits the effectiveness of drugs based on larger peptides and proteins (which are typically larger than 6000 Da). In some cases, the drug itself is not immunogenic, but may be co-administered with an immunogenic compound, as is sometimes the case for Taxol. Computational methods have been developed to predict the immunogenicity of peptides and proteins, which are particularly useful in designing therapeutic antibodies, assessing the likely virulence of mutations in viral coat particles, and validating proposed peptide-based drug treatments. Early techniques relied mainly on the observation that hydrophilic amino acids are overrepresented in epitope regions relative to hydrophobic amino acids; more recent developments rely on machine learning techniques using databases of existing known epitopes, usually on well-studied virus proteins, as a training set. A publicly accessible database has been established for the cataloguing of epitopes from pathogens known to be recognizable by B cells. The emerging field of bioinformatics-based studies of immunogenicity is referred to as immunoinformatics. Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response.
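To make the early hydrophilicity-based approach concrete, the short sketch below scores sliding windows of a peptide using an approximate Hopp-Woods-style hydrophilicity scale. The scale values, window size, and example sequence are illustrative assumptions for demonstration only, not a validated prediction tool; modern immunoinformatics methods instead train machine-learning models on epitope databases.

# Illustrative sketch of early hydrophilicity-based epitope scoring
# (a Hopp-Woods-style sliding window). Scale values are approximate
# and for demonstration only.
HYDROPHILICITY = {
    "R": 3.0, "D": 3.0, "E": 3.0, "K": 3.0, "S": 0.3, "N": 0.2,
    "Q": 0.2, "G": 0.0, "P": 0.0, "T": -0.4, "A": -0.5, "H": -0.5,
    "C": -1.0, "M": -1.3, "V": -1.5, "I": -1.8, "L": -1.8,
    "Y": -2.3, "F": -2.5, "W": -3.4,
}

def window_scores(sequence, window=6):
    # Average hydrophilicity over each sliding window; high-scoring
    # windows are candidate B-cell epitope regions.
    scores = []
    for i in range(len(sequence) - window + 1):
        segment = sequence[i:i + window]
        avg = sum(HYDROPHILICITY[aa] for aa in segment) / window
        scores.append((i, segment, avg))
    return scores

seq = "MKTRDEESAKLLVVIWGG"  # hypothetical peptide
for start, segment, score in sorted(window_scores(seq), key=lambda t: -t[2])[:3]:
    print(f"position {start:2d}  {segment}  score {score:+.2f}")

The machine-learning methods mentioned above replace this fixed scale with classifiers trained on catalogues of known epitopes, but the windowed-scoring structure is the same.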
Evolution and other mechanisms
Evolution of the immune system
It is likely that a multicomponent, adaptive immune system arose with the first vertebrates, as invertebrates do not generate lymphocytes or an antibody-based humoral response. Many species, however, use mechanisms that appear to be precursors of these aspects of vertebrate immunity. Immune systems appear even in the structurally simplest forms of life, with bacteria using a unique defense mechanism, called the restriction-modification system, to protect themselves from viral pathogens, called bacteriophages. Prokaryotes (bacteria and archaea) also possess acquired immunity, through a system that uses CRISPR sequences to retain fragments of the genomes of phage that they have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Prokaryotes also possess other defense mechanisms. Offensive elements of the immune system are also present in unicellular eukaryotes, but studies of their roles in defense are few.
Pattern recognition receptors are proteins used by nearly all organisms to identify molecules associated with pathogens. Antimicrobial peptides called defensins are an evolutionarily conserved component of the innate immune response found in all animals and plants, and represent the main form of invertebrate systemic immunity. The complement system and phagocytic cells are also used by most forms of invertebrate life. Ribonucleases and the RNA interference pathway are conserved across all eukaryotes, and are thought to play a role in the immune response to viruses.
Unlike animals, plants lack phagocytic cells, but many plant immune responses involve systemic chemical signals that are sent through the plant. Individual plant cells respond to molecules associated with pathogens known as pathogen-associated molecular patterns or PAMPs. When a part of a plant becomes infected, the plant produces a localized hypersensitive response, whereby cells at the site of infection undergo rapid apoptosis to prevent the spread of the disease to other parts of the plant. Systemic acquired resistance is a type of defensive response used by plants that renders the entire plant resistant to a particular infectious agent. RNA silencing mechanisms are particularly important in this systemic response as they can block virus replication.
Alternative adaptive immune system
Evolution of the adaptive immune system occurred in an ancestor of the jawed vertebrates. Many of the classical molecules of the adaptive immune system (for example, immunoglobulins and T-cell receptors) exist only in jawed vertebrates. A distinct lymphocyte-derived molecule has been discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals possess a large array of molecules called variable lymphocyte receptors (VLRs) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity.
Manipulation by pathogens
The success of any pathogen depends on its ability to elude host immune responses. Therefore, pathogens have evolved several methods that allow them to successfully infect a host while evading detection or destruction by the immune system. Bacteria often overcome physical barriers by secreting enzymes that digest the barrier, for example, by using a type II secretion system. Alternatively, using a type III secretion system, they may insert a hollow tube into the host cell, providing a direct route for proteins to move from the pathogen to the host. These proteins are often used to shut down host defenses.
An evasion strategy used by several pathogens to avoid the innate immune system is to hide within the cells of their host (also called intracellular pathogenesis). Here, a pathogen spends most of its life-cycle inside host cells, where it is shielded from direct contact with immune cells, antibodies and complement. Some examples of intracellular pathogens include viruses, the food poisoning bacterium Salmonella and the eukaryotic parasites that cause malaria (Plasmodium spp.) and leishmaniasis (Leishmania spp.). Other bacteria, such as Mycobacterium tuberculosis, live inside a protective capsule that prevents lysis by complement. Many pathogens secrete compounds that diminish or misdirect the host's immune response. Some bacteria form biofilms to protect themselves from the cells and proteins of the immune system. Such biofilms are present in many successful infections, such as the chronic Pseudomonas aeruginosa and Burkholderia cenocepacia infections characteristic of cystic fibrosis. Other bacteria generate surface proteins that bind to antibodies, rendering them ineffective; examples include Streptococcus (protein G), Staphylococcus aureus (protein A), and Peptostreptococcus magnus (protein L).
The mechanisms used to evade the adaptive immune system are more complicated. The simplest approach is to rapidly change non-essential epitopes (amino acids and/or sugars) on the surface of the pathogen, while keeping essential epitopes concealed. This is called antigenic variation. An example is HIV, which mutates rapidly, so the proteins on its viral envelope that are essential for entry into its host target cell are constantly changing. These frequent changes in antigens may explain the failures of vaccines directed at this virus. The parasite Trypanosoma brucei uses a similar strategy, constantly switching one type of surface protein for another, allowing it to stay one step ahead of the antibody response. Masking antigens with host molecules is another common strategy for avoiding detection by the immune system. In HIV, the envelope that covers the virion is formed from the outermost membrane of the host cell; such "self-cloaked" viruses make it difficult for the immune system to identify them as "non-self" structures.
History of immunology
Immunology is a science that examines the structure and function of the immune system. It originates from medicine and early studies on the causes of immunity to disease. The earliest known reference to immunity was during the plague of Athens in 430 BC. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. In the 10th century, the Persian physician al-Razi (also known as Rhazes) wrote the first recorded theory of acquired immunity, noting that a smallpox bout protected its survivors from future infections. Although he explained the immunity in terms of "excess moisture" being expelled from the blood—therefore preventing a second occurrence of the disease—this theory explained many observations about smallpox known during this time. In the 18th century, Pierre-Louis Moreau de Maupertuis experimented with scorpion venom and observed that certain dogs and mice were immune to this venom.
These and other observations of acquired immunity were later exploited by Louis Pasteur in his development of vaccination and his proposed germ theory of disease. Pasteur's theory was in direct opposition to contemporary theories of disease, such as the miasma theory. It was not until Robert Koch's 1891 proofs, for which he was awarded a Nobel Prize in 1905, that microorganisms were confirmed as the cause of infectious disease. Viruses were confirmed as human pathogens in 1901, with the discovery of the yellow fever virus by Walter Reed.
Immunology made a great advance towards the end of the 19th century, through rapid developments in the study of humoral immunity and cellular immunity. Particularly important was the work of Paul Ehrlich, who proposed the side-chain theory to explain the specificity of the antigen-antibody reaction; his contributions to the understanding of humoral immunity were recognized by the award of a joint Nobel Prize in 1908, along with the founder of cellular immunology, Elie Metchnikoff. In 1974, Niels Kaj Jerne developed the immune network theory; he shared a Nobel Prize in 1984 with Georges J. F. Köhler and César Milstein. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Cutaneous T-cell lymphoma and explain in detail? [/INST] </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Cerebrovascular disease and explain in detail? [/INST] Cerebrovascular disease includes a variety of medical conditions that affect the blood vessels of the brain and the cerebral circulation. Arteries supplying oxygen and nutrients to the brain are often damaged or deformed in these disorders. The most common presentation of cerebrovascular disease is an ischemic stroke or mini-stroke and sometimes a hemorrhagic stroke. Hypertension (high blood pressure) is the most important contributing risk factor for stroke and cerebrovascular diseases as it can change the structure of blood vessels and result in atherosclerosis. Atherosclerosis narrows blood vessels in the brain, resulting in decreased cerebral perfusion. Other risk factors that contribute to stroke include smoking and diabetes. Narrowed cerebral arteries can lead to ischemic stroke, but continually elevated blood pressure can also cause tearing of vessels, leading to a hemorrhagic stroke.
A stroke usually presents with an abrupt onset of a neurologic deficit – such as hemiplegia (one-sided weakness), numbness, aphasia (language impairment), or ataxia (loss of coordination) – attributable to a focal vascular lesion. The neurologic symptoms manifest within seconds because neurons need a continual supply of nutrients, including glucose and oxygen, that are provided by the blood. Therefore, if blood supply to the brain is impeded, injury and energy failure are rapid.
Besides hypertension, there are also many less common causes of cerebrovascular disease, including congenital and idiopathic conditions such as CADASIL, aneurysms, amyloid angiopathy, arteriovenous malformations, fistulas, and arterial dissections. Many of these diseases can be asymptomatic until an acute event, such as a stroke, occurs. Cerebrovascular diseases can also present less commonly with headache or seizures. Any of these diseases can result in vascular dementia due to ischemic damage to the brain.
Signs and symptoms
The most common presentation of cerebrovascular diseases is an acute stroke, which occurs when blood supply to the brain is compromised. Symptoms of stroke are usually rapid in onset, and may include weakness of one side of the face or body, numbness on one side of the face or body, inability to produce or understand speech, vision changes, and balance difficulties. Hemorrhagic strokes can present with a very severe, sudden headache associated with vomiting, neck stiffness, and decreased consciousness. Symptoms vary depending on the location and the size of the area of involvement of the stroke. Edema, or swelling, of the brain may occur, which increases intracranial pressure and may result in brain herniation. A stroke may result in coma or death if it involves key areas of the brain.
Other symptoms of cerebrovascular disease include migraines, seizures, epilepsy, or cognitive decline. However, cerebrovascular disease may go undetected for years until an acute stroke occurs. In addition, patients with some rare congenital cerebrovascular diseases may begin to have these symptoms in childhood.
Causes
Congenital
Congenital diseases are medical conditions that are present at birth and that may be associated with or inherited through genes. Examples of congenital cerebrovascular diseases include arteriovenous malformations, germinal matrix hemorrhage, and CADASIL (cerebral autosomal-dominant arteriopathy with subcortical infarcts and leukoencephalopathy). Arteriovenous malformations are abnormal tangles of blood vessels. Usually, a capillary bed separates arteries from veins, which protects the veins from the higher blood pressures that occur in arteries. In arteriovenous malformations, arteries are directly connected to veins, which increases the risk of venous rupture and hemorrhage. Arteriovenous malformations in the brain have a 2–4% chance of rupture each year. However, many arteriovenous malformations go unnoticed and are asymptomatic throughout a person's lifetime.
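The 2–4% annual rupture figure compounds over a lifetime. As a rough illustration (assuming a constant, independent annual risk, which is a simplification rather than a clinical model), the cumulative risk over n years is 1 - (1 - p)^n:

# Rough illustration: cumulative rupture risk from a constant annual
# risk p, assuming independence between years (a simplification,
# not a clinical model).
def cumulative_risk(annual_risk, years):
    return 1 - (1 - annual_risk) ** years

for p in (0.02, 0.04):  # the 2-4% annual range cited above
    for years in (10, 30, 50):
        print(f"annual risk {p:.0%}, {years} years: {cumulative_risk(p, years):.0%}")

Under this simplified assumption, even a 2% annual risk works out to roughly 45% over 30 years, which is why a young patient with an arteriovenous malformation faces substantial lifetime risk.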
A germinal matrix hemorrhage is bleeding into the brain of premature infants, caused by the rupture of fragile blood vessels within the germinal matrix. The germinal matrix is a highly vascularized area within an unborn infant's brain from which brain cells, including neurons and glial cells, originate. Infants are at most risk of germinal matrix hemorrhage when they are born prematurely, before 32 weeks. The stresses exposed after birth, along with the fragile blood vessels, increase the risk of hemorrhage. Signs and symptoms include flaccid weakness, seizures, abnormal posturing, or irregular respiration.
CADASIL is an inherited disorder caused by mutations in the Notch 3 gene located on chromosome 19. The Notch 3 gene codes for a transmembrane protein whose function is not well known. However, the mutation causes accumulation of this protein within small to medium-sized blood vessels. This disease often presents in early adulthood with migraines, stroke, mood disturbances, and cognitive deterioration. MRI shows white matter changes in the brain and also signs of repeated strokes. The diagnosis can be confirmed by gene testing.
Acquired
Acquired cerebrovascular diseases are those that are acquired throughout a person's life and that may be preventable by controlling risk factors. The incidence of cerebrovascular disease increases as an individual ages. Causes of acquired cerebrovascular disease include atherosclerosis, embolism, aneurysms, and arterial dissections. Atherosclerosis leads to narrowing of blood vessels and less perfusion to the brain, and it also increases the risk of thrombosis, or a blockage of an artery, within the brain. Major modifiable risk factors for atherosclerosis include:
hypertension (high blood pressure)
cigarette smoking
diabetes mellitus
high blood cholesterol (hyperlipidemia)
obesity and physical inactivity
Controlling these risk factors can reduce the incidence of atherosclerosis and stroke. Atrial fibrillation is also a major risk factor for strokes. Atrial fibrillation causes blood clots to form within the heart, which may travel to the arteries within the brain and cause an embolism. The embolism prevents blood flow to the brain, which leads to a stroke.
An aneurysm is an abnormal bulging of small sections of arteries, which increases the risk of artery rupture. Intracranial aneurysms are a leading cause of subarachnoid hemorrhage, or bleeding around the brain within the subarachnoid space. There are various hereditary disorders associated with intracranial aneurysms, such as Ehlers-Danlos syndrome, autosomal dominant polycystic kidney disease, and familial hyperaldosteronism type I. However, individuals without these disorders may also develop aneurysms. The American Heart Association and American Stroke Association recommend controlling modifiable risk factors, including smoking and hypertension.
Arterial dissections are tears of the internal lining of arteries, often associated with trauma. Dissections within the carotid arteries or vertebral arteries may compromise blood flow to the brain due to thrombosis, and dissections increase the risk of vessel rupture.
Idiopathic
Idiopathic diseases are those that occur spontaneously without a known cause. Moyamoya is an example of an idiopathic cerebrovascular disorder that results in narrowing and occlusion of intracranial blood vessels. The most common presentation is stroke or transient ischemic attack, but cognitive decline in children may also be a presenting symptom. Symptoms may appear in adolescence, but some people have no symptoms until adulthood.
Pathophysiology
Mechanism of brain cell death
When a reduction in blood flow lasting seconds occurs, the brain tissue suffers ischemia, or inadequate blood supply. If the interruption of blood flow is not restored within minutes, the tissue suffers infarction followed by tissue death. When low cerebral blood flow persists for a longer duration, an infarction may develop in the border zones (areas of poor blood flow between the major cerebral artery distributions). In more severe instances, global hypoxia-ischemia causes widespread brain injury, leading to a severe cognitive sequela called hypoxic-ischemic encephalopathy.
An ischemic cascade occurs when a lack of oxygen and nutrients creates an energetic problem at the molecular level. The cascade results in decreased production of adenosine triphosphate (ATP), the high-energy molecule that cells in the brain need to function. Consumption of ATP continues in spite of insufficient production, so total ATP levels fall, lactic acidosis becomes established, and ionic homeostasis in neurons is lost. The downstream mechanisms of the ischemic cascade then begin: ion pumps no longer transport Ca2+ out of the cell, which triggers the release of glutamate, which in turn allows calcium to flow into cells. In the end the apoptosis pathway is initiated and cell death occurs.
Several arteries supply oxygen to different areas of the brain, and damage or occlusion of any of them can result in stroke. The carotid arteries supply the majority of the cerebrum. The common carotid artery divides into the internal and external carotid arteries, and the internal carotid artery gives rise to the anterior cerebral artery (ACA) and the middle cerebral artery (MCA). The ACA supplies the frontal and parietal lobes. The basilar artery gives rise to the two posterior cerebral arteries (PCAs); branches of the basilar artery and PCAs supply the occipital lobe, brain stem, and cerebellum. Ischemia is the loss of blood flow to a focal region of the brain. It produces heterogeneous areas of ischemia within the affected vascular territory, where blood flow is reduced to a residual trickle. Regions with blood flow of less than 10 mL/100 g of tissue/min are core regions, where cells die within minutes of a stroke; the ischemic penumbra, with blood flow below 25 mL/100 g of tissue/min, remains viable for longer (hours).
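The perfusion thresholds described above amount to a simple rule of thumb, sketched below; the two cutoff values come from the text, while the function and example inputs are illustrative only:

# Minimal sketch of the perfusion thresholds described above.
# Cutoffs are in mL per 100 g of tissue per minute; illustrative,
# not a clinical tool.
def classify_region(cbf):
    if cbf < 10:
        return "core (cells die within minutes)"
    if cbf < 25:
        return "ischemic penumbra (viable for hours)"
    return "adequately perfused"

for cbf in (5, 18, 40):
    print(cbf, "->", classify_region(cbf))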
Types of stroke
There are two main divisions of strokes: ischemic and hemorrhagic. Ischemic stroke involves decreased blood supply to regions of the brain, while hemorrhagic stroke is bleeding into or around the brain.
Ischemic
Ischemic stroke, the most common type, is caused by a blockage of a blood vessel in the brain, usually by thrombosis or by emboli from a proximal arterial source or the heart, which starves the brain of oxygen. To diagnose an ischemic stroke, the neurologic signs and symptoms must last longer than 24 hours, or brain infarction must be demonstrated, mainly by imaging techniques.
Transient ischemic attack (TIA), also called a mini-stroke, is a condition in which the blood flow to a region of the brain is blocked, but blood flow is quickly restored and the brain tissue can fully recover. The symptoms are only transient, leaving no sequelae, or long-term deficits. To diagnose this entity, all neurologic signs and symptoms must have resolved within 24 hours without evidence of brain infarction on brain imaging.
Hemorrhagic
Subarachnoid hemorrhage occurs when blood leaks out of damaged vessels into the cerebrospinal fluid in the subarachnoid space around the brain. The most common cause of a subarachnoid hemorrhage is rupture of an aneurysm, due to weakened blood vessel walls and increased wall stress. The neurologic symptoms are produced by the mass effect of blood on neural structures, by the toxic effects of blood on the brain tissue, or by increased intracranial pressure.
Intracerebral hemorrhage is bleeding directly into the brain rather than around it. Causes and risk factors include hypertension, blood-thinning medications, trauma, and arteriovenous malformations.
Diagnosis
Diagnosis of cerebrovascular disease may include:
medical history
physical exam
neurological examination
acute stroke imaging, generally performed when significant new-onset symptoms are present.
It is important to differentiate the symptoms caused by a stroke from those caused by syncope (fainting), which also involves a reduction in cerebral blood flow. Syncope is almost always generalized and is usually caused by systemic hypotension of various origins: cardiac arrhythmias, myocardial infarction, and hemorrhagic shock, among others.
Treatment
Treatment for cerebrovascular disease may include medication, lifestyle changes, and surgery, depending on the cause.
Examples of medications are:
antiplatelets (aspirin, clopidogrel)
blood thinners (heparin, warfarin)
antihypertensives:
ACE inhibitors
beta blockers
calcium channel blockers - in particular nimodipine, which reduces the incidence and severity of ischemic deficits in patients with subarachnoid hemorrhage (SAH)
anti-diabetic medications.
Surgical procedures include:
endovascular surgery and vascular surgery (for future stroke prevention).
Prognosis
Prognostic factors:
A lower Glasgow Coma Scale score, higher pulse rate, higher respiratory rate, and lower arterial oxygen saturation level are prognostic features of in-hospital mortality in acute ischemic stroke.
Epidemiology
Worldwide, it is estimated there are 31 million stroke survivors, and about 6 million deaths were due to cerebrovascular disease (the 2nd most common cause of death in the world and the 6th most common cause of disability).
Cerebrovascular disease primarily occurs with advanced age; the risk for developing it goes up significantly after 65 years of age. CVD tends to occur earlier than Alzheimer's disease (which is rare before the age of 80). In some countries such as Japan, CVD is more common than AD.
As of 2012, 6.4 million adults in the US had had a stroke, corresponding to about 2.7% of the US adult population. Strokes caused approximately 129,000 deaths in the US in 2013.
Geographically, a "stroke belt" in the US has long been known, similar to the "diabetes belt", which includes all of Mississippi and parts of Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, North Carolina, Ohio, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, and West Virginia.
References
Further reading
Chan, Pak H. (2002-03-28). Cerebrovascular Disease: 22nd Princeton Conference. Cambridge University Press. ISBN 9781139439657.
Mark, S. D; Wang, W; Fraumeni, J. F; Li, J.-Y; Taylor, P. R; Wang, G.-Q; Guo, W; Dawsey, S. M; Li, B; Blot, W. J (1996). "Lowered Risks of Hypertension and Cerebrovascular Disease after Vitamin/Mineral Supplementation: The Linxian Nutrition Intervention Trial" </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Chyluria and explain in detail? [/INST] </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Angioedema and explain in detail? [/INST] Angioedema is an area of swelling (edema) of the lower layer of skin and tissue just under the skin or mucous membranes. The swelling may occur in the face, tongue, larynx, abdomen, or arms and legs. Often it is associated with hives, which are swelling within the upper skin. Onset is typically over minutes to hours.The underlying mechanism typically involves histamine or bradykinin. The version related to histamine is due to an allergic reaction to agents such as insect bites, foods, or medications. The version related to bradykinin may occur due to an inherited problem known as C1 esterase inhibitor deficiency, medications known as angiotensin-converting enzyme inhibitors, or a lymphoproliferative disorder.Treatment to protect the airway may include intubation or cricothyroidotomy. Histamine-related angioedema can be treated with antihistamines, corticosteroids, and epinephrine. In those with bradykinin-related disease a C1 esterase inhibitor, ecallantide, or icatibant may be used. Fresh frozen plasma may be used instead. In the United States the disease affects about 100,000 people a year.
Signs and symptoms
The skin of the face, normally around the mouth, and the mucosa of the mouth and/or throat, as well as the tongue, swell over a period of minutes to hours. The swelling can also occur elsewhere, typically in the hands. The swelling can be itchy or painful. There may also be slightly decreased sensation in the affected areas due to compression of the nerves. Urticaria (hives) may develop simultaneously.
In severe cases, stridor of the airway occurs, with gasping or wheezy inspiratory breath sounds and decreasing oxygen levels. Tracheal intubation is required in these situations to prevent respiratory arrest and risk of death.
Sometimes, the cause is recent exposure to an allergen (e.g. peanuts), but more often it is either idiopathic (unknown) or only weakly correlated to allergen exposure.
In hereditary angioedema (HAE), often no direct cause is identifiable, although mild trauma, including dental work and other stimuli, can cause attacks. There is usually no associated itch or urticaria, as it is not an allergic response. Patients with HAE can also have recurrent episodes (often called "attacks") of abdominal pain, usually accompanied by intense vomiting, weakness, and in some cases, watery diarrhea, and an unraised, nonitchy splotchy/swirly rash. These stomach attacks can last one to five days on average and can require hospitalization for aggressive pain management and hydration. Abdominal attacks have also been known to cause a significant increase in the patient's white blood cell count, usually in the vicinity of 13,000 to 30,000. As the symptoms begin to diminish, the white count slowly begins to decrease, returning to normal when the attack subsides. As the symptoms and diagnostic tests are almost indistinguishable from an acute abdomen (e.g. perforated appendicitis), it is possible for undiagnosed HAE patients to undergo laparotomy (operations on the abdomen) or laparoscopy (keyhole surgery) that turns out to have been unnecessary.
HAE may also cause swelling in a variety of other locations, most commonly the limbs, genitals, neck, throat and face. The pain associated with these swellings varies from mildly uncomfortable to agonizing, depending on its location and severity. Predicting where and when the next episode of edema will occur is impossible. Most patients have an average of one episode per month, but there are also patients who have weekly episodes or only one or two episodes per year. The triggers can vary and include infections, minor injuries, mechanical irritation, operations or stress. In most cases, edema develops over a period of 12–36 hours and then subsides within 2–5 days.
Pathophysiology
Bradykinin plays a critical role in all forms of hereditary angioedema. This peptide is a potent vasodilator and increases vascular permeability, leading to rapid accumulation of fluid in the interstitium. This is most obvious in the face, where the skin has relatively little supporting connective tissue, and edema develops easily. Bradykinin is released by various cell types in response to numerous different stimuli; it is also a pain mediator. Dampening or inhibiting bradykinin has been shown to relieve HAE symptoms.
Various mechanisms that interfere with bradykinin production or degradation can lead to angioedema. ACE inhibitors block ACE, the enzyme that, among other actions, degrades bradykinin. In hereditary angioedema, bradykinin formation is caused by continuous activation of the complement system due to a deficiency in one of its prime inhibitors, C1-esterase (also known as C1-inhibitor or C1INH), and continuous production of kallikrein, another process inhibited by C1INH. This serine protease inhibitor (serpin) normally inhibits the association of C1r and C1s with C1q to prevent the formation of the C1-complex, which, in turn, activates other proteins of the complement system. Additionally, it inhibits various proteins of the coagulation cascade, although the effects of its deficiency on the development of hemorrhage and thrombosis appear to be limited.
The three types of hereditary angioedema are:
Type I - decreased levels of C1INH (85%);
Type II - normal levels, but decreased function of C1INH (15%);
Type III - no detectable abnormality in C1INH; occurs in an X-linked dominant fashion and therefore mainly affects women; it can be exacerbated by pregnancy and use of hormonal contraception (exact frequency uncertain). It has been linked with mutations in the factor XII gene.
Angioedema can be due to antibody formation against C1INH; this is an autoimmune disorder. This acquired angioedema is associated with the development of lymphoma.
Consumption of foods that are themselves vasodilators, such as alcoholic beverages or cinnamon, can increase the probability of an angioedema episode in susceptible patients. If the episode occurs at all after the consumption of these foods, its onset may be delayed overnight or by some hours, making the correlation with their consumption somewhat difficult. In contrast, consumption of bromelain in combination with turmeric may be beneficial in reducing symptoms.
The use of ibuprofen or aspirin may increase the probability of an episode in some patients. The use of acetaminophen typically has a smaller, but still present, increase in the probability of an episode.
Diagnosis
The diagnosis is made on the clinical picture. Routine blood tests (complete blood count, electrolytes, kidney function, liver enzymes) are typically performed. Mast cell tryptase levels may be elevated if the attack was due to an acute allergic (anaphylactic) reaction. When the patient has been stabilized, particular investigations may clarify the exact cause; complement levels, especially depletion of complement factors 2 and 4, may indicate deficiency of C1-inhibitor. HAE type III is a diagnosis of exclusion consisting of observed angioedema along with normal C1 levels and function.
The hereditary form (HAE) often goes undetected for a long time, as its symptoms resemble those of more common disorders, such as allergy or intestinal colic. An important clue is the failure of hereditary angioedema to respond to antihistamines or steroids, a characteristic that distinguishes it from allergic reactions. It is particularly difficult to diagnose HAE in patients whose episodes are confined to the gastrointestinal tract. Besides a family history of the disease, only a laboratory analysis can provide final confirmation. In this analysis, it is usually a reduced complement factor C4, rather than the C1-INH deficiency itself, that is detected. The former is used during the reaction cascade in the complement system of immune defense, which is permanently overactive due to the lack of regulation by C1-INH.
Angioedema is classified as either hereditary or acquired.
Acquired angioedema
Acquired angioedema (AAE) can be immunologic, nonimmunologic, or idiopathic. It is usually caused by allergy and occurs together with other allergic symptoms and urticaria. It can also occur as a side effect to certain medications, particularly ACE inhibitors. It is characterized by repetitive episodes of swelling, frequently of the face, lips, tongue, limbs, and genitals. Edema of the gastrointestinal mucosa typically leads to severe abdominal pain; in the upper respiratory tract, it can be life-threatening.
Hereditary angioedema
Hereditary angioedema (HAE) exists in three forms, all of which are caused by a genetic mutation inherited in an autosomal dominant form. They are distinguished by the underlying genetic abnormality. Types I and II are caused by mutations in the SERPING1 gene, which result in either diminished levels of the C1-inhibitor protein (type I HAE) or dysfunctional forms of the same protein (type II HAE). Type III HAE has been linked with mutations in the F12 gene, which encodes the coagulation protein factor XII. All forms of HAE lead to abnormal activation of the complement system, and all forms can cause swelling elsewhere in the body, such as the digestive tract. If HAE involves the larynx, it can cause life-threatening asphyxiation. The pathogenesis of this disorder is suspected to be related to unopposed activation of the contact pathway by the initial generation of kallikrein and/or clotting factor XII by damaged endothelial cells. The end product of this cascade, bradykinin, is produced in large amounts and is believed to be the predominant mediator leading to increased vascular permeability and vasodilation that induces typical angioedema "attacks".
Management
Allergic
In allergic angioedema, avoidance of the allergen and use of antihistamines may prevent future attacks. Cetirizine is a commonly prescribed antihistamine for angioedema. Some patients have reported success with the combination of a nightly low dose of cetirizine to moderate the frequency and severity of attacks, followed by a much higher dose when an attack does appear. Severe angioedema cases may require desensitization to the putative allergen, as mortality can occur. Chronic cases require steroid therapy, which generally leads to a good response. In cases where allergic attack is progressing towards airway obstruction, epinephrine may be life-saving.
Drug induction
ACE inhibitors can induce angioedema. ACE inhibitors block the enzyme ACE so it can no longer degrade bradykinin; thus, bradykinin accumulates and can cause angioedema. This complication appears more common in African-Americans. In people with ACE inhibitor angioedema, the drug needs to be discontinued and an alternative treatment needs to be found, such as an angiotensin II receptor blocker (ARB), which has a similar mechanism but does not affect bradykinin. However, this is controversial, as small studies have shown some patients with ACE inhibitor angioedema can develop it with ARBs, as well.
Hereditary
In hereditary angioedema (HAE), specific stimuli that have previously led to attacks may need to be avoided in the future. It does not respond to antihistamines, corticosteroids, or epinephrine </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Extreme poverty and explain in detail? [/INST] Extreme poverty, deep poverty, abject poverty, absolute poverty, destitution, or penury, is the most severe type of poverty, defined by the United Nations (UN) as "a condition characterized by severe deprivation of basic human needs, including food, safe drinking water, sanitation facilities, health, shelter, education and information. It depends not only on income but also on access to services" (UN 1995 report of the World Summit for Social Development). Historically, other definitions have been proposed within the United Nations.
In 2018, extreme poverty mainly refers to an income below the international poverty line of $1.90 per day (in 2011 prices, $2.29 in 2021 dollars), set by the World Bank. In October 2017, the World Bank updated the international poverty line, a global absolute minimum, to $1.90 a day. This is the equivalent of $1.00 a day in 1996 US prices, hence the widely used expression "living on less than a dollar a day". The vast majority of those in extreme poverty reside in South Asia and Sub-Saharan Africa. As of 2018, it is estimated that the country with the most people living in extreme poverty is Nigeria, at 86 million.
In the past, the vast majority of the world population lived in conditions of extreme poverty.
The percentage of the global population living in absolute poverty fell from over 80% in 1800 to under 20% by 2015. According to UN estimates, in 2015 roughly 734 million people, or 10%, remained under those conditions. The number had previously been measured as 1.9 billion in 1990, and 1.2 billion in 2008. Despite the significant number of individuals still below the international poverty line, these figures represent significant progress for the international community, as they reflect a decrease of more than one billion people over 15 years.
In public opinion surveys around the globe, people surveyed tend to think that extreme poverty has not decreased.
The reduction of extreme poverty and hunger was the first Millennium Development Goal (MDG1), as set by the United Nations in 2000. Specifically, the target was to reduce the extreme poverty rate by half by 2015, a goal that was met five years ahead of schedule. In the Sustainable Development Goals, which succeeded the MDGs, the goal is to end extreme poverty in all its forms everywhere. With this declaration the international community, including the UN and the World Bank, has adopted the target of ending extreme poverty by 2030.
Definition
Previous definitions
In July 1993, Leandro Despouy, the then UN Special Rapporteur on extreme poverty and human rights, made use of a definition he adapted from a 1987 report to the French Economic and Social Council by Fr. Joseph Wresinski, founder of the International Movement ATD Fourth World, distinguishing "lack of basic security" (poverty) from "chronic poverty" (extreme poverty), and linking the eradication of extreme poverty to giving people currently experiencing it a real opportunity to exercise all their human rights:
"The lack of basic security connotes the absence of one or more factors enabling individuals and families to assume basic responsibilities and to enjoy fundamental rights. The situation may become widespread and result in more serious and permanent consequences. The lack of basic security leads to chronic poverty when it simultaneously affects several aspects of people’s lives, when it is prolonged and when it severely compromises people’s chances of regaining their rights and of reassuming their responsibilities in the foreseeable future."This definition was mentioned previously, in June 1989, in the preliminary report on the realization of economic, social and cultural rights by the UN Special Rapporteur Danilo Türk. It is still in use today, among others, in the current UN Guiding Principles on Extreme Poverty and Human Rights adopted by the UN Human Rights Council in September 2012
Consumption-based definition
Extreme poverty is defined by the international community as living below $1.90 a day, as measured in 2011 international prices (equivalent to $2.12 in 2018). This number, also known as the international poverty line, is periodically updated to account for inflation and differences in the cost of living; it was originally defined at $1.00 a day in 1996. The updates are made according to new price data to portray the costs of basic food, health services, clothing, and shelter around the world as accurately as possible. The latest revision was made in 2015, when the World Bank increased the line to international-$1.90.
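To make the update arithmetic concrete, here is a minimal Python sketch of how a daily poverty line can be scaled by a consumer price index. The CPI figures are hypothetical placeholders (chosen so the output matches the $2.12 figure above), not official World Bank data, and the real revisions also involve purchasing power parity rebasing that this sketch omits.

def update_poverty_line(old_line, cpi_old, cpi_new):
    # Scale a daily poverty line by the ratio of consumer price indices.
    return old_line * (cpi_new / cpi_old)

line_2011 = 1.90   # the 2011 international poverty line, in dollars per day
cpi_2011 = 100.0   # hypothetical base-year index
cpi_2018 = 111.6   # hypothetical later index, chosen to reproduce ~$2.12

print(round(update_poverty_line(line_2011, cpi_2011, cpi_2018), 2))  # 2.12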
Because many of the world's poorest people do not have a monetary income, the poverty measurement is based on the monetary value of a person's consumption. Otherwise the poverty measurement would miss the home production of subsistence farmers, who largely consume what they produce themselves.
Alternative definitions
The $1.90/day extreme poverty line remains the most widely used metric as it highlights the reality of those in the most severe conditions. Although widely used by most international organizations, it has come under scrutiny due to a variety of factors. For example, it does not account for how far below the line people are, referred to as the depth of poverty. For this purpose, the same institutions publish data on the poverty gap.
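For readers who want the computation behind these two measures, the following Python sketch contrasts the headcount ratio (the share of people below the line) with the poverty gap index (the average shortfall below the line, i.e. the depth of poverty). The consumption figures are illustrative, not real survey data.

def headcount_ratio(incomes, z):
    # Share of people whose daily consumption falls below the line z.
    return sum(1 for y in incomes if y < z) / len(incomes)

def poverty_gap_index(incomes, z):
    # Average shortfall below z as a share of z; the non-poor contribute zero.
    return sum(max(z - y, 0) / z for y in incomes) / len(incomes)

incomes = [0.80, 1.50, 1.85, 2.40, 3.10, 5.00]   # illustrative daily consumption
z = 1.90                                         # the international poverty line

print(headcount_ratio(incomes, z))               # 0.5: half the sample is poor
print(round(poverty_gap_index(incomes, z), 2))   # 0.14: mean shortfall is 14% of z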
The international poverty line is designed to stay constant over time, to allow comparisons between different years. It is therefore a measure of absolute poverty and is not measuring relative poverty. It is also not designed to capture how people view their own financial situation (known as the socially subjective poverty line). Moreover, the calculation of the poverty line relies on information about consumer prices to calculate purchasing power parity, which are very hard to measure and are necessarily debatable. As with all other metrics, there may also be missing data from the poorest and most fragile countries.
Several alternative instruments for measuring extreme poverty have been suggested which incorporate other factors such as malnutrition and lack of access to a basic education. The Multidimensional Poverty Index (MPI), based on the Alkire-Foster Method, is published by the Oxford Poverty & Human Development Initiative (OPHI): it measures deprivation in basic needs and can be broken down to reflect both the incidence and the intensity of poverty. For example, under conventional measures, in both Ethiopia and Uzbekistan about 40% of the population is considered extremely poor, but based on the MPI, 90% of Ethiopians but only 2% of Uzbeks are in multidimensional poverty.
The MPI is useful for development officials to determine the most likely causes of poverty within a region, using the M0 measure of the method (which is calculated by multiplying the fraction of people in poverty by the fraction of dimensions they are deprived in). For example, in the Gaza Strip of Palestine, using the M0 measure of the Alkire-Foster method reveals that poverty in the region is primarily caused by a lack of access to electricity, lack of access to drinking water, and widespread overcrowding. In contrast, data from the Chhukha District of Bhutan reveals that income is a much larger contributor to poverty as opposed to other dimensions within the region. However, the MPI only presents data from 105 countries, so it cannot be used for global measurements.
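As a rough sketch of the M0 calculation just described (incidence multiplied by intensity), the following Python example uses a small, invented deprivation matrix with equal dimension weights and the conventional poverty cutoff of one third; none of the numbers come from a real MPI survey.

def adjusted_headcount(deprivations, k):
    # deprivations: rows are people, columns are indicators (1 = deprived).
    n_dims = len(deprivations[0])
    shares = [sum(row) / n_dims for row in deprivations]
    poor = [s for s in shares if s >= k]   # poor if deprived in at least k of dimensions
    if not poor:
        return 0.0
    H = len(poor) / len(deprivations)      # incidence: fraction of people who are poor
    A = sum(poor) / len(poor)              # intensity: average deprivation share among the poor
    return H * A                           # M0 = H x A

matrix = [
    [1, 1, 1, 0],   # deprived in 3 of 4 dimensions -> poor
    [1, 0, 0, 0],   # deprived in 1 of 4 -> below the cutoff, not poor
    [0, 1, 1, 1],   # deprived in 3 of 4 -> poor
    [0, 0, 0, 0],   # not deprived at all
]

print(adjusted_headcount(matrix, k=1/3))   # H = 0.5, A = 0.75, M0 = 0.375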
Share of the population living in extreme poverty
Current trends
Getting to zero
Using the World Bank definition of $1.90/day, as of 2021, roughly 710 million people remained in extreme poverty (or roughly 1 in 10 people worldwide). Nearly half of them live in India and China, with more than 85% living in just 20 countries. Since the mid-1990s, there has been a steady decline in both the worldwide poverty rate and the total number of extreme poor. In 1990, the percentage of the global population living in extreme poverty was 43%, but in 2011, that percentage had dropped down to 21%. This halving of the extreme poverty rate falls in line with the first Millennium Development Goal (MDG1) proposed by former UN Secretary-General Kofi Annan, who called on the international community at the turn of the century to reduce the percentage of people in extreme poverty by half by 2015.
This reduction in extreme poverty took place most notably in China, Indonesia, India, Pakistan and Vietnam. These five countries accounted for the alleviation of 715 million people out of extreme poverty between 1990 and 2010 – more than the global net total of roughly 700 million. This statistical oddity can be explained by the fact that the number of people living in extreme poverty in Sub-Saharan Africa rose from 290 million to 414 million over the same period. However, there have been many positive signs for extensive, global poverty reduction as well. Since 1999, the total number of extreme poor has declined by an average of 50 million per year. Moreover, in 2005, for the first time in recorded history, poverty rates began to fall in every region of the world, including Africa.
As aforementioned, the number of people living in extreme poverty has been reduced from 1.9 billion to 766 million over the span of the last decades. If the current trajectory holds, many economists predict the world could reach global zero by 2030–2035, thus ending extreme poverty. Global zero entails a world in which fewer than 3% of the global population lives in extreme poverty (projected under most optimistic scenarios to be fewer than 200 million people). This zero figure is set at 3% in recognition of the fact that some amount of frictional (temporary) poverty will continue to exist, whether it is caused by political conflict or unexpected economic fluctuations, at least for the foreseeable future. However, the Brookings Institution notes that any projection about poverty more than a few years into the future runs the risk of being highly uncertain. This is because changes in consumption and distribution throughout the developing world over the next two decades could result in monumental shifts in global poverty, for better or worse.
Others are more pessimistic about this possibility, predicting a range of 193 million to 660 million people still living in extreme poverty by 2035. Additionally, some believe the rate of poverty reduction will slow down in the developing world, especially in Africa, and as such it will take closer to five decades to reach global zero. Despite these reservations, several prominent international and national organizations, including the UN, the World Bank and the United States Federal Government (via USAID), have set a target of reaching global zero by the end of 2030.
Exacerbating factors
There are a variety of factors that may reinforce or instigate the existence of extreme poverty, such as weak institutions, cycles of violence and a low level of growth. Recent World Bank research shows that some countries can get caught in a "fragility trap", in which self-reinforcing factors prevent the poorest nations from emerging from low-level equilibrium in the long run. Moreover, most of the reduction in extreme poverty over the past twenty years has taken place in countries that have not experienced a civil conflict or have had governing institutions with a strong capacity to actually govern. Thus, to end extreme poverty, it is also important to focus on the interrelated problems of fragility and conflict.
USAID defines fragility as a government's lack of both legitimacy (the perception that the government is adequate at doing its job) and effectiveness (how good the government is at maintaining law and order, in an equitable manner). As fragile nations are unable to equitably and effectively perform the functions of a state, these countries are much more prone to violent unrest and mass inequality. Additionally, in countries with high levels of inequality (a common problem in countries with inadequate governing institutions), much higher growth rates are needed to reduce the rate of poverty when compared with other nations. Additionally, if China and India are removed from the equation, up to 70% of the world's poor live in fragile states by some definitions of fragility. Some analysts project that extreme poverty will be increasingly concentrated in fragile, low-income states like Haiti, Yemen and the Central African Republic. However, some academics, such as Andy Sumner, say that extreme poverty will be increasingly concentrated in middle-income countries, creating a paradox where the world's poor do not actually live in the poorest countries.
To help low-income, fragile states make the transition towards peace and prosperity, the New Deal for Engagement in Fragile States, endorsed by roughly forty countries and multilateral institutions, was created in 2011. This represents an important step towards redressing the problem of fragility, as it was originally articulated by self-identified fragile states who called on the international community to not only "do things differently", but to also "do different things".
Civil conflict also remains a prime cause for the perpetuation of poverty throughout the developing world. Armed conflict can have severe effects on economic growth for many reasons such as the destruction of assets, destruction of livelihoods, creation of unwanted mass migration, and diversion of public resources towards war. Significantly, a country that experienced major violence during 1981–2005 had extreme poverty rates 21 percentage points higher than a country with no violence. On average, each civil conflict will cost a country roughly 30 years of GDP growth. Therefore, a renewed commitment from the international community to address the deteriorating situation in highly fragile states is necessary both to prevent the mass loss of life and to prevent the vicious cycle of extreme poverty.
Population trends and dynamics (e.g. population growth) can also have a large impact on prospects for poverty reduction. According to the United Nations, "in addition to improving general health and well-being, analysis shows that meeting the reproductive health and contraceptive needs of all women in the developing world more than pays for itself".
In 2013, a prevalent finding in a report by the World Bank was that extreme poverty is most prevalent in low-income countries. In these countries, the World Bank found that progress in poverty reduction is the slowest, the poor live under the worst conditions, and the most affected persons are children age 12 and under.
International initiatives
Millennium Summit and Millennium Development Goals
In September 2000, world leaders gathered at the Millennium Summit held in New York, launching the United Nations Millennium Project suggested by then UN Secretary-General Kofi Annan. Prior to the launch of the conference, the office of Secretary-General Annan released a report entitled "We The Peoples: The Role of the United Nations in the 21st Century". In this document, now widely known as the Millennium Report, Kofi Annan called on the international community to reduce the proportion of people in extreme poverty by half by 2015, a target that would affect over 1 billion people. Citing the close correlation between economic growth and the reduction of poverty in poor countries, Annan urged international leaders to indiscriminately target the problem of extreme poverty across every region. In charge of managing the project was Jeffrey Sachs, a noted development economist, who in 2005 released a plan for action called "Investing in Development: A Practical Plan to Achieve the Millennium Development Goals". Thomas Pogge criticized the 2000 Millennium Declaration for being less ambitious than a previous declaration from the World Food Summit due to using 1990 as the benchmark rather than 1996.
Overall, there has been significant progress towards reducing extreme poverty, with the MDG1 target of reducing extreme poverty rates by half being met five years early, representing 700 million people being lifted out of extreme poverty from 1990 to 2010, with 1.2 billion people still remaining under those conditions. The notable exception to this trend was in Sub-Saharan Africa, the only region where the number of people living in extreme poverty rose from 290 million in 1990 to 414 million in 2010, comprising more than a third of those living in extreme poverty worldwide.
2005 World Summit
The 2005 World Summit, held in September of that year, was organized to measure international progress towards fulfilling the Millennium Development Goals (MDGs). Notably, the conference brought together more than 170 Heads of State. While world leaders at the summit were encouraged by the reduction of poverty in some nations, they were concerned by the uneven decline of poverty within and among different regions of the globe. However, at the end of the summit, the conference attendees reaffirmed the UN's commitment to achieve the MDGs by 2015 and urged all supranational, national and non-governmental organizations to follow suit.
Sustainable Development Goals
As the expiration of the Millennium Development Goals approached in 2015, the UN convened a High-Level Panel (HLP) to advise on a Post-2015 Development Agenda, which led to a new set of 17 goals for 2030 titled the Sustainable Development Goals (SDGs). The first goal (SDG 1) is to "End poverty in all its forms everywhere."
The HLP report, entitled A New Global Partnership: Eradicate Poverty and Transform Economies Through Sustainable Development, was published in May 2013. In the report, the HLP wrote that:
Ending extreme poverty is just the beginning, not the end. It is vital, but our vision must be broader: to start countries on the path of sustainable development – building on the foundations established by the 2012 UN Conference on Sustainable Development in Rio de Janeiro, and meeting a challenge that no country, developed or developing, has met so far. We recommend to the Secretary-General that deliberations on a new development agenda must be guided by the vision of eradicating extreme poverty once and for all, in the context of sustainable development.
Therefore, the report determined that a central goal of the Post-Millennium Development agenda is to eradicate extreme poverty by 2030. However, the report also emphasized that the MDGs were not enough on their own, as they did not "focus on the devastating effects of conflict and violence on development ... the importance to development of good governance and institution ... nor the need for inclusive growth..." Consequently, there now exists synergy between the policy position papers put forward by the United States (through USAID), the World Bank and the UN itself in terms of viewing fragility and a lack of good governance as exacerbating extreme poverty. However, in a departure from the views of other organizations, the commission also proposed that the UN focus not only on extreme poverty (a line drawn at $1.25), but also on a higher target, such as $2. The report notes this change could be made to reflect the fact that escaping extreme poverty is only a first step.
In addition to the UN, a host of other supranational and national actors such as the European Union and the African Union have published their own positions or recommendations on what should be incorporated in the Post-2015 agenda. The European Commission's communication, entitled A decent life for all: from vision to collective action, affirmed the UN's commitment to "eradicate extreme poverty in our lifetime and put the world on a sustainable path to ensure a decent life for all by 2030". A unique aspect of the communication was its environmental focus (in addition to a plethora of other goals such as combating hunger and gender inequality). Specifically, the Commission argued, "long-term poverty reduction ... requires inclusive and sustainable growth. Growth should create decent jobs, take place with resource efficiency and within planetary boundaries, and should support efforts to mitigate climate change." The African Union's report, entitled Common African Position (CAP) on the Post-2015 Development Agenda, likewise encouraged the international community to focus on eradicating the twin problems of poverty and exclusion in our lifetime. Moreover, the CAP pledged that "no person – regardless of ethnicity, gender, geography, disability, race or other status – is denied universal human rights and basic economic opportunities".
Least developed country conferences
The UN least developed country (LDC) conferences were a series of summits organized by the UN to promote the substantial and even development of the world's least developed countries.
The first UN LDC Conference was held between 1 and 14 September 1981 in Paris, and was organized to finalize the UN's "Substantial New Programme of Action" for the 1980s in Least Developed Countries. This program, which was unanimously adopted by the conference attendees, argued for internal reforms in LDCs (meant to encourage economic growth) to be complemented by strong international measures. However, despite the major economic and policy reforms initiated by many of these LDCs, in addition to strong international aid, the economic situation of these countries worsened as a whole in the 1980s. This prompted the organization of a second UN LDC conference almost a decade later.
The second UN LDC Conference was held between 3 and 14 September 1990, once again in Paris, and was convened to measure the progress made by the LDCs towards fulfilling their development goals during the 1980s. Recognizing the problems that plagued the LDCs over the past decade, the conference formulated a new set of national and international policies to accelerate the growth rates of the poorest nations. These new principles were embodied in the "Paris Declaration and Programme of Action for the Least Developed Countries for the 1990s".
The fourth UN LDC Conference was the most recent conference. Held in May 2011 in Istanbul, it recognized that the nature of development had fundamentally changed since the first conference almost 30 years earlier. In the 21st century, the capital flow into emerging economies has increasingly become dominated by foreign direct investment and remittances, as opposed to bilateral and multilateral assistance. Moreover, since the 1980s, significant structural changes have taken place on the international stage. With the creation of the G-20 conference of the largest economic powers, including many nations in the Global South, formerly undeveloped nations are now able to have a much larger say in international relations. Furthermore, the conference recognized that in the midst of a deep global recession, coupled with multiple crises (energy, climate, food, etc.), the international community would have fewer resources to aid the LDCs. Thus, the UN considered the participation of a wide range of stakeholders (not least the LDCs themselves) crucial to the formulation of the conference.
Organizations working to end extreme poverty
International organizations
World Bank
In 2013, the Board of Governors of the World Bank Group (WBG) set two overriding goals for the WBG to commit itself to in the future. First, to end extreme poverty by 2030, an objective that echoes the sentiments of the UN and the Obama administration. Additionally, the WBG set an interim target of reducing extreme poverty to below 9% by 2020. Second, to focus on growth among the bottom 40% of people, as opposed to standard GDP growth. This commitment ensures that the growth of the developing world lifts people out of poverty, rather than exacerbating inequality.
As the World Bank's primary focus is on delivering economic growth to enable equitable prosperity, its development programs are primarily commercial-based in nature, as opposed to those of the UN. Since the World Bank recognizes better jobs will result in higher income, and thus less poverty, the WBG seeks to support employment training initiatives, small business development programs and strong labor protection laws. However, since much of the growth in the developing world has been inequitable, the World Bank has also begun teaming with client states to map out trends in inequality and to propose public policy changes that can level the playing field.
Moreover, the World Bank engages in a variety of nutritional, transfer payment and transport-based initiatives. Children who experience under-nutrition from conception to two years of age have a much higher risk of physical and mental disability. Thus, they are often trapped in poverty and are unable to make a full contribution to the social and economic development of their communities as adults. The WBG estimates that as much as 3% of GDP can be lost as a result of under-nutrition among the poorest nations. To combat undernutrition, the WBG has partnered with UNICEF and the WHO to ensure all small children are fully fed. The WBG also offers conditional cash transfers to poor households who meet certain requirements such as maintaining children's healthcare or ensuring school attendance. Finally, the WBG understands investment in public transportation and better roads is key to breaking rural isolation, improving access to healthcare and providing better job opportunities for the world's poor.
United Nations
The UN Office for the Coordination of Humanitarian Affairs (OCHA) works to synchronize the disparate international, national and non-governmental efforts to contest poverty. OCHA seeks to prevent "confusion" in relief operations and to ensure that the humanitarian response to disaster situations has greater accountability and predictability. To do so, OCHA has begun deploying Humanitarian Coordinators and Country Teams to provide a solid architecture for the international community to work through.
The United Nations Children's Fund (UNICEF) was created by the UN to provide food, clothing and healthcare to European children facing famine and disease in the immediate aftermath of World War II. After the UN General Assembly extended UNICEF's mandate indefinitely in 1953, it actively worked to help children in extreme poverty in more than 190 countries and territories to overcome the obstacles that poverty, violence, disease and discrimination place in a child's path. Its current focus areas are 1) Child survival & development, 2) Basic education & gender equality, 3) Children and HIV/AIDS, and 4) Child protection.
The UN Refugee Agency (UNHCR) is mandated to lead and coordinate international action to protect refugees worldwide. Its primary purpose is to safeguard the rights of refugees by ensuring anyone can exercise the right to seek asylum in another state, with the option to return home voluntarily, integrate locally or resettle in a third country. The UNHCR operates in over 125 countries, helping approximately 33.9 million persons.
The World Food Programme (WFP) is the largest agency dedicated to fighting hunger worldwide. On average, the WFP brings food assistance to more than 90 million people in 75 countries. The WFP not only strives to prevent hunger in the present, but also in the future, by developing stronger communities which will make food even more secure on their own. The WFP has a range of expertise from Food Security Analysis, Nutrition, Food Procurement and Logistics.
The World Health Organization (WHO) is responsible for providing leadership on global health matters, shaping the health research agenda, articulating evidence-based policy decisions and combating diseases that are induced by poverty, such as HIV/AIDS, malaria and tuberculosis. Moreover, the WHO deals with pressing issues ranging from managing water safety, to dealing with maternal and newborn health.
Governmental agencies
USAID
The US Agency for International Development (USAID) is the lead US government agency dedicated to ending extreme poverty. Currently the largest bilateral donor in the world, the United States channels the majority of its development assistance through USAID and the US Department of State. In President Obama's 2013 State of the Union address, he declared, "So the United States will join with our allies to eradicate such extreme poverty in the next two decades ... which is within our reach." In response to Obama's call to action, USAID has made ending extreme poverty central to its mission statement. Under its New Model of Development, USAID seeks to eradicate extreme poverty through the use of innovation in science and technology, by putting a greater emphasis on evidence-based decision-making, and through leveraging the ingenuity of the private sector and global citizens.
A major initiative of the Obama Administration is Power Africa, which aims to bring energy to 20 million people in Sub-Saharan Africa. By reaching out to its international partners, whether commercial or public, the US has leveraged over $14 billion in outside commitments after investing only US$7 billion of its own. To ensure that Power Africa reaches the region's poorest, the initiative engages in a transaction-based approach to create systematic change. This includes expanding access to electricity to more than 20,000 additional households which currently live without power.
In terms of specific programming, USAID works in a variety of fields, from preventing hunger, reducing HIV/AIDS, and providing general health assistance and democracy assistance, to dealing with gender issues. To deal with food insecurity, which affects roughly 842 million people (who go to bed hungry each night), USAID coordinates the Feed the Future Initiative (FtF). FtF aims to reduce poverty and under-nutrition each by 20% over five years. Because of the President's Emergency Plan for AIDS Relief (PEPFAR) and a variety of congruent actors, the incidence of AIDS and HIV, which used to ravage Africa, has been reduced in scope and intensity. Through PEPFAR, the United States has ensured over five million people have received life-saving antiviral drugs, a significant proportion of the eight million people receiving treatment in relatively poor nations.
In terms of general health assistance, USAID has worked to reduce maternal mortality by 30%, under-five child mortality by 35%, and has accomplished a host of other goals. USAID also supports the gamut of democratic initiatives, from promoting human rights and accountable, fair governance, to supporting free and fair elections and the rule of law. In pursuit of these goals, USAID has increased global political participation by training more than 9,800 domestic election observers and providing civic education to more than 6.5 million people. Since 2012, the Agency has begun integrating critical gender perspectives across all aspects of its programming to ensure all USAID initiatives work to eliminate gender disparities. To do so, USAID seeks to increase the capability of women and girls to realize their rights and determine their own life outcomes. Moreover, USAID supports additional programs to improve women's access to capital and markets, build their skills in agriculture, and support women's desire to own businesses.
Others
Other major government development agencies with annual aid programmes of more than $10 billion include: GIZ (Germany), FCDO (United Kingdom), JICA (Japan), European Union and AFD (France).
Non-Governmental Organizations
A multitude of non-governmental organizations operate in the field of extreme poverty, actively working to alleviate the poorest of the poor of their deprivation. To name but a few notable organizations: Save the Children, the Overseas Development Institute, Concern Worldwide, ONE, Trickle Up and Oxfam have all done a considerable amount of work in extreme poverty.
Save the Children is the leading international organization dedicated to helping the world's indigent children. In 2013, Save the Children reached over 143 million children through their work, including over 52 million children directly </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Facial trauma and explain in detail? [/INST] Facial trauma, also called maxillofacial trauma, is any physical trauma to the face. Facial trauma can involve soft tissue injuries such as burns, lacerations and bruises, or fractures of the facial bones such as nasal fractures and fractures of the jaw, as well as trauma such as eye injuries. Symptoms are specific to the type of injury; for example, fractures may involve pain, swelling, loss of function, or changes in the shape of facial structures.
Facial injuries have the potential to cause disfigurement and loss of function; for example, blindness or difficulty moving the jaw can result. Although it is seldom life-threatening, facial trauma can also be deadly, because it can cause severe bleeding or interference with the airway; thus a primary concern in treatment is ensuring that the airway is open and not threatened so that the patient can breathe. Depending on the type of facial injury, treatment may include bandaging and suturing of open wounds, administration of ice, antibiotics and pain killers, moving bones back into place, and surgery. When fractures are suspected, radiography is used for diagnosis. Treatment may also be necessary for other injuries such as traumatic brain injury, which commonly accompany severe facial trauma.
In developed countries, the leading cause of facial trauma used to be motor vehicle accidents, but this mechanism has been replaced by interpersonal violence; however auto accidents still predominate as the cause in developing countries and are still a major cause elsewhere. Thus prevention efforts include awareness campaigns to educate the public about safety measures such as seat belts and motorcycle helmets, and laws to prevent drunk and unsafe driving. Other causes of facial trauma include falls, industrial accidents, and sports injuries.
Signs and symptoms
Fractures of facial bones, like other fractures, may be associated with pain, bruising, and swelling of the surrounding tissues (such symptoms can occur in the absence of fractures as well). Fractures of the nose, base of the skull, or maxilla may be associated with profuse nosebleeds. Nasal fractures may be associated with deformity of the nose, as well as swelling and bruising. Deformity in the face, for example a sunken cheekbone or teeth which do not align properly, suggests the presence of fractures. Asymmetry can suggest facial fractures or damage to nerves. People with mandibular fractures often have pain and difficulty opening their mouths and may have numbness in the lip and chin. With Le Fort fractures, the midface may move relative to the rest of the face or skull.
Cause
Injury mechanisms such as falls, assaults, sports injuries, and vehicle crashes are common causes of facial trauma in children as well as adults. Blunt assaults, blows from fists or objects, are a common cause of facial injury. Facial trauma can also result from wartime injuries such as gunshots and blasts.
Animal attacks and work-related injuries such as industrial accidents are other causes. Vehicular trauma is one of the leading causes of facial injuries. Trauma commonly occurs when the face strikes a part of the vehicles interior, such as the steering wheel. In addition, airbags can cause corneal abrasions and lacerations (cuts) to the face when they deploy.
Diagnosis
Radiography, imaging of tissues using X-rays, is used to rule out facial fractures. Angiography (X-rays taken of the inside of blood vessels) can be used to locate the source of bleeding. However the complex bones and tissues of the face can make it difficult to interpret plain radiographs; CT scanning is better for detecting fractures and examining soft tissues, and is often needed to determine whether surgery is necessary, but it is more expensive and difficult to obtain. CT scanning is usually considered to be more definitive and better at detecting facial injuries than X-ray. CT scanning is especially likely to be used in people with multiple injuries who need CT scans to assess for other injuries anyway.
Classification
Soft tissue injuries include abrasions, lacerations, avulsions, bruises, burns and cold injuries.
Commonly injured facial bones include the nasal bone (the nose), the maxilla (the bone that forms the upper jaw), and the mandible (the lower jaw). The mandible may be fractured at its symphysis, body, angle, ramus, and condyle. The zygoma (cheekbone) and the frontal bone (forehead) are other sites for fractures. Fractures may also occur in the bones of the palate and those that come together to form the orbit of the eye.
At the beginning of the 20th century, René Le Fort mapped typical locations for facial fractures; these are now known as Le Fort I, II, and III fractures (right). Le Fort I fractures, also called Guérin or horizontal maxillary fractures, involve the maxilla, separating it from the palate. Le Fort II fractures, also called pyramidal fractures of the maxilla, cross the nasal bones and the orbital rim. Le Fort III fractures, also called craniofacial disjunction and transverse facial fractures, cross the front of the maxilla and involve the lacrimal bone, the lamina papyracea, and the orbital floor, and often involve the ethmoid bone, are the most serious. Le Fort fractures, which account for 10–20% of facial fractures, are often associated with other serious injuries. Le Fort made his classifications based on work with cadaver skulls, and the classification system has been criticized as imprecise and simplistic since most midface fractures involve a combination of Le Fort fractures. Although most facial fractures do not follow the patterns described by Le Fort precisely, the system is still used to categorize injuries.
Prevention
Measures to reduce facial trauma include laws enforcing seat belt use and public education to increase awareness about the importance of seat belts and motorcycle helmets. Efforts to reduce drunk driving are other preventative measures; changes to laws and their enforcement have been proposed, as well as changes to societal attitudes toward the activity. Information obtained from biomechanics studies can be used to design automobiles with a view toward preventing facial injuries. While seat belts reduce the number and severity of facial injuries that occur in crashes, airbags alone are not very effective at preventing the injuries. In sports, safety devices including helmets have been found to reduce the risk of severe facial injury. Additional attachments such as face guards may be added to sports helmets to prevent orofacial injury (injury to the mouth or face); mouth guards are also used. In addition to the factors listed above, correction of dental features associated with receiving more dental trauma, such as increased overjet and Class II malocclusion, also helps, as does correction of dentofacial deformities such as a small mandible.
Treatment
An immediate need in treatment is to ensure that the airway is open and not threatened (for example by tissues or foreign objects), because airway compromise can occur rapidly and insidiously, and is potentially deadly. Material in the mouth that threatens the airway can be removed manually or using a suction tool for that purpose, and supplemental oxygen can be provided. Facial fractures that threaten to interfere with the airway can be reduced by moving the bones back into place; this both reduces bleeding and moves the bone out of the way of the airway. Tracheal intubation (inserting a tube into the airway to assist breathing) may be difficult or impossible due to swelling. Nasal intubation, inserting an endotracheal tube through the nose, may be contraindicated in the presence of facial trauma because if there is an undiscovered fracture at the base of the skull, the tube could be forced through it and into the brain. If facial injuries prevent orotracheal or nasotracheal intubation, a surgical airway can be placed to provide an adequate airway. Although cricothyrotomy and tracheostomy can secure an airway when other methods fail, they are used only as a last resort because of potential complications and the difficulty of the procedures.
A dressing can be placed over wounds to keep them clean and to facilitate healing, and antibiotics may be used in cases where infection is likely. People with contaminated wounds who have not been immunized against tetanus within five years may be given a tetanus vaccination. Lacerations may require stitches to stop bleeding and facilitate wound healing with as little scarring as possible. Although it is not common for bleeding from the maxillofacial region to be profuse enough to be life-threatening, it is still necessary to control such bleeding. Severe bleeding occurs as the result of facial trauma in 1–11% of patients, and the origin of this bleeding can be difficult to locate. Nasal packing can be used to control nose bleeds and hematomas that may form on the septum between the nostrils. Such hematomas need to be drained. Mild nasal fractures need nothing more than ice and pain killers, while breaks with severe deformities or associated lacerations may need further treatment, such as moving the bones back into alignment and antibiotic treatment.
Treatment aims to repair the face's natural bony architecture and to leave as little apparent trace of the injury as possible. Fractures may be repaired with metal plates and screws, commonly made from titanium. Resorbable materials are also available; these are biologically degraded and removed over time, but there is no evidence supporting their use over conventional titanium plates. Fractures may also be wired into place. Bone grafting is another option to repair the bone's architecture, to fill out missing sections, and to provide structural support. Medical literature suggests that early repair of facial injuries, within hours or days, results in better outcomes for function and appearance.
Surgical specialists who commonly treat specific aspects of facial trauma are oral and maxillofacial surgeons, otolaryngologists, and plastic surgeons. These surgeons are trained in the comprehensive management of trauma to the lower, middle and upper face and have to take written and oral board examinations covering the management of facial injuries.
Prognosis and complications
By itself, facial trauma rarely presents a threat to life; however it is often associated with dangerous injuries, and life-threatening complications such as blockage of the airway may occur.
The airway can be blocked due to bleeding, swelling of surrounding tissues, or damage to structures. Burns to the face can cause swelling of tissues and thereby lead to airway blockage. Broken bones such as combinations of nasal, maxillary, and mandibular fractures can interfere with the airway. Blood from the face or mouth, if swallowed, can cause vomiting, which can itself present a threat to the airway because it has the potential to be aspirated. Since airway problems can occur late after the initial injury, it is necessary for healthcare providers to monitor the airway regularly.Even when facial injuries are not life-threatening, they have the potential to cause disfigurement and disability, with long-term physical and emotional results. Facial injuries can cause problems with eye, nose, or jaw function and can threaten eyesight.
As early as 400 BC, Hippocrates is thought to have recorded a relationship between blunt facial trauma and blindness. Injuries involving the eye or eyelid, such as retrobulbar hemorrhage, can threaten eyesight; however, blindness following facial trauma is not common. Incised wounds of the face may involve the parotid duct. This is more likely if the wound crosses a line drawn from the tragus of the ear to the upper lip. The approximate location of the course of the duct is the middle third of this line. Nerves and muscles may be trapped by broken bones; in these cases the bones need to be put back into their proper places quickly. For example, fractures of the orbital floor or medial orbital wall of the eye can entrap the medial rectus or inferior rectus muscles. In facial wounds, tear ducts and nerves of the face may be damaged. Fractures of the frontal bone can interfere with the drainage of the frontal sinus and can cause sinusitis. Infection is another potential complication, for example when debris is ground into an abrasion and remains there. Injuries resulting from bites carry a high infection risk.
Epidemiology
As many as 50–70% of people who survive traffic accidents have facial trauma. In most developed countries, violence from other people has replaced vehicle collisions as the main cause of maxillofacial trauma; however, in many developing countries traffic accidents remain the major cause. Increased use of seat belts and airbags has been credited with a reduction in the incidence of maxillofacial trauma, but fractures of the mandible (the jawbone) are not decreased by these protective measures. The risk of maxillofacial trauma is decreased by a factor of two with use of motorcycle helmets. A decline in facial bone fractures due to vehicle accidents is thought to be due to seat belt and drunk driving laws, strictly enforced speed limits and use of airbags. In vehicle accidents, drivers and front seat passengers are at highest risk for facial trauma. Facial fractures are distributed in a fairly normal curve by age, with a peak incidence occurring between ages 20 and 40, and children under 12 have only 5–10% of all facial fractures. Most facial trauma in children involves lacerations and soft tissue injuries. There are several reasons for the lower incidence of facial fractures in children: the face is smaller in relation to the rest of the head, children are less often exposed to some situations associated with facial fractures such as occupational and motor vehicle hazards, there is a lower proportion of cortical bone to cancellous bone in children's faces, poorly developed sinuses make the bones stronger, and fat pads provide protection for the facial bones. Head and brain injuries are commonly associated with facial trauma, particularly that of the upper face; brain injury occurs in 15–48% of people with maxillofacial trauma. Coexisting injuries can affect treatment of facial trauma; for example they may be emergent and need to be treated before facial injuries. People with trauma above the level of the collar bones are considered to be at high risk for cervical spine injuries (spinal injuries in the neck) and special precautions must be taken to avoid movement of the spine, which could worsen a spinal injury.
</s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Facial weakness and explain in detail? [/INST] </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Monoclonal gammopathy and explain in detail? [/INST] Monoclonal gammopathy, also known as paraproteinemia, is the presence of excessive amounts of myeloma protein or monoclonal gamma globulin in the blood. It is usually due to an underlying immunoproliferative disorder or hematologic neoplasms, especially multiple myeloma. It is sometimes considered equivalent to plasma cell dyscrasia. The most common form of the disease is monoclonal gammopathy of undetermined significance.
Causes
Causes of paraproteinemia include the following:
Leukemias and lymphomas of various types, but usually B-cell non-Hodgkin lymphomas with a plasma cell component.
Myeloma
Plasmacytoma
Lymphoplasmacytic lymphoma
Idiopathic (no discernible cause): some of these will be revealed as leukemias or lymphomas over the years.
AL amyloidosis
Diagnosis
Paraproteinemias are characterized by the presence of an abnormal protein involved in the immune system, most often an immunoglobulin, associated with the clonal proliferation of lymphocytes. When a paraproteinemia is present in the blood, there will be a narrow band, or spike, in the serum protein electrophoresis, because there is an excess production of one protein. There are two large classes of blood proteins: albumin and globulin. They are generally equal in proportion, but albumin is much smaller than globulin, and slightly negatively charged, which leads to an accumulation at the end of the electrophoretic gel </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Discoid meniscus and explain in detail? [/INST] Discoid meniscus is a rare human anatomic variant that usually affects the lateral meniscus of the knee. Usually a person with this anomaly has no complaints; however, it may present as pain, swelling, or a snapping sound heard from the affected knee. Strong suggestive findings on magnetic resonance imaging include a thickened meniscal body seen on more than two contiguous sagittal slices.
Description
The Watanabe classification of discoid lateral meniscus is: (A) Incomplete, (B) Complete, and (C) Wrisberg-ligament variant. Normally, the meniscus is a thin crescent-shaped piece of cartilage that lies between the weight bearing joint surfaces of the femur and the tibia. It is attached to the lining of the knee joint along its periphery and serves to absorb about a third of the impact load that the joint cartilage surface sees and also provides some degree of stabilization for the knee. There are two menisci in the knee joint, with one on the outside (away from midline) being the lateral meniscus and the inner one (towards midline) the medial meniscus. A discoid meniscus is a congenital anomaly of the knee found in 3% of the population (up to 15% in Asia). It typically affects the lateral meniscus and may be found bilaterally (20%). Instead of the narrow crescent shape, as seen in a normal meniscus above, a discoid meniscus is thickened, and has a fuller crescent shape. In addition, a discoid meniscus does not taper as much towards the center of the joint and is shaped like a disc. The thickness of the meniscus, its diminished vascular blood supply, and in some instances, weak capsular attachment, makes it more prone to tears compared to a normal meniscus. The anomaly in itself is asymptomatic; however, a tear of the meniscus can result in pain, swelling, and snapping in the affected knee. The orthopedic classification of discoid menisci includes: complete, incomplete or Wrisberg-ligament types as depicted here.
Symptoms
The usual symptom is pain in the knee. When the knee is bent or straightened, the joint may also produce an audible snapping sound.
Diagnosis
The transverse diameter of a normal meniscus is approximately 10 to 11 mm; therefore a normal meniscus body will be visible on only 2 slices of an MRI with 4–5 mm sagittal slices. By the same arithmetic, a meniscal body that remains visible on more than two contiguous sagittal slices is suggestive of a discoid meniscus. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Maternal care and explain in detail? [/INST] </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Iron deficiency and explain in detail? [/INST] Iron deficiency, or sideropenia, is the state in which a body lacks enough iron to supply its needs. Iron is present in all cells in the human body and has several vital functions, such as carrying oxygen to the tissues from the lungs as a key component of the hemoglobin protein, acting as a transport medium for electrons within the cells in the form of cytochromes, and facilitating oxygen enzyme reactions in various tissues. Too little iron can interfere with these vital functions and lead to morbidity and death. Total body iron averages approximately 3.8 g in men and 2.3 g in women. In blood plasma, iron is carried tightly bound to the protein transferrin. There are several mechanisms that control iron metabolism and safeguard against iron deficiency. The main regulatory mechanism is situated in the gastrointestinal tract. The majority of iron absorption occurs in the duodenum, the first section of the small intestine. A number of dietary factors may affect iron absorption. When loss of iron is not sufficiently compensated by intake of iron from the diet, a state of iron deficiency develops over time. When this state is uncorrected, it leads to iron-deficiency anemia, a common type of anemia. Before anemia occurs, the medical condition of iron deficiency without anemia is called latent iron deficiency (LID).
Anemia is a condition characterized by inadequate red blood cells (erythrocytes) or hemoglobin. When the body lacks sufficient amounts of iron, production of the protein hemoglobin is reduced. Hemoglobin binds to oxygen, enabling red blood cells to supply oxygenated blood throughout the body. Women of child-bearing age, children, and people with poor diet are most susceptible to the disease. Most cases of iron-deficiency anemia are mild, but if not treated can cause problems like an irregular heartbeat, pregnancy complications, and delayed growth in infants and children that could affect their cognitive development and their behavior.
Signs and symptoms
Symptoms of iron deficiency can occur even before the condition has progressed to iron deficiency anemia.
Symptoms of iron deficiency are not unique to iron deficiency (i.e. not pathognomonic). Iron is needed for many enzymes to function normally, so a wide range of symptoms may eventually emerge, either as the secondary result of the anemia, or as other primary results of iron deficiency. Symptoms of iron deficiency include:
fatigue
dizziness/lightheadedness
pallor
hair loss
twitches
irritability
weakness
pica
brittle or grooved nails
hair thinning
Plummer–Vinson syndrome: painful atrophy of the mucous membrane covering the tongue, the pharynx and the esophagus
impaired immune function
pagophagia
restless legs syndrome
Continued iron deficiency may progress to anemia and worsening fatigue. Thrombocytosis, or an elevated platelet count, can also result. A lack of sufficient iron levels in the blood is one reason that some people cannot donate blood.
Signs and symptoms in children
pale skin
fatigue
slowed growth and development
poor appetite
decrease in the size of testes
behavioral problems
abnormal rapid breathing
frequent infection
Iron requirements in young children to teenagers
Causes
blood loss (hemoglobin contains iron)
donation
excessive menstrual bleeding
non-menstrual bleeding
bleeding from the gastrointestinal tract (ulcers, hemorrhoids, ulcerative colitis, stomach or colon cancer, etc.)
rarely, laryngological bleeding or from the respiratory tract
inadequate intake (see below)
substances (in diet or drugs) interfering with iron absorption
Fluoroquinolone antibiotics
malabsorption syndromes
inflammation where it is adaptive to limit bacterial growth in infection, but is also present in many other chronic diseases such as Inflammatory bowel disease and rheumatoid arthritis
parasitic infection
Though genetic defects causing iron deficiency have been studied in rodents, there are no known genetic disorders of human iron metabolism that directly cause iron deficiency.
Athletics
Possible reasons that athletics may contribute to lower iron levels include mechanical hemolysis (destruction of red blood cells from physical impact), loss of iron through sweat and urine, gastrointestinal blood loss, and haematuria (presence of blood in urine). Although small amounts of iron are excreted in sweat and urine, these losses can generally be seen as insignificant even with increased sweat and urine production, especially considering that athletes' bodies appear to become conditioned to retain iron better. Mechanical hemolysis is most likely to occur in high-impact sports, especially among long-distance runners who experience "foot-strike hemolysis" from the repeated impact of their feet with the ground. Exercise-induced gastrointestinal bleeding is most likely to occur in endurance athletes. Haematuria in athletes is most likely to occur in those that undergo repetitive impacts on the body, particularly affecting the feet (such as running on a hard road, or Kendo) and hands (e.g. Conga or Candombe drumming). Additionally, athletes in sports that emphasize weight loss (e.g. ballet, gymnastics, marathon running, and cycling) as well as sports that emphasize high-carbohydrate, low-fat diets, may be at an increased risk for iron deficiency.
Inadequate intake
A U.S. federal survey of food consumption determined that for women and men over the age of 19, average iron consumption from foods and beverages was 13.1 and 18.0 mg/day, respectively. For women, 16% in the age range 14–50 years consumed less than the Estimated Average Requirement (EAR), for men ages 19 and up, fewer than 3%. Consumption data were updated in a 2014 U.S. government survey and reported that for men and women ages 20 and older the average iron intakes were, respectively, 16.6 and 12.6 mg/day. People in the U.S. usually obtain adequate amounts of iron from their diets. However, subgroups like infants, young children, teenaged girls, pregnant women, and premenopausal women are at risk of obtaining less than the EAR. Socio-economic and racial differences further affect the rates of iron deficiency.
Bioavailability
Iron is needed for bacterial growth, making its bioavailability an important factor in controlling infection. Blood plasma as a result carries iron tightly bound to transferrin, which is taken up by cells by endocytosing transferrin, thus preventing its access to bacteria. Between 15 and 20 percent of the protein content in human milk consists of lactoferrin that binds iron. As a comparison, in cow's milk, this is only 2 percent. As a result, breastfed babies have fewer infections. Lactoferrin is also concentrated in tears, saliva and at wounds to bind iron to limit bacterial growth. Egg white contains 12% conalbumin to withhold it from bacteria that get through the egg shell (for this reason, prior to antibiotics, egg white was used to treat infections). To reduce bacterial growth, plasma concentrations of iron are lowered in a variety of systemic inflammatory states due to increased production of hepcidin, which is mainly released by the liver in response to increased production of pro-inflammatory cytokines such as interleukin-6. This functional iron deficiency will resolve once the source of inflammation is rectified; however, if not resolved, it can progress to anaemia of chronic inflammation. The underlying inflammation can be caused by fever, inflammatory bowel disease, infections, chronic heart failure (CHF), carcinomas, or following surgery.
Reflecting this link between iron bioavailability and bacterial growth, the taking of oral iron supplements in excess of 200 mg/day causes a relative overabundance of iron that can alter the types of bacteria that are present within the gut. There have been concerns regarding parenteral iron being administered whilst bacteremia is present, although this has not been borne out in clinical practice. A moderate iron deficiency, in contrast, can provide protection against acute infection, especially against organisms that reside within hepatocytes and macrophages, such as malaria and tuberculosis. This is mainly beneficial in regions with a high prevalence of these diseases and where standard treatment is unavailable.
Diagnosis
A complete blood count can reveal microcytic anemia, although this is not always present – even when iron deficiency progresses to iron-deficiency anemia.
Low serum ferritin (see below)
Low serum iron
High TIBC (total iron binding capacity), although this can be elevated in cases of anemia of chronic inflammation.
It is possible that the fecal occult blood test might be positive, if iron deficiency is the result of gastrointestinal bleeding; although the sensitivity of the test may mean that in some cases it will be negative even with enteral blood loss. As always, laboratory values have to be interpreted with the lab's reference values in mind and considering all aspects of the individual clinical situation.
Serum ferritin can be elevated in inflammatory conditions, so a normal serum ferritin may not always exclude iron deficiency, and the utility is improved by taking a concurrent C-reactive protein (CRP). The level of serum ferritin that is viewed as "high" depends on the condition. For example, in inflammatory bowel disease the threshold is 100 μg/L, whereas in chronic heart failure (CHF) the level is 200 μg/L.
Treatment
Before commencing treatment, there should be definitive diagnosis of the underlying cause for iron deficiency. This is particularly the case in older patients, who are most susceptible to colorectal cancer and the gastrointestinal bleeding it often causes. In adults, 60% of patients with iron-deficiency anemia may have underlying gastrointestinal disorders leading to chronic blood loss.
It is likely that the cause of the iron deficiency will need treatment as well.
Upon diagnosis, the condition can be treated with iron supplements. The choice of supplement will depend upon the severity of the condition, the required speed of improvement (e.g. if awaiting elective surgery), and the likelihood of treatment being effective (e.g. if the patient has underlying IBD, is undergoing dialysis, or is having ESA therapy).
Examples of oral iron that are often used are ferrous sulfate, ferrous gluconate, or amino acid chelate tablets. Recent research suggests the replacement dose of iron, at least in the elderly with iron deficiency, may be as little as 15 mg per day of elemental iron.
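As a worked illustration of the distinction between a salt dose and elemental iron (standard pharmacology figures, included for context rather than taken from this text): ferrous sulfate is roughly 20% elemental iron by weight, so a common 325 mg ferrous sulfate tablet supplies about 65 mg of elemental iron. The 15 mg/day elemental dose cited above therefore corresponds to only about 75 mg of the salt (15 ÷ 0.20 = 75).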
Food sources
Mild iron deficiency can be prevented or corrected by eating iron-rich foods and by cooking in an iron skillet. Because iron is a requirement for most plants and animals, a wide range of foods provide iron. Good sources of dietary iron contain heme iron, as this is most easily absorbed and is not inhibited by medication or other dietary components. Two examples are red meat and poultry. Non-heme sources do contain iron, though it has reduced bioavailability. Examples are lentils, beans, leafy vegetables, pistachios, tofu, fortified bread, and fortified breakfast cereals.
Iron from different foods is absorbed and processed differently by the body; for instance, iron in meat (heme-iron source) is more easily absorbed than iron in grains and vegetables ("non-heme" iron sources). Minerals and chemicals in one type of food may also inhibit absorption of iron from another type of food eaten at the same time. For example, oxalates and phytic acid form insoluble complexes which bind iron in the gut before it can be absorbed.
Because iron from plant sources is less easily absorbed than the heme-bound iron of animal sources, vegetarians and vegans should have a somewhat higher total daily iron intake than those who eat meat, fish or poultry. Legumes and dark-green leafy vegetables like broccoli, kale and Asian greens are especially good sources of iron for vegetarians and vegans. However, spinach and Swiss chard contain oxalates which bind iron, making it almost entirely unavailable for absorption. Iron from non-heme sources is more readily absorbed if consumed with foods that contain either heme-bound iron or vitamin C. This is due to a hypothesized "meat factor" which enhances iron absorption. Following are two tables showing the richest foods in heme and non-heme iron.
In both tables, food serving sizes may differ from the usual 100g quantity for relevancy reasons. Arbitrarily, the guideline is set at 18 mg, which is the USDA Recommended Dietary Allowance for women aged between 19 and 50.
Food recommendations for children
Children at 6 months should start having solid food that contains enough iron, which can come from both heme and non-heme sources.
Heme iron:
Red meat (for example, beef, pork, lamb, goat, or venison)
Fatty fish
Poultry (for example, chicken or turkey)
Eggs
Non-heme iron:
Iron-fortified infant cereals
Tofu
Beans and lentils
Dark green leafy vegetables
Iron deficiency can have serious health consequences that diet may not be able to quickly correct; hence, an iron supplement is often necessary if the iron deficiency has become symptomatic.
Blood transfusion
Blood transfusion is sometimes used to treat iron deficiency with hemodynamic instability. Sometimes transfusions are considered for people who have chronic iron deficiency or who will soon go to surgery, but even if such people have low hemoglobin, they should be given oral treatment or intravenous iron.
Intravenous iron therapy for non-anaemic, iron-deficient adults
Current evidence is too limited to support any recommendation that intravenous iron therapy is beneficial for treating non-anaemic, iron-deficient adults. Further research in this area is needed, as the current body of evidence is of very low quality.
Cancer research
The presence of Helicobacter pylori in the stomach can cause inflammation and can lower the threshold for the development of gastric cancer. In the setting of iron deficiency, H. pylori causes more severe inflammation and the development of premalignant lesions. This inflammatory effect appears to be mediated, in part, through altered bile acid production including an increase in deoxycholic acid, a secondary bile acid implicated in colon cancer and other gastrointestinal cancers.
See also
Bahima disease
CO2 fertilization effect
</s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Cerebral edema and explain in detail? [/INST] Cerebral edema is excess accumulation of fluid (edema) in the intracellular or extracellular spaces of the brain. This typically causes impaired nerve function, increased pressure within the skull, and can eventually lead to direct compression of brain tissue and blood vessels. Symptoms vary based on the location and extent of edema and generally include headaches, nausea, vomiting, seizures, drowsiness, visual disturbances, dizziness, and in severe cases, coma and death. Cerebral edema is commonly seen in a variety of brain injuries including ischemic stroke, subarachnoid hemorrhage, traumatic brain injury, subdural, epidural, or intracerebral hematoma, hydrocephalus, brain cancer, brain infections, low blood sodium levels, high altitude, and acute liver failure. Diagnosis is based on symptoms and physical examination findings and confirmed by serial neuroimaging (computed tomography scans and magnetic resonance imaging). The treatment of cerebral edema depends on the cause and includes monitoring of the person's airway and intracranial pressure, proper positioning, controlled hyperventilation, medications, fluid management, and steroids. Extensive cerebral edema can also be treated surgically with a decompressive craniectomy. Cerebral edema is a major cause of brain damage and contributes significantly to the mortality of ischemic strokes and traumatic brain injuries. As cerebral edema is present with many common cerebral pathologies, the epidemiology of the disease is not easily defined. The incidence of this disorder should be considered in terms of its potential causes and is present in most cases of traumatic brain injury, central nervous system tumors, brain ischemia, and intracerebral hemorrhage. For example, malignant brain edema was present in roughly 31% of people with ischemic strokes within 30 days after onset.
Signs and symptoms
The extent and severity of the symptoms of cerebral edema depend on the exact etiology but are generally related to an acute increase of the pressure within the skull. As the skull is a fixed and inelastic space, the accumulation of cerebral edema can displace and compress vital brain tissue, cerebrospinal fluid, and blood vessels, according to the Monro-Kellie doctrine. Increased intracranial pressure (ICP) is a life-threatening surgical emergency marked by symptoms of headache, nausea, vomiting, and decreased consciousness. Symptoms are frequently accompanied by visual disturbances such as gaze paresis, reduced vision, and dizziness. Increased pressures within the skull can cause a compensatory elevation of blood pressure to maintain cerebral blood flow, which, when associated with irregular breathing and a decreased heart rate, is called the Cushing reflex. The Cushing reflex often indicates compression of brain tissue and blood vessels, leading to decreased blood flow to the brain and eventually death.
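For reference, the Monro-Kellie doctrine invoked above is often summarized by a simple volume relationship; this standard formulation is added here for illustration and is not quoted from the text:
V(brain) + V(CSF) + V(blood) ≈ constant
Because the rigid skull fixes the total, any added edema volume must first be offset by displacement of cerebrospinal fluid and venous blood; once these buffers are exhausted, intracranial pressure rises steeply.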
Causes
Cerebral edema is frequently encountered in acute brain injuries from a variety of origins, including but not limited to:
Traumatic brain injury
Stroke
Tumors
Infections (such as a brain abscess or meningitis)
Hepatic encephalopathy
Posterior reversible encephalopathy syndrome
Radiation-induced brain edema
Post-surgical changes
Amyloid-related imaging abnormalities associated with edema (ARIA-E)
Hyponatremia
High-altitude cerebral edema
Risk factors
Cerebral edema is present with many common cerebral pathologies and risk factors for development of cerebral edema will depend on the cause. The following were reliable predictors for development of early cerebral edema in ischemic strokes.
Younger age
Higher severity of symptoms on the National Institutes of Health Stroke Scale
Signs of current ischemia on clinical exam
Decreased level of consciousness
Hyperdense artery sign and larger affected area on CT imaging
Higher blood glucose
Classification
Cerebral edema has been traditionally classified into two major sub-types: cytotoxic and vasogenic cerebral edema. This simple classification helps guide medical decision making and treatment of patients affected with cerebral edema. There are, however, several more differentiated types including but not limited to interstitial, osmotic, hydrostatic, and high-altitude-associated edema. Within one affected person, many individual sub-types can be present simultaneously. The following individual sub-types have been identified:
Cytotoxic
In general, cytotoxic edema is linked to cell death in the brain through excessive cellular swelling. During cerebral ischemia for example, the blood–brain barrier remains intact but decreased blood flow and glucose supply leads to a disruption in cellular metabolism and creation of energy sources, such as adenosine triphosphate (ATP). Exhaustion of energy sources impairs functioning of the sodium and potassium pump in the cell membrane, leading to cellular retention of sodium ions. Accumulation of sodium in the cell causes a rapid uptake of water through osmosis, with subsequent swelling of the cells. The ultimate consequence of cytotoxic edema is the oncotic death of neurons. The swelling of the individual cells of the brain is the main distinguishing characteristic of cytotoxic edema, as opposed to vasogenic edema, wherein the influx of fluid is typically seen in the interstitial space rather than within the cells themselves. Researchers have proposed that "cellular edema" may be preferable to the term "cytotoxic edema" given the distinct swelling and lack of a consistent "toxic" substance involved. There are several clinical conditions in which cytotoxic edema is present:
Commonly caused by traumatic brain injuries, intracerebral hemorrhage, and the early phase of ischemic stroke.
Also seen in acute liver failure where toxic waste, most notably ammonia, accumulates in the blood stream and crosses the blood–brain barrier. Hyperammonemia in central nervous system (CNS) cells causes oxidative stress and mitochondrial dysfunction, leading to astrocytic cell swelling. Additionally, ammonia is converted to glutamine in CNS cells, which acts as an osmolyte and draws further water into the cell through osmosis. Cerebral edema occurs most commonly in conjunction with a rapid rise in ammonia levels.
Toxic exposures to methionine sulfoxime, cuprizone, isoniazid, triethyl tin, hexachlorophene, and hydrogen cyanide have been associated with cytotoxic edema and swelling of astrocytic cells.
Hypoxia and anoxia can lead to cytotoxic edema through several mechanisms.
Vasogenic
Extracellular brain edema, or vasogenic edema, is caused by an increase in the permeability of the blood–brain barrier. The blood–brain barrier consists of astrocytes and pericytes joined together with adhesion proteins producing tight junctions. Return of blood flow to these cells after an ischemic stroke can cause excitotoxicity and oxidative stress leading to dysfunction of the endothelial cells and disruption of the blood–brain barrier. The breakdown of the tight endothelial junctions that make up the blood–brain barrier causes extravasation of fluid, ions, and plasma proteins, such as albumin, into the brain parenchyma. Accumulation of extracellular fluid increases brain volume and then intracranial pressure, causing the symptoms of cerebral edema. There are several clinical conditions in which vasogenic edema is present:
CNS tumors, like glioblastoma and meningioma
Infections like meningitis, abscess, and encephalitis
Inflammatory central nervous system disease such as multiple sclerosis
Brain hemorrhage
Traumatic brain injuries can lead to increased intracranial pressure, local damage, reduced cerebral blood flow, and focal ischemia secondary to vasogenic edema.
Late stage of ischemic stroke after rapid recovery from cytotoxic edema
Hypertensive encephalopathy
Radiation injury
Ionic (Osmotic)
In ionic edema, the solute concentration (osmolality) of the brain exceeds that of the plasma, and the abnormal pressure gradient leads to accumulation of water in the brain parenchyma through the process of osmosis. The blood–brain barrier is intact and maintains the osmotic gradient. The solute concentration of the blood plasma can be diluted by several mechanisms:
Improper administration of intravenous fluids, isotonic or hypotonic.
Excessive water intake, syndrome of inappropriate antidiuretic hormone (SIADH).
Rapid reduction of blood glucose in diabetic ketoacidosis or hyperosmolar hyperglycemic state.
Hemodialysis has been associated with ionic edema and cellular swelling.
Cerebral edema is a potentially life-threatening complication of severely decreased sodium ion concentration in the blood (hyponatremia). Ionic brain edema can also occur around the sites of brain hemorrhages, infarcts, or contusions due to a local plasma osmolality pressure gradient when compared to the high osmolality in the affected tissue.
Interstitial (hydrocephalic)
Interstitial edema is best characterized by noncommunicating hydrocephalus, in which there is an obstruction to the outflow of cerebrospinal fluid within the ventricular system. The obstruction creates a rise in the intraventricular pressure and causes CSF to flow through the wall of the ventricles into the extracellular fluid within the brain. The fluid has roughly the same composition as CSF. Other causes of interstitial edema include but are not limited to communicating hydrocephalus and normal pressure hydrocephalus.
Hydrostatic
Hydrostatic extracellular brain edema is typically caused by severe arterial hypertension. A difference in the hydrostatic pressure within the arterial system relative to the endothelial cells allows ultrafiltration of water, ions, and low molecular weight substances (such as glucose and small amino acids) into the brain parenchyma. The blood–brain barrier is usually intact, and the extent of the edema depends on the arterial pressure. The regulatory processes of the brain circulation can function up to systolic arterial pressures of 150 mm Hg and will have impaired function at higher blood pressures.
Combined types of cerebral edema
Cytotoxic, osmotic, and vasogenic edema exist on a continuum. The mechanism of the cause of cerebral edema can often overlap between these types. In most instances, cytotoxic and vasogenic edema occur together. When the two edema types evolve simultaneously, the damage of one type reaches a limit and will bring about the other type of injury. For example, when cytotoxic edema occurs in the endothelial cells of the blood–brain barrier, oncotic cell death contributes to loss of integrity of the blood–brain barrier and promotes the progression to vasogenic edema. When brain edema types are combined, there is typically a primary form and the edema type and context of the cause must be determined in order to start appropriate medical or surgical therapy. The use of specific MRI techniques has allowed for some differentiation between the mechanisms.
Subtypes
High-altitude cerebral edema
If not properly acclimatized to high altitude, a person may be negatively affected by the lower oxygen concentration available. These hypoxia-related illnesses include acute mountain sickness (AMS), high-altitude pulmonary edema, and high-altitude cerebral edema (HACE). High-altitude cerebral edema is a severe and sometimes fatal form of altitude sickness that results from capillary fluid leakage due to the effects of hypoxia on the mitochondria-rich endothelial cells of the blood–brain barrier. The edema can be characterized as vasogenic cerebral edema with symptoms of impaired consciousness and truncal ataxia. Altitude-related illnesses can be prevented most effectively with slow ascent to high altitudes; an average ascent of 300 to 500 meters per day is recommended. Pharmacological prophylaxis with acetazolamide or corticosteroids can be used in non-pre-acclimatized individuals. If symptoms of high-altitude cerebral edema do not resolve or worsen, immediate descent is necessary, and symptoms can be improved with administration of dexamethasone.
Amyloid-related imaging abnormalities – edema
Amyloid-related imaging abnormalities (ARIA) are abnormal differences seen in neuroimaging of Alzheimer's disease patients given targeted amyloid-modifying therapies. Human monoclonal antibodies such as aducanumab, solanezumab, and bapineuzumab have been associated with these neuroimaging changes and, additionally, cerebral edema. These therapies are associated with dysfunction of the tight endothelial junctions of the blood–brain barrier, leading to vasogenic edema as described above. In addition to edema, these therapies are associated with microhemorrhages in the brain known as ARIA-H. Familiarity with ARIA can aid radiologists and clinicians in determining optimal management for those affected.
Posterior reversible encephalopathy syndrome
Posterior reversible encephalopathy syndrome (PRES) is a rare clinical disease characterized by cerebral edema. The exact pathophysiology, or cause, of the syndrome is still debated but is hypothesized to be related to the disruption of the blood-brain barrier. The syndrome features acute neurological symptoms and reversible subcortical vasogenic edema predominantly involving the parieto-occipital areas on MR imaging. PRES in general has a benign course, but PRES-related intracranial hemorrhage has been associated with a poor prognosis.
Idiopathic delayed-onset edema
Deep brain stimulation (DBS) is an effective treatment for several neurological and psychiatric disorders, most notably Parkinson's disease. DBS is not without risks and, although rare, idiopathic delayed-onset edema (IDE) surrounding the DBS leads has been reported. Symptoms can be mild and nonspecific, including reduction of the stimulation effect, and can be confused with other causes of edema. Thus, imaging is recommended to rule out other causes. The condition is generally self-limiting, and the exact mechanism of the cause is unexplained. Early identification can help persons affected avoid unnecessary surgical procedures or antibiotic treatments.
Massive brain swelling after cranioplasty
Decompressive craniectomy is frequently performed in cases of resistant intracranial hypertension secondary to several neurological conditions and is commonly followed by cranioplasty. Complications, such as infection and hematomas after cranioplasty, occur in roughly a third of cases. Massive brain swelling after cranioplasty (MSBC) is a rare and potentially fatal complication of an uneventful cranioplasty that has recently been elucidated. Preoperative sinking skin flap (SSF) and intracranial hypotension were factors associated with the development of MSBC after cranioplasty. Data suggest that pathologic changes are triggered immediately following the procedure, especially an acute increase in intracranial pressure.
Radiation-induced brain edema
With the rise of sophisticated treatment modalities such as gamma knife, Cyberknife, and intensity-modulated radiotherapy, a large number of individuals with brain tumors are treated with radiosurgery and radiotherapy. Radiation-induced brain edema (RIBE) is a potentially life-threatening complication of brain tissue radiation and is characterized by radiation necrosis, endothelial cell dysfunction, increased capillary permeability, and breakdown of the blood–brain barrier. Symptoms include headache, seizure, psychomotor slowing, irritability, and focal neurological deficits. Options for management of RIBE are limited and include corticosteroids, antiplatelet drugs, anticoagulants, hyperbaric oxygen therapy, multivitamins, and bevacizumab.
Brain tumor-associated cerebral edema
This kind of cerebral edema is a significant cause of morbidity and mortality in patients with brain tumors and is characterized by a disruption of the blood–brain barrier and vasogenic edema. The exact mechanism is unclear, but it is hypothesized that cancerous glial cells (glioma) of the brain can increase secretion of vascular endothelial growth factor (VEGF), which weakens the tight junctions of the blood–brain barrier. Historically, corticosteroids such as dexamethasone were used to reduce brain tumor-associated vascular permeability through poorly understood mechanisms and were associated with systemic side effects. Agents that target the VEGF signaling pathways, such as cediranib, have been promising in prolonging survival in rat models but are associated with local and systemic side effects as well.
Diagnosis
Cerebral edema is commonly present in a variety of neurological injuries. Thus, determining a definitive contribution of cerebral edema to the neurological status of an affected person can be challenging. Close bedside monitoring of a person's level of consciousness and awareness of any new or worsening focal neurological deficits is imperative but demanding, frequently requiring admission into the intensive care unit (ICU). Cerebral edema with sustained intracranial hypertension and brain herniation can signify impending catastrophic neurological events which require immediate recognition and treatment to prevent injury and even death. Therefore, earlier diagnosis of cerebral edema with rapid intervention can improve clinical outcomes and reduce mortality, or risk of death. Diagnosis of cerebral edema relies on the following:
Imaging
Serial neuroimaging (CT scans and magnetic resonance imaging) can be useful in diagnosing or excluding intracranial hemorrhage, large masses, acute hydrocephalus, or brain herniation as well as providing information on the type of edema present and the extent of affected area. CT scan is the imaging modality of choice as it is widely available, quick, and has minimal risks. However, CT scan can be limited in determining the exact cause of cerebral edema, in which case CT angiography (CTA), MRI, or digital subtraction angiography (DSA) may be necessary. MRI is particularly useful as it can differentiate between cytotoxic and vasogenic edema, guiding future treatment decisions.
Intracranial pressure monitoring
Intracranial pressure (ICP) and its management is a fundamental concept in traumatic brain injury (TBI). The Brain Trauma Foundation guidelines recommend ICP monitoring in individuals with TBI who have decreased Glasgow Coma Scale (GCS) scores, abnormal CT scans, or additional risk factors such as older age and elevated blood pressure. However, no such guidelines exist for ICP monitoring in other brain injuries such as ischemic stroke, intracerebral hemorrhage, and cerebral neoplasm. Clinical research has recommended ICP and cerebral perfusion pressure (CPP) monitoring in any person with cerebral injury who is at risk of elevated intracranial pressure based on clinical and neuroimaging features. Early monitoring can be used to guide medical and surgical decision making and to detect potentially life-threatening brain herniation. There was, however, conflicting evidence on the threshold values of ICP that indicated the need for intervention. Researchers also recommend that medical decisions should be tailored to the specific diagnosis (e.g. subarachnoid hemorrhage, TBI, encephalitis) and that ICP elevation should be used in conjunction with clinical and neuroimaging findings and not as an isolated prognostic marker.
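For orientation, cerebral perfusion pressure (CPP) is conventionally defined in terms of mean arterial pressure (MAP) and ICP; this standard relationship is included here for illustration and is not stated explicitly in the text above:
CPP = MAP − ICP
For example, a MAP of 90 mm Hg with an ICP of 20 mm Hg yields a CPP of 70 mm Hg, whereas a rise in ICP to 40 mm Hg at the same MAP would lower the CPP to 50 mm Hg, below the 60 mm Hg target cited in the treatment section below.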
Treatment
The primary goal in cerebral edema is to optimize and regulate cerebral perfusion, oxygenation, and venous drainage, decrease cerebral metabolic demands, and to stabilize the osmolality pressure gradient between the brain and the surrounding vasculature. As cerebral edema is linked to increased intracranial pressure (ICP), many of the therapies will focus on ICP.
General measures for managing cerebral edema
Positioning
Finding the optimal head position in persons with cerebral edema is necessary to avoid compression of the jugular vein and obstruction of venous outflow from the skull, and for decreasing cerebrospinal fluid hydrostatic pressure. The current recommendation is to elevate the head of the bed to 30 degrees to optimize cerebral perfusion pressure and control the increase in intracranial pressure. It is also worth noting that measures should be taken to reduce restrictive neck dressings or garments, as these may lead to compression of the internal jugular veins and reduce venous outflow.
Ventilation and oxygenation
Decreased oxygen concentration in the blood, hypoxia, and increase in the carbon dioxide concentration in the blood, hypercapnia, are potent vasodilators in the cerebral vasculature, and should be avoided in those with cerebral edema. It is recommended that persons with decreased levels of consciousness be intubated for airway protection and maintenance of oxygen and carbon dioxide levels. However, the laryngeal instrumentation involved in the intubation process is associated with an acute, brief rise in intracranial pressure. Pretreatment with a sedative agent and neuromuscular blocking agent to induce unconsciousness and motor paralysis has been recommended as part of standard Rapid Sequence Intubation (RSI). Intravenous lidocaine prior to RSI has been suggested to reduce the rise in ICP, but there is no supporting data at this time. Additionally, ventilation with use of positive end-expiratory pressure (PEEP) can improve oxygenation with the negative effect of decreasing cerebral venous drainage and increasing intracranial pressure (ICP), and thus must be used with caution.
Fluid management and cerebral perfusion
Maintenance of cerebral perfusion pressure using appropriate fluid management is essential in patients with brain injury. Dehydration, or intravascular volume loss, and the use of hypotonic fluids, such as D5W or half normal saline, should be avoided. Blood serum ion concentration, or osmolality, should be maintained in the normo- to hyperosmolar range. Judicious use of hypertonic saline can be used to increase serum osmolality and decrease cerebral edema, as discussed below. Blood pressure should be sufficient so as to sustain cerebral perfusion pressures greater than 60 mm Hg for optimal blood flow to the brain. Vasopressors may be used to achieve adequate blood pressures with minimal risk of increasing intracranial pressures. However, sharp rises in blood pressure should be avoided. Maximum blood pressures tolerated are variable and controversial depending on the clinical situation.
Seizure prophylaxis
Seizures, including subclinical seizure activity, can complicate clinical courses and increase progression of brain herniation in persons with cerebral edema and increased intracranial pressure. Anticonvulsants can be used to treat seizures caused by acute brain injuries from a variety of origins. However, there are no clear guidelines on the use of anticonvulsants for prophylactic use. Their use may be warranted depending on the clinical scenario, and studies have shown that anticonvulsants such as phenytoin can be given prophylactically without a significant increase in drug-related side effects.
Fever
Fever has been demonstrated to increase metabolism and oxygen demand in the brain. The increased metabolic demand results in an increase in cerebral blood flow and can increase the intracranial pressure within the skull. Therefore, maintaining a stable body temperature within the normal range is strongly recommended. This can be achieved through the use of antipyretics such as acetaminophen (paracetamol) and cooling the body, as described below.
Hyperglycemia
Elevated blood glucose levels, known as hyperglycemia, can exacerbate brain injury and cerebral edema and have been associated with worse clinical outcomes in persons affected by traumatic brain injuries, subarachnoid hemorrhages, and ischemic strokes.
Sedation
Pain and agitation can worsen cerebral edema, acutely increase intracranial pressure (ICP), and should be controlled. Careful use of pain medication such as morphine or fentanyl can be used for analgesia. For those persons with decreased levels of consciousness, sedation is necessary for endotracheal intubation and maintenance of a secure airway. Sedative medication used in the intubation process, specifically propofol, has been shown to control ICP, decrease cerebral metabolic demand, and have antiseizure properties. Due to a short half-life, propofol is a quick-acting medication whose administration and removal are well tolerated, with hypotension being the limiting factor in its continued use. Additionally, the use of nondepolarizing neuromuscular blocking agents (NMBAs), such as doxacurium or atracurium, has been indicated to facilitate ventilation and manage brain injuries, but there are no controlled studies on the use of NMBAs in the management of increased intracranial pressure. Depolarizing neuromuscular blocking agents, most notably succinylcholine, can worsen increased ICP due to induction of muscle contraction within the body.
Nutrition
Nutritional support is necessary in all patients with acute brain injury. Enteral feeding, delivered through a tube into the gastrointestinal tract, is the preferred method unless contraindicated. Additional attention must be placed on the solute concentration of the formulations to avoid free water intake, decreased serum osmolality, and worsening of the cerebral edema. Elevated blood glucose, or hyperglycemia, is associated with increased edema in patients with cerebral ischemia and increases the risk of a hemorrhagic transformation of ischemic stroke. Maintaining a normal blood glucose level of less than 180 mg/dL is suggested. However, tight glycemic control of blood glucose under 126 mg/dL is associated with worsening of stroke size.
Specific measures
Although cerebral edema is closely related to increased intracranial pressure (ICP) and cerebral herniation, and the general treatment strategies above are useful, the treatment should ultimately be tailored to the primary cause of the symptoms. The management of individual diseases is discussed separately.
The following interventions are more specific treatments for managing cerebral edema and increased ICP:
Osmotic therapy
The goal of osmotic therapy is to create a higher concentration of ions within the vasculature at the blood–brain barrier. This will create an osmotic pressure gradient and will cause the flow of water out of the brain and into the vasculature for drainage elsewhere. An ideal osmotic agent produces a favorable osmotic pressure gradient, is nontoxic, and is not filtered out by the blood–brain barrier. Hypertonic saline and mannitol are the main osmotic agents in use, while loop diuretics can aid in the removal of the excess fluid pulled out of the brain.
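As a rough illustration of how the serum osmolality discussed here is estimated at the bedside, a commonly used clinical approximation (included for context, not taken from this text; sodium in mmol/L, glucose and blood urea nitrogen in mg/dL) is:
Estimated serum osmolality ≈ 2 × [Na+] + [glucose]/18 + [BUN]/2.8
For example, a sodium of 140 mmol/L, glucose of 90 mg/dL, and BUN of 14 mg/dL give approximately 280 + 5 + 5 = 290 mOsm/kg, within the typical normal range of about 285–295 mOsm/kg.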
Hypertonic saline is a highly concentrated solution of sodium chloride in water and is administered intravenously. It has a rapid onset, with reduction of pressures within 5 minutes of infusion, lasting up to 12 hours in some cases, and with negligible rebound pressure. The exact volume and concentration of the hypertonic saline varies between clinical studies. Bolus doses, particularly at higher concentrations, for example 23.4%, are effective at reducing ICP and improving cerebral perfusion pressure. In traumatic brain injuries, a responsiveness to hypertonic saline lasting greater than 2 hours was associated with decreased chance of death and improved neurologic outcomes. The effects of hypertonic saline can be prolonged in combination with agents such as dextran or hydroxyethyl starch, although their use is currently controversial. When compared to mannitol, hypertonic saline has been shown to be as effective as mannitol in decreasing ICP in neurocritical care and is more effective in many cases. Hypertonic saline may be preferable to mannitol in persons with hypovolemia or hyponatremia.
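To give a sense of scale for the 23.4% concentration mentioned above, a back-of-the-envelope calculation (standard chemistry, not from the source): 23.4% saline contains about 234 g of NaCl per liter, or roughly 4 mol/L given a molar mass near 58.4 g/mol; since each NaCl dissociates into two osmotically active particles, the solution is on the order of 8,000 mOsm/L, versus a normal plasma osmolality of roughly 285–295 mOsm/kg, which is why even small boluses draw water out of the brain.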
Mannitol is an alcohol derivative of the simple sugar mannose, and is historically the most commonly used osmotic diuretic. Mannitol acts as an inert solute in the blood, decreasing ICP through osmosis as discussed above. Additionally, mannitol decreases ICP and increases cerebral perfusion pressure by increasing reabsorption of cerebrospinal fluid; it also dilutes and decreases the viscosity of the blood and can cause cerebral vasoconstriction. Furthermore, mannitol acts in a dose-dependent manner and will not lower ICP if it is not elevated. However, the common limitation of the use of mannitol is its tendency to cause low blood pressure (hypotension). Compared to hypertonic saline, mannitol may be more effective at increasing cerebral perfusion pressures and may be preferable in those with hypoperfusion.
Loop diuretics, commonly furosemide, act within kidney to increase excretion of water and solutes. Combination with mannitol produces a profound diuresis and increases the risk of systemic dehydration and hypotension. Their use remains controversial.
Acetazolamide, a carbonic anhydrase inhibitor, acts as a weak diuretic and modulates CSF production but has no role in the management of cerebral edema from acute brain injuries. It can be used in the outpatient management of cerebral edema caused by idiopathic intracranial hypertension (pseudotumor cerebri).
Glucocorticoids
Glucocorticoids, such as dexamethasone, have been shown to decrease tight-junction permeability and stabilize the blood–brain barrier. Their main use has been in the management of vasogenic cerebral edema associated with brain tumors, brain irradiation, and surgical manipulation. Glucocorticoids have not been shown to have any benefit in ischemic stroke and have been found to be harmful in traumatic brain injury. Due to the negative side effects (such as peptic ulcers, hyperglycemia, and impairment of wound healing), steroid use should be restricted to cases where it is absolutely indicated.
Hyperventilation
As mentioned previously, hypoxia and hypercapnia are potent vasodilators in the cerebral vasculature, leading to increased cerebral blood flow (CBF) and worsening of cerebral edema. Conversely, therapeutic hyperventilation can be used to lower the carbon dioxide content in the blood and reduce ICP through vasoconstriction. The effects of hyperventilation, although effective, are short-lived and, once removed, can often lead to a rebound elevation of ICP. Furthermore, overaggressive hyperventilation and vasoconstriction can lead to severe reduction in CBF and cause cerebral ischemia, or strokes. As a result, standard practice is to slowly reverse hyperventilation while more definitive treatments aimed at the primary cause are instituted. It is important to note that prolonged hyperventilation in those with traumatic brain injuries has been shown to worsen outcomes.
Barbiturates
Induction of a coma via the use of barbiturates, most notably pentobarbital and thiopental, after brain injury is used for secondary treatment of refractory ICP. Yet their use is not without controversy, and it is not clear whether barbiturates are favored over surgical decompression. In patients with traumatic brain injuries, barbiturates are effective in reducing ICP but have failed to show benefit to clinical outcomes. Evidence is limited for their use in cerebral diseases that include tumor, intracranial hypertension, and ischemic stroke. There are several adverse effects of barbiturates that limit their use, such as lowering of systemic blood pressure and cerebral perfusion pressure, cardiodepression, immunosuppression, and systemic hypothermia.
Hypothermia
As discussed previously in the treatment of fever, temperature control has been shown to decrease metabolic demand and reduce further ischemic injury. In traumatic brain injury, induced hypothermia may reduce the risks of mortality and poor neurologic outcome in adults. However, outcomes varied greatly with the depth and duration of hypothermia as well as rewarming procedures. In children with traumatic brain injury, there was no benefit to therapeutic hypothermia, and it increased the risk of mortality and arrhythmia. The adverse effects of hypothermia are serious and require clinical monitoring, including increased chance of infection, coagulopathy, and electrolyte derangement. The current consensus is that the adverse effects outweigh the benefits, and its use is restricted to clinical trials and increased ICP refractory to other therapies.
Surgery
The Monro-Kellie doctrine states that the skull is a fixed and inelastic space, so the accumulation of edema will compress vital brain tissue and blood vessels. Surgical treatment of cerebral edema in the context of cerebellar or cerebral infarction is typically done by removing part of the skull to allow expansion of the dura. This helps to reduce the volume constraints inside the skull. A decompressive hemicraniectomy is the most commonly used procedure. Multiple randomized clinical trials have shown reduced risk of death with hemicraniectomy compared with medical management. However, no individual study has shown an improvement in the percentage of survivors with good functional outcomes. Timing of decompressive craniectomy remains controversial, but it is generally suggested that the surgery is best performed before there are clinical signs of brainstem compression. Postoperative complications include wound dehiscence, hydrocephalus, and infection, and a substantial proportion of patients may also require tracheostomy and gastrostomy in the early phase after surgery.
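The doctrine reduces to a simple volume balance. A minimal formulation, treating edema or another expanding lesion as an additional compartment (the compartment labels here are the conventional ones, added for illustration rather than drawn from this text), is:

$$V_{\text{brain}} + V_{\text{CSF}} + V_{\text{blood}} + V_{\text{lesion}} = V_{\text{skull}} \approx \text{constant}$$

Any increase in one compartment must be offset by displacement of CSF or venous blood; once that compensatory reserve is exhausted, intracranial pressure rises steeply, which is why an expanding edema inside the fixed skull is dangerous.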
Outcomes
Cerebral edema is a severe complication of acute brain injuries, most notably ischemic stroke and traumatic brain injuries, and a significant cause of morbidity and mortality.
Cerebral edema is the cause of death in 5% of all patients with cerebral infarction, and mortality after large ischemic strokes with cerebral edema is roughly 20 to 30% despite medical and surgical interventions. Cerebral edema usually occurs between the second and fifth day after onset of symptoms. Large territory ischemic strokes can lead to the rapid development of malignant brain edema and increased intracranial pressure. Cerebral edema in the context of a malignant middle cerebral artery (MCA) infarct has a mortality of 50 to 80% if treated conservatively. Individuals with cerebral edema had a worse 3-month functional outcome than those without edema. These effects were more pronounced with increasing extent of cerebral edema and were independent of the size of the infarct.
Mild traumatic brain injury (TBI) represents 70–90% of all reported head injuries. The presence of brain edema on the initial CT scan of those with traumatic brain injuries is an independent prognostic indicator of in-hospital death. The association of brain edema with an increased in-hospital risk of death was observed in TBI across all levels of severity. Edema in the acute and chronic phases was associated with a worse neurologic and clinical outcome. Children with TBI and cerebral edema have worse clinical outcomes as well.
Epidemiology
As cerebral edema is present with many common cerebral pathologies, the epidemiology of the disease is not easily defined </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Urethral stricture and explain in detail? [/INST] A urethral stricture is a narrowing of the urethra, the tube connected to the bladder that allows the passing of urine. The narrowing reduces the flow of urine and makes it more difficult or even painful to empty the bladder. Urethral stricture is caused by injury, instrumentation, infection, and certain non-infectious forms of urethritis. The condition is more common in men due to their longer urethra.
Signs and symptoms
The hallmark sign of urethral stricture is a weak urinary stream. Other symptoms include:
Splaying of the urinary stream
Urinary frequency
Urinary urgency
Straining to urinate
Pain during urination
Urinary tract infection
Prostatitis
Inability to completely empty the bladder.
Some people with severe urethral strictures are completely unable to urinate. This is referred to as acute urinary retention, and is a medical emergency. Hydronephrosis and kidney failure may also occur.
Complications
Causes
Urethral strictures most commonly result from injury, urethral instrumentation, infection, non-infectious inflammatory conditions of the urethra, and after prior hypospadias surgery. Less common causes include congenital urethral strictures and those resulting from malignancy. Urethral strictures after blunt trauma can generally be divided into two sub-types:
Pelvic fracture-associated urethral disruption occurs in as many as 15% of severe pelvic fractures. These injuries are typically managed with suprapubic tube placement and delayed urethroplasty 3 months later. Early endoscopic realignment may be used in select cases instead of a suprapubic tube, but these patients should be monitored closely, as the vast majority of them will require urethroplasty.
Blunt trauma to the perineum compresses the bulbar urethra against the pubic symphysis, causing a "crush" injury. These patients are typically treated with a suprapubic tube and delayed urethroplasty. Other specific causes of urethral stricture include:
Instrumentation (e.g., after transurethral resection of prostate, transurethral resection of bladder tumor, or endoscopic kidney surgery)
Infection (typically with gonorrhea)
Lichen sclerosus
Surgery to address hypospadias can result in a delayed urethral stricture, even decades after the original surgery.
Diagnosis
Among ways to diagnose this condition is:
Cystoscopy
Urethrography
Treatment
Initial treatment usually involves urethral dilation (widening the tube) or urethrotomy, where the stricture is cut away with a cystoscope.
Dilation and urethrotomy
Urethral dilation and other endoscopic approaches such as direct vision internal urethrotomy (DVIU), laser urethrotomy, and self intermittent dilation are the most commonly used treatments for urethral stricture. However, these approaches are associated with low success rates and may worsen the stricture, making future attempts to surgically repair the urethra more difficult. A Cochrane review found that performing intermittent self-dilatation may confer a reduced risk of recurrent urethral stricture after endoscopic treatment, but the evidence is weak.
Urethroplasty
Urethroplasty refers to any open reconstruction of the urethra. Success rates range from 85% to 95% and depend on a variety of clinical factors, such as the cause, length, location, and caliber of the stricture. Urethroplasty can be performed safely on men of all ages. In the posterior urethra, anastomotic urethroplasty (with or without preservation of bulbar arteries) is typically performed after removing scar tissue. In the bulbar urethra, the most common types of urethroplasty are anastomotic (with or without preservation of corpus spongiosum and bulbar arteries) and substitution with buccal mucosa graft, full-thickness skin graft, or split-thickness skin graft. These are nearly always done in a single setting (or stage). In the penile urethra, anastomotic urethroplasties are rare because they can lead to chordee (penile curvature due to a shortened urethra). Instead, most penile urethroplasties are substitution procedures utilizing buccal mucosa graft, full-thickness skin graft, or split-thickness skin graft. These can be done in one or more settings, depending on stricture location, severity, cause and patient or surgeon preference.
Urethral stent
A permanent urethral stent was approved for use in men with bulbar urethral strictures in 1996, but was recently removed from the market. A temporary thermoexpandable urethral stent (Memotherm) is available in Europe but is not currently approved for use in the United States.
Emergency treatment
When in acute urinary retention, treatment of the urethral stricture or diversion is an emergency. Options include:
Urethral dilatation and catheter placement </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Sauk people and explain in detail? [/INST] The Sauk or Sac are a group of Native Americans of the Eastern Woodlands culture group, who lived primarily in the region of what is now Green Bay, Wisconsin, when first encountered by the French in 1667. Their autonym is oθaakiiwaki, and their exonym is Ozaagii(-wag) in Ojibwe. The latter name was transliterated into French and English by colonists of those cultures. Today they have three federally recognized tribes, together with the Meskwaki (Fox), located in Iowa, Oklahoma and Kansas.
History
Early history
The Sauk, an Algonquian-speaking people, are believed to have developed as a people along the St. Lawrence River, in what is now northern New York. The precise time is unknown, but around 1600 they were driven from the area of the St. Lawrence River. Some historians believe that the Sauk migrated to what is now eastern Michigan, where they settled around Saginaw Bay (Ojibwe: Zaagiinaad-wiikwed – "Of the Outlet Bay"). The Sauk are believed to have prospered for many years thereafter in the fertile Saginaw valley. They had been driven west by pressure from other tribes, especially the powerful Iroquois League or Haudenosaunee, which sought control over hunting grounds in the area.
The neighboring Anishinaabeg Ojibwe (Sauk name: Ochipwêwa) and Odawa peoples referred to them by the exonym Ozaagii(-wag), meaning "those at the outlet". French colonists transliterated that as Sac and the English as "Sauk". The Sauk/Sac called themselves by the autonym Othâkîwa, Thâkîwa, Thâkîwaki or Asaki-waki/Oθaakiiwaki, "people of the yellow earth" (literally "people coming forth [from the outlet]", i.e., "from the water"), which is often interpreted to mean "yellow-earth people" or "the Yellow-Earths", due to the yellow-clay soils found around Saginaw Bay. This interpretation possibly derived from the Sauk words Athâwethiwa or Athâw(i) ("yellow") and Neniwaki ("men, people"). This was later shortened to "Asaki-waki". In addition, the Fox (Meskwaki) were generally known among neighboring tribes as the "people of the red earth" – the Sauk and Fox also used this term: Êshkwîha or Meshkwahkîha ("people of the red earth").
Some Ojibwe oral histories also place the Sauk in the Saginaw Valley some time before the arrival of Europeans. Sauk traditions state that the tribe occupied the vicinity of the Saginaw River. (In this tradition, the name Saginaw comes from the Ojibwe "O-Sauk-e-non," meaning "land of the Sauks" or "where the Sauks were.") Approximately from the years 1638 to 1640, it is believed that a fierce battle ensued, nearly annihilating the entire Sauk Tribe. The story goes that the Chippewa inhabited the lands north of the Saginaw Bay, and the harsher northern climate caused more difficulty in prosperity compared to that of the Sauk occupying the area of Saginaw Valley. The Chippewa allied with the Ottawa, who resided south of the Sauk, and sprang a series of attacks on the Sauk Tribe, which practically decimated their people. One such attack, the Battle of Skull Island, occurred on an island which is now called Skull Island in Michigan. (Its name came from the many skulls and bones supposedly found in mounds on that island over the years.) In this battle, it is said that the Sauk had used their boats to cross part of the river, escape to the island, and were temporarily free from their attackers. But when morning came, ice had solidified the river enough for the Chippewa to cross. They killed every member of the Sauk tribe who had fled to that island except for twelve women, whom they later sent west of the Mississippi River. But the Sauk may have been mistakenly recorded as dwelling at this location near Lake Huron at that time. There is little archaeological evidence that the Sauk lived in the Saginaw area. In the early 17th century, when natives told French explorer Samuel de Champlain that the Sauk nation was located on the west shore of Lake Michigan, Champlain mistakenly placed them on the western shore of Lake Huron. This mistake was copied on subsequent maps, and future references identified this as the place of the Sauk. Champlain never visited what is now Michigan. Anishinaabe expansion and the Huron attempting to gain regional stability drove the Sac out of their territory. The Huron were armed with guns supplied by their French trading partners. The Sac moved south to territory in parts of what are now northern Illinois and Wisconsin. In the seventeenth century the Sauk also maintained close relations with the Potawatomi (Pehkînenîha or Shîshîpêhinenîha). This relationship is evidenced by borrowings of Sauk vocabulary that appear in the Potawatomi language.
In a loose coalition of tribes – including Dakota (Ashâha), Ojibwe, Odawa, Potawatomi, Kickapoo (Kîkâpôwa), Meskwaki (Fox), and Sauk, along with the Shawnee (Shâwanôwa), Cherokee (Shanahkîha), and Choctaw (Châkitâha) from the Southeast – they attacked the tribes of the Illinois Confederation ("Illinois/Inoca") (Mashkotêwa) and tried to invade their tribal areas. The "Illinois/Inoca" became their worst common enemies. The coalition warred for years until they destroyed the Illinois Confederation.
Later they moved out on the prairie (Mashkotêwi) along the Mississippi and adopted the semi-sedentary lifestyle of Plains Indians (Mashkotêwineniwa). In addition to hunting buffalo, they lived in villages, raised crops, and actively traded with other tribes. The Sauk and allied eastern tribes had to compete with tribes who already occupied this territory. Disputes and clashes arose with the Dakota, Pawnee (Pânîha) and, most of all, the powerful Osage (Washâsha).
The Sauk had good relations with the English (Thâkanâsha) through trading. At first, the Sauk had good relations with New France too, until their alliance with the Meskwaki (Fox) made them short-term enemies of the French (Mêmehtekôshîha, Wêmehtekôshîha).
A closely allied tribe, the Meskwaki (Fox), were noted for resisting French encroachment, having fought two wars against them in the early 18th century. After a devastating battle of 9 September 1730, in Illinois, in which hundreds of warriors were killed and many women and children taken captive by French allies, Fox refugees took shelter with the Sac. This made them subject to French attack in turn. The Sac continued moving west to Iowa and Kansas. Keokuk and Black Hawk were two important leaders who arose among the Sauk. At first Keokuk accepted the loss of land as inevitable in the face of the vast numbers of white soldiers and settlers coming west. He tried to preserve tribal land and his people, and to keep the peace.
Having failed to receive expected supplies from the Americans on credit, Black Hawk wanted to fight, saying his people were "forced into war by being deceived". Led by Black Hawk in 1832, the mainly Sac band resisted the continued loss of lands (in western Illinois, this time). Their warfare with United States forces resulted in defeat at the hands of General Edmund P. Gaines in the Black Hawk War.
Oklahoma history
About this time, one group of Sac moved into Missouri, and later to Kansas and Nebraska. In 1869, after the Civil War, the United States forced the larger group of Sac to move into a reservation in Indian Territory (now the state of Oklahoma). They merged with the Meskwaki as the federally recognized Sac and Fox Nation. (The United States had been making treaties with the two tribes together since their residency in the Midwest.) A smaller number returned to the Midwest from Oklahoma (or resisted leaving); they joined the Meskwaki at the Meskwaki Settlement, Iowa.
The land currently occupied by the Sauk is only a section of what used to be the Sac and Fox Reservation during the time of 1867–1891. This reservation was established by the U.S. and spanned 480,000 acres. In 1887, however, the Dawes Act failed to recognize Native American culture by dividing the reservation into small allotments designated for individual households.
The remainder of land not allotted to the Sac and Fox was then sold to non-Native settlers in an attempt to gain Oklahoma statehood and the full assimilation of its Native American population.
By 1889, 519 of the tribe were located in Indian Territory, what is now central Oklahoma. On June 10, 1890, they ceded these Indian Territory lands to the federal government.
Treaties with U.S.
Many of the later treaties listed have little to no recorded information regarding their details besides the date. In total, twenty-two treaties were signed between 1789 and 1891, more than a century apart.
Treaty of Fort Harmar
Treaty of Greenville
Treaty of St. Louis (1804)
Treaty of Portage des Sioux
Treaty of St. Louis (1816)
Treaty of St. Louis (1822)
Treaty of Washington
First Treaty of Prairie du Chien
Fourth Treaty of Prairie du Chien
1832 Treaty
September 27, 1836 Treaty
September 28, 1836 Treaty
September 28, 1836 Treaty
1837 Treaty
1837 Treaty
1842 Treaty
1854 Treaty (Missouri Sac and Fox)
1859 Treaty
1861 Treaty (Iowa Sac and Fox)
February 18, 1867 Treaty
June 10, 1872 "Act of Congress" (Missouri Sac and Fox)
February 13, 1891 "Act of Congress"
Clan system
The Sauk and Fox peoples were divided into two moieties or "divisions", which in turn were subdivided into patrilineages and clans as local subgroups (segments).
The moieties were known as the Kishko/Ki-sko-ha/Kîshkôha (male: Kîshkôha, female: Kîshkôhkwêha) ("the long-haired") and as the Oskush/Askasa/Shkasha (male: Shkasha/Oshkashîwiwa, female: Shkashîhkwêwa/Oshkashîhkwêwiwa) ("the brave"). The two moieties were symbolized by two colors: the Askasa/Shkasha painted their faces and partly their bodies with charcoal in mahkatêwâwi (black), and the Ki-sko-ha/Kîshkôha painted their bodies with white clay in wâpeshkyâwi (white). This duality was also celebrated by the two moieties in lacrosse, which was often played extremely brutally to toughen young warriors for combat, for recreation, as part of festivals, and as preparation for imminent wars or raids.
This division has survived to the present day, but is now more related to the political system of the United States: the supporters of the Democratic Party are associated with the Kîshkôha/Kîshkôhkwêha, while the supporters of the Republican Party are associated with the Shkasha/Shkashîhkwêwa.
Originally, the Sauk had a patrilineal and exogamous clan system, in which descent and inheritance were traced through the father. Clans, or Mîthonî, were distinguished and named on the basis of totem animals: Mahkwithowa (Bear Clan), Amehkwithowa (Beaver Clan), Peshekethiwithowa (Deer Clan), Ketiwithowa / Mekethiwithowa (Eagle Clan), Nemêthithowa (Fish Clan), Wâkoshêhithowa (Fox Clan), Kehchikamîwithowa (Ocean/Sea/Great Lake Clan), Keshêhokimâwithowa (Peace Clan), Ahpenîthowa (Potato Clan), Akônithowa (Snow Clan), Nenemehkiwithowa (Thunder Clan), Manethenôkimâwithowa (Warrior Clan), and Mahwêwithowa (Wolf Clan). Saukenuk or Saukietown (today: Black Hawk State Historic Site) near the mouth of the Rock River (Sinnissippi – "rocky waters") into the Mississippi (Mäsesibowi – "great river"), the most important Sauk settlement in the 18th and 19th centuries with about 4,000 inhabitants, was divided into 12 districts, which were assigned to the respective clans.
The tribe was governed by a council of sacred clan chiefs, a war chief, the heads of families, and the warriors. Chiefs were recognized in three categories: civil, war, and ceremonial. Only the civil chiefs were hereditary. The other two chiefs were recognized by bands after they demonstrated their ability or spiritual power.
This traditional manner of selecting historic clan chiefs and governance was replaced in the 19th century by the United States appointing leaders through their agents at the Sac and Fox Agency, or reservation in Indian Territory (now Oklahoma). In the 20th century, the tribe adopted a constitutional government patterned after the United States form. They elect their chiefs.
Federally recognized tribes
Today the federally recognized Sac and Fox tribes are:
Sac and Fox Nation (Meskwaki: Othâkîwaki, meaning: "People of the yellow earth"), headquartered in Stroud, Oklahoma;
Sac and Fox Tribe of the Mississippi in Iowa (Meskwaki: Meshkwahkîhaki, meaning: "People of the red earth"), headquartered in Tama, Iowa; and
Sac and Fox Nation of Missouri in Kansas and Nebraska (Meskwaki: Nîmahâhaki), headquartered in Reserve, Kansas.
Language
Sauk is a dialect of the Fox language, one of the many Algonquian languages. It is very closely related to the dialects spoken by the Meskwaki and the Kickapoo tribes. Each of the dialects contains archaisms and innovations that distinguish them from each other. Sauk and Meskwaki appear to be the most closely related of the three, reflecting the peoples' long relationship. Sauk is considered to be mutually intelligible, to a point, with Fox.
In their own language, the Sauk at one time called themselves asakiwaki [a-sak-i-wa-ki], "people of the outlet". The Sauk people have a syllabic orthography for their language. They published a Primer Book in 1975, based on a "traditional" syllabary that existed in 1906. It is intended to help modern-day Sauk learn to write and speak their ancestral tongue. A newer orthography was proposed around 1994 to aid in language revival. The former syllabary was aimed at remaining native speakers of Sauk; the more recent orthography was developed for native English speakers, as many Sauk grow up with English as their first language. Sauk has so few speakers that it is considered an endangered language, as are numerous others native to North America.
In 2005, A Concise Dictionary of the Sauk Language was published using the Algonquianist Standard Roman Orthography. In 2012, Shawnee High School in Shawnee, Oklahoma, began to offer a Sauk language course.
Sauk language loss
Use of Sauk as the official tribal language in everyday communication declined rapidly between 1935 and 1945. The chances of coming across a tribal member who can understand basic phrases of Sauk are small, since the main language spoken by the Sac and Fox today is English. The loss of Sauk, as with many other Native American languages, is rooted in American history. Speaking the native language was actively discouraged and often punished in Indian boarding schools during the late 19th and early 20th centuries, when the U.S. was attempting to assimilate Native Americans into European-American culture.
Another obstacle to the preservation of the language is that the original syllabary was intended for the use of native Sauk speakers, and its conventions were designed around the Sauk language. This is a problem because it is no longer as helpful for the majority of the Sac and Fox nation, as the main language of the tribe today is English. The issue arises in instances when Sauk is being taught at a school in the tribe, and an elder, who is fluent in the language, disagrees with the pronunciation being taught.
Phonology
Sauk does not have many phonemes in comparison to many other languages: four vowels, two semivowels, and nine consonants.
Consonants
The following consonant phonemes are given in Reinschmidt (1995):
The representation of /h/ was omitted in the 1977 syllabary </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Blood phobia and explain in detail? [/INST] Blood phobia (also known as hemophobia or hematophobia in American English and haemophobia or haematophobia in British English) is an extreme irrational fear of blood, a type of specific phobia. Severe cases of this fear can cause physical reactions that are uncommon in most other fears, specifically vasovagal syncope (fainting). Similar reactions can also occur with trypanophobia and traumatophobia. For this reason, these phobias are categorized as blood-injection-injury phobia by the DSM-IV. Some early texts refer to this category as "blood-injury-illness phobia."
Causes
Blood phobia is often caused by direct or vicarious trauma in childhood. Though some have suggested a possible genetic link, a study of twins suggests that social learning and traumatic events, rather than genetics, are of greater significance. Blood-injection-injury phobia (BII) affects about 4% of the population in the United States. The inclusion of BII within the category of specific or simple phobias in classificatory systems reflects a perception that fear has a primary role in the disorder. Consistent with this assumption, blood-injury phobia appears to share a common etiology with other phobias. Kendler, Neale, Kessler, Heath, and Eaves (1992) have argued from data comparing monozygotic with dizygotic twins that the genetic factor common to all phobias (agoraphobia, social phobia, and specific phobias) strongly predisposes a person to specific phobias.
The recognition of an inherited vulnerability common to all phobias is consistent with the notion that elevated trait anxiety predisposes one to anxiety disorders. Trait anxiety provides a background of affective arousal that permits a more rapid activation of the fight or flight response </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Anaphylaxis and explain in detail? [/INST] Anaphylaxis is a serious, potentially fatal allergic reaction and medical emergency that is rapid in onset and requires immediate medical attention regardless of use of emergency medication on site. It typically causes more than one of the following: an itchy rash, throat closing due to swelling which can obstruct or stop breathing; severe tongue swelling which can also interfere with or stop breathing; shortness of breath, vomiting, lightheadedness, loss of consciousness, low blood pressure, and medical shock. These symptoms typically start in minutes to hours and then increase very rapidly to life-threatening levels. Urgent medical treatment is required to prevent serious harm or death, even if the patient has used an epipen or has taken other medications in response, and even if symptoms appear to be improving.Common causes include allergies to insect bites and stings, allergies to foods – including nuts, milk, fish, shellfish, eggs and some fresh fruits or dried fruits; allergies to sulfites – a class of food preservatives and a byproduct in some fermented foods like vinegar; allergies to medications – including some antibiotics and non-steroidal anti-inflammatory drugs (NSAIDs) like aspirin; allergy to general anaesthetic (used to make people sleep during surgery); allergy to contrast agents – dyes used in some medical tests to help certain areas of your body show up better on scans; allergy to latex – a type of rubber found in some rubber gloves and condoms. Other causes can include physical exercise, and cases may also occur in some people due to escalating reactions to simple throat irritation or may also occur without an obvious reason. The mechanism involves the release of inflammatory mediators in a rapidly escalating cascade from certain types of white blood cells triggered by either immunologic or non-immunologic mechanisms. Diagnosis is based on the presenting symptoms and signs after exposure to a potential allergen or irritant and in some cases, reaction to physical exercise.The primary treatment of anaphylaxis is epinephrine injection into a muscle, intravenous fluids, then placing the person "in a reclining position with feet elevated to help restore normal blood flow". Additional doses of epinephrine may be required. Other measures, such as antihistamines and steroids, are complementary. Carrying an epinephrine autoinjector, commonly called an "epipen" and identification regarding the condition is recommended in people with a history of anaphylaxis. Immediately contacting ambulance / EMT services is always strongly recommended, regardless of any on site treatment. Getting to a doctor or hospital as soon as possible is absolutely required in all cases, even if it appears to be getting better.Worldwide, 0.05–2% of the population is estimated to experience anaphylaxis at some point in life. Globally, as underreporting declined into the 2010s, the rate appeared to be increasing. 
It occurs most often in young people and females. About 99.7% of people hospitalized with anaphylaxis in the United States survive. The term comes from the Ancient Greek: ἀνά, romanized: ana, lit. against, and the Ancient Greek: φύλαξις, romanized: phylaxis, lit. protection.
Signs and symptoms
Anaphylaxis typically presents many different symptoms over minutes or hours with an average onset of 5 to 30 minutes if exposure is intravenous and up to 2 hours if from eating food. The most common areas affected include: skin (80–90%), respiratory (70%), gastrointestinal (30–45%), heart and vasculature (10–45%), and central nervous system (10–15%) with usually two or more being involved.
Skin
Symptoms typically include generalized hives, itchiness, flushing, or swelling (angioedema) of the affected tissues. Those with angioedema may describe a burning sensation of the skin rather than itchiness. Swelling of the tongue or throat occurs in up to about 20% of cases. Other features may include a runny nose and swelling of the conjunctiva. The skin may also be blue tinged because of lack of oxygen.
Respiratory
Respiratory symptoms and signs that may be present include shortness of breath, wheezes, or stridor. The wheezing is typically caused by spasms of the bronchial muscles while stridor is related to upper airway obstruction secondary to swelling. Hoarseness, pain with swallowing, or a cough may also occur.
Cardiovascular
While a fast heart rate caused by low blood pressure is more common, a Bezold–Jarisch reflex has been described in 10% of people, where a slow heart rate is associated with low blood pressure. A drop in blood pressure or shock (either distributive or cardiogenic) may cause the feeling of lightheadedness or loss of consciousness. Rarely, very low blood pressure may be the only sign of anaphylaxis. Coronary artery spasm may occur with subsequent myocardial infarction, dysrhythmia, or cardiac arrest. Those with underlying coronary disease are at greater risk of cardiac effects from anaphylaxis. The coronary spasm is related to the presence of histamine-releasing cells in the heart.
Other
Gastrointestinal symptoms may include severe crampy abdominal pain, diarrhea, and vomiting. There may be confusion, a loss of bladder control or pelvic pain similar to that of uterine cramps. Dilation of blood vessels around the brain may cause headaches. A feeling of anxiety or of "impending doom" has also been described.
Causes
Anaphylaxis can occur in response to almost any foreign substance. Common triggers include venom from insect bites or stings, foods, and medication. Foods are the most common trigger in children and young adults, while medications and insect bites and stings are more common in older adults. Less common causes include: physical factors, biological agents such as semen, latex, hormonal changes, food additives and colors, and topical medications. Physical factors such as exercise (known as exercise-induced anaphylaxis) or temperature (either hot or cold) may also act as triggers through their direct effects on mast cells. Events caused by exercise are frequently associated with cofactors such as the ingestion of certain foods or taking an NSAID. In aspirin-exacerbated respiratory disease (AERD), alcohol is a common trigger. During anesthesia, neuromuscular blocking agents, antibiotics, and latex are the most common causes. The cause remains unknown in 32–50% of cases, referred to as "idiopathic anaphylaxis." Six vaccines (MMR, varicella, influenza, hepatitis B, tetanus, meningococcal) are recognized as a cause for anaphylaxis, and HPV may cause anaphylaxis as well.
Food and alcohol
Many foods can trigger anaphylaxis; this may occur upon the first known ingestion. Common triggering foods vary around the world due to cultural cuisine. In Western cultures, ingestion of or exposure to peanuts, wheat, nuts, certain types of seafood like shellfish, milk, fruit and eggs are the most prevalent causes. Sesame is common in the Middle East, while rice and chickpeas are frequently encountered as sources of anaphylaxis in Asia. Severe cases are usually caused by ingesting the allergen, but some people experience a severe reaction upon contact. Children can outgrow their allergies. By age 16, 80% of children with anaphylaxis to milk or eggs and 20% who experience isolated anaphylaxis to peanuts can tolerate these foods. Any type of alcohol, even in small amounts, can trigger anaphylaxis in people with AERD.
Medication
Any medication may potentially trigger anaphylaxis. The most common are β-lactam antibiotics (such as penicillin), followed by aspirin and NSAIDs. Other antibiotics are implicated less frequently. Anaphylactic reactions to NSAIDs are either agent-specific or occur among those that are structurally similar, meaning that people who are allergic to one NSAID can typically tolerate a different one or a different group of NSAIDs. Other relatively common causes include chemotherapy, vaccines, protamine and herbal preparations. Some medications (vancomycin, morphine, x-ray contrast among others) cause anaphylaxis by directly triggering mast cell degranulation. The frequency of a reaction to an agent partly depends on the frequency of its use and partly on its intrinsic properties. Anaphylaxis to penicillin or cephalosporins occurs only after the drug binds to proteins inside the body, with some agents binding more easily than others. Anaphylaxis to penicillin occurs once in every 2,000 to 10,000 courses of treatment, with death occurring in fewer than one in every 50,000 courses of treatment. Anaphylaxis to aspirin and NSAIDs occurs in about one in every 50,000 persons. If someone has a reaction to penicillin, his or her risk of a reaction to cephalosporins is greater but still less than one in 1,000. The old radiocontrast agents caused reactions in 1% of cases, while the newer lower-osmolar agents cause reactions in 0.04% of cases.
Venom
Venom from stinging or biting insects such as Hymenoptera (ants, bees, and wasps) or Triatominae (kissing bugs) may cause anaphylaxis in susceptible people. Previous reactions that are anything more than a local reaction around the site of the sting are a risk factor for future anaphylaxis; however, half of fatalities have had no previous systemic reaction.
Risk factors
People with atopic diseases such as asthma, eczema, or allergic rhinitis are at high risk of anaphylaxis from food, latex, and radiocontrast agents but not from injectable medications or stings. One study in children found that 60% had a history of previous atopic diseases, and of children who die from anaphylaxis, more than 90% have asthma. Those with mastocytosis or of a higher socioeconomic status are at increased risk.
Pathophysiology
Anaphylaxis is a severe allergic reaction of rapid onset affecting many body systems. It is due to the release of inflammatory mediators and cytokines from mast cells and basophils, typically triggered by an immunologic reaction but sometimes by a non-immunologic mechanism. Interleukin (IL)-4 and IL-13 are cytokines important in the initial generation of antibody and inflammatory cell responses to anaphylaxis.
Immunologic
In the immunologic mechanism, immunoglobulin E (IgE) binds to the antigen (the foreign material that provokes the allergic reaction). Antigen-bound IgE then activates FcεRI receptors on mast cells and basophils. This leads to the release of inflammatory mediators such as histamine. These mediators subsequently increase the contraction of bronchial smooth muscles, trigger vasodilation, increase the leakage of fluid from blood vessels, and cause heart muscle depression. There is also a non-immunologic mechanism that does not rely on IgE, but it is not known if this occurs in humans.
Non-immunologic
Non-immunologic mechanisms involve substances that directly cause the degranulation of mast cells and basophils. These include agents such as contrast medium, opioids, temperature (hot or cold), and vibration. Sulfites may cause reactions by both immunologic and non-immunologic mechanisms.
Diagnosis
Anaphylaxis is diagnosed on the basis of a person's signs and symptoms. When any one of the following three criteria occurs within minutes or hours of exposure to an allergen, there is a high likelihood of anaphylaxis:
Involvement of the skin or mucosal tissue plus either respiratory difficulty or a low blood pressure causing symptoms
Two or more of the following symptoms after a likely contact with an allergen:
a. Involvement of the skin or mucosa
b. Respiratory difficulties
c. Low blood pressure
d. Gastrointestinal symptoms
Low blood pressure after exposure to a known allergen.
Skin involvement may include hives, itchiness or a swollen tongue, among others. Respiratory difficulties may include shortness of breath, stridor, or low oxygen levels, among others. Low blood pressure is defined as a greater than 30% decrease from a person's usual blood pressure. In adults, a systolic blood pressure of less than 90 mmHg is often used. During an attack, blood tests for tryptase or histamine (released from mast cells) might be useful in diagnosing anaphylaxis due to insect stings or medications. However, these tests are of limited use if the cause is food or if the person has a normal blood pressure, and they are not specific for the diagnosis.
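Because the three criteria above amount to a simple decision rule, they can be written out compactly. The following Python sketch is illustrative only: the function names, argument structure, and combination logic are assumptions layered on the criteria as stated, not a clinical instrument.

# Hypothetical encoding of the three diagnostic criteria described above.
def low_blood_pressure(systolic_mmhg, baseline_systolic=None):
    # Per the text: a drop of more than 30% from the person's usual pressure,
    # or, in adults, a systolic pressure below 90 mmHg.
    if baseline_systolic is not None and systolic_mmhg < 0.7 * baseline_systolic:
        return True
    return systolic_mmhg < 90

def likely_anaphylaxis(skin_or_mucosa, respiratory_difficulty, low_bp,
                       gastrointestinal, likely_allergen_contact,
                       known_allergen_exposure):
    # Criterion 1: skin or mucosal involvement plus breathing difficulty or low BP.
    c1 = skin_or_mucosa and (respiratory_difficulty or low_bp)
    # Criterion 2: two or more of the listed findings after likely allergen contact.
    c2 = likely_allergen_contact and sum(
        [skin_or_mucosa, respiratory_difficulty, low_bp, gastrointestinal]) >= 2
    # Criterion 3: low blood pressure after exposure to a known allergen.
    c3 = known_allergen_exposure and low_bp
    return c1 or c2 or c3

# Example: hives plus stridor minutes after a bee sting satisfies criterion 1.
print(likely_anaphylaxis(True, True, False, False, True, False))  # True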
Classification
There are three main classifications of anaphylaxis.
Anaphylactic shock is associated with systemic vasodilation that causes low blood pressure, which is by definition 30% lower than the person's baseline or below standard values.
Biphasic anaphylaxis is the recurrence of symptoms within 1–72 hours after resolution of an initial anaphylactic episode. Estimates of incidence vary between less than 1% and up to 20% of cases. The recurrence typically occurs within 8 hours. It is managed in the same manner as anaphylaxis.
Anaphylactoid reaction, non-immune anaphylaxis, or pseudoanaphylaxis, is a type of anaphylaxis that does not involve an allergic reaction but is due to direct mast cell degranulation. Non-immune anaphylaxis is the current term, as of 2018, used by the World Allergy Organization with some recommending that the old terminology, "anaphylactoid", no longer be used.
Allergy skin testing
Allergy testing may help in determining the trigger. Skin allergy testing is available for certain foods and venoms. Blood testing for specific IgE can be useful to confirm milk, egg, peanut, tree nut and fish allergies. Skin testing is available to confirm penicillin allergies, but is not available for other medications. Non-immune forms of anaphylaxis can only be determined by history or exposure to the allergen in question, and not by skin or blood testing.
Differential diagnosis
It can sometimes be difficult to distinguish anaphylaxis from asthma, syncope, and panic attacks. Asthma, however, typically does not entail itching or gastrointestinal symptoms; syncope presents with pallor rather than a rash; and a panic attack may involve flushing but not hives. Other conditions that may present similarly include scombroidosis and anisakiasis.
Post-mortem findings
In a person who died from anaphylaxis, autopsy may show an "empty heart" attributed to reduced venous return from vasodilation and redistribution of intravascular volume from the central to the peripheral compartment. Other signs are laryngeal edema, eosinophilia in lungs, heart and tissues, and evidence of myocardial hypoperfusion. Laboratory findings could include increased levels of serum tryptase and an increase in total and specific IgE serum levels.
Prevention
Avoidance of the trigger of anaphylaxis is recommended. In cases where this may not be possible, desensitization may be an option. Immunotherapy with Hymenoptera venoms is effective at desensitizing 80–90% of adults and 98% of children against allergies to bees, wasps, hornets, yellowjackets, and fire ants. Oral immunotherapy may be effective at desensitizing some people to certain foods, including milk, eggs, nuts and peanuts; however, adverse effects are common. For example, many people develop an itchy throat, cough, or lip swelling during immunotherapy. Desensitization is also possible for many medications; however, it is advised that most people simply avoid the agent in question. In those who react to latex, it may be important to avoid cross-reactive foods such as avocados, bananas, and potatoes, among others.
Management
Anaphylaxis is a medical emergency that may require resuscitation measures such as airway management, supplemental oxygen, large volumes of intravenous fluids, and close monitoring. Passive leg raise may also be helpful in the emergency management. Administration of epinephrine is the treatment of choice with antihistamines and steroids (for example, dexamethasone) often used as adjuncts. A period of in-hospital observation for between 2 and 24 hours is recommended for people once they have returned to normal due to concerns of biphasic anaphylaxis.
Epinephrine
Epinephrine (adrenaline) (1 in 1,000) is the primary treatment for anaphylaxis, with no absolute contraindication to its use. It is recommended that an epinephrine solution be given intramuscularly into the mid anterolateral thigh as soon as the diagnosis is suspected. The injection may be repeated every 5 to 15 minutes if there is insufficient response. A second dose is needed in 16–35% of episodes, with more than two doses rarely required. The intramuscular route is preferred over subcutaneous administration because the latter may have delayed absorption. It is recommended that after diagnosis and treatment of anaphylaxis, the patient should be kept under observation in an appropriate clinical setting until symptoms have fully resolved. Minor adverse effects from epinephrine include tremors, anxiety, headaches, and palpitations. People on β-blockers may be resistant to the effects of epinephrine. In this situation, if epinephrine is not effective, intravenous glucagon can be administered, which has a mechanism of action independent of β-receptors. If necessary, epinephrine can also be given intravenously using a dilute solution. Intravenous epinephrine, however, has been associated both with dysrhythmia and myocardial infarction. Epinephrine autoinjectors used for self-administration typically come in two doses, one for adults or children who weigh more than 25 kg and one for children who weigh 10 to 25 kg.
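The two autoinjector strengths mentioned above are chosen by weight band, which can be expressed as a small lookup. This sketch only restates the band logic from the text; the function name and return labels are hypothetical, and the text describes no band below 10 kg.

def autoinjector_band(weight_kg):
    # Two strengths are described above: one for adults or children over 25 kg,
    # and one for children weighing 10 to 25 kg.
    if weight_kg > 25:
        return "adult-strength autoinjector"
    if 10 <= weight_kg <= 25:
        return "pediatric-strength autoinjector"
    return None  # no band is described in the text for weights under 10 kg

print(autoinjector_band(30))  # adult-strength autoinjector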
Adjuncts
Antihistamines (both H1 and H2), while commonly used and assumed effective based on theoretical reasoning, are poorly supported by evidence. A 2007 Cochrane review did not find any good-quality studies upon which to base recommendations and they are not believed to have an effect on airway edema or spasm. Corticosteroids are unlikely to make a difference in the current episode of anaphylaxis, but may be used in the hope of decreasing the risk of biphasic anaphylaxis. Their prophylactic effectiveness in these situations is uncertain. Nebulized salbutamol may be effective for bronchospasm that does not resolve with epinephrine. Methylene blue has been used in those not responsive to other measures due to its presumed effect of relaxing smooth muscle.
Preparedness
People prone to anaphylaxis are advised to have an allergy action plan. Parents are advised to inform schools of their children's allergies and what to do in case of an anaphylactic emergency. The action plan usually includes use of epinephrine autoinjectors, the recommendation to wear a medical alert bracelet, and counseling on avoidance of triggers. Immunotherapy is available for certain triggers to prevent future episodes of anaphylaxis. A multi-year course of subcutaneous desensitization has been found effective against stinging insects, while oral desensitization is effective for many foods.
Prognosis
In those in whom the cause is known and prompt treatment is available, the prognosis is good. Even if the cause is unknown, if appropriate preventive medication is available, the prognosis is generally good. If death occurs, it is usually due to either respiratory (typically asphyxia) or cardiovascular causes (shock), with 0.7–20% of cases causing death. There have been cases of death occurring within minutes. Outcomes in those with exercise-induced anaphylaxis are typically good, with fewer and less severe episodes as people get older.
Epidemiology
The number of people who get anaphylaxis is 4–100 per 100,000 persons per year, with a lifetime risk of 0.05–2%. About 30% of people get more than one attack. Exercise-induced anaphylaxis affects about 1 in 2,000 young people. Rates appear to be increasing: the numbers in the 1980s were approximately 20 per 100,000 per year, while in the 1990s it was 50 per 100,000 per year. The increase appears to be primarily for food-induced anaphylaxis. The risk is greatest in young people and females. Anaphylaxis leads to as many as 500–1,000 deaths per year (2.7 per million) in the United States, 20 deaths per year in the United Kingdom (0.33 per million), and 15 deaths per year in Australia (0.64 per million). Another estimate from the United States puts the death rate at 0.7 per million. Mortality rates have decreased between the 1970s and 2000s. In Australia, death from food-induced anaphylaxis occurs primarily in women, while deaths due to insect bites primarily occur in males. Death from anaphylaxis is most commonly triggered by medications.
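The per-million figures quoted above are simply annual deaths divided by population. For example, assuming a UK population of roughly 60 million (an assumed figure, used only to illustrate the arithmetic):

$$\frac{20\ \text{deaths/year}}{60 \times 10^{6}\ \text{people}} \approx 0.33\ \text{deaths per million per year}$$

and, similarly, roughly 800 US deaths per year against a population near 300 million gives about 2.7 per million.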
History
The condition of anaphylaxis has been known since ancient times. French physician François Magendie had described how rabbits were killed by repeated injections of egg albumin in 1839. However, the phenomenon was discovered by two French physiologists, Charles Richet and Paul Portier. In 1901, Albert I, Prince of Monaco requested Richet and Portier join him on a scientific expedition around the French coast of the Atlantic Ocean, specifically to study the toxin produced by cnidarians (like jellyfish and sea anemones). Richet and Portier boarded Albert's ship Princesse Alice II for ocean exploration to make collections of the marine animals. Richet and Portier extracted a toxin called hypnotoxin from their collection of jellyfish (though the real source was later identified as the Portuguese man o' war) and sea anemone (Actinia sulcata). In their first experiment on the ship, they injected a dog with the toxin in an attempt to immunise it; the dog instead developed a severe reaction (hypersensitivity). In 1902, they repeated the injections in their laboratory and found that dogs normally tolerated the toxin at first injection, but on re-exposure three weeks later with the same dose, they always developed fatal shock. They also found that the effect was not related to the dose of toxin used, as even small amounts in secondary injections were lethal. Thus, instead of inducing the tolerance (prophylaxis) they expected, they discovered that the toxin's effects were deadly. In 1902, Richet introduced the term aphylaxis to describe the condition of lack of protection. He later changed the term to anaphylaxis on grounds of euphony. The term is from the Greek ἀνά-, ana-, meaning "against", and φύλαξις, phylaxis, meaning "protection". On 15 February 1902, Richet and Portier jointly presented their findings before the Societé de Biologie in Paris </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Supermarket and explain in detail? [/INST] A supermarket is a self-service shop offering a wide variety of food, beverages and household products, organized into sections. This kind of store is larger and has a wider selection than earlier grocery stores, but is smaller and more limited in the range of merchandise than a hypermarket or big-box market. In everyday U.S. usage, however, "grocery store" is synonymous with supermarket, and is not used to refer to other types of stores that sell groceries.The supermarket typically has places for fresh meat, fresh produce, dairy, deli items, baked goods, etc.
Shelf space is also reserved for canned and packaged goods and for various non-food items such as kitchenware, household cleaners, pharmacy products and pet supplies. Some supermarkets also sell other household products that are consumed regularly, such as alcohol (where permitted), medicine, and clothing, and some sell a much wider range of non-food products: DVDs, sporting equipment, board games, and seasonal items (e.g., Christmas wrapping paper in December).
A larger full-service supermarket combined with a department store is sometimes known as a hypermarket. Other services may include those of banks, cafés, childcare centers/creches, insurance (and other financial services), mobile phone services, photo processing, video rentals, pharmacies, and gas stations. If the eatery in a supermarket is substantial enough, the facility may be called a "grocerant", a blend of "grocery" and "restaurant".The traditional supermarket occupies a large amount of floor space, usually on a single level. It is usually situated near a residential area in order to be convenient to consumers. The basic appeal is the availability of a broad selection of goods under a single roof, at relatively low prices. Other advantages include ease of parking and frequently the convenience of shopping hours that extend into the evening or even 24 hours of the day. Supermarkets usually allocate large budgets to advertising, typically through newspapers. They also present elaborate in-shop displays of products.
Supermarkets typically are chain stores, supplied by the distribution centers of their parent companies, thus increasing opportunities for economies of scale. Supermarkets usually offer products at relatively low prices by using their buying power to buy goods from manufacturers at lower prices than smaller stores can. They also minimise financing costs by paying for goods at least 30 days after receipt, and some extract credit terms of 90 days or more from vendors. Certain products (typically staple foods such as bread, milk and sugar) are very occasionally sold as loss leaders so as to attract shoppers to their store. Supermarkets make up for their low margins with a high volume of sales and with higher-margin items bought by the attracted shoppers. Self-service with shopping carts (trolleys) or baskets reduces labor costs, and many supermarket chains are attempting further reduction by shifting to self-service check-out.
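The low-margin, high-volume model rests on the arithmetic that profit equals margin times sales, so a thin margin on large turnover can match a fat margin on small turnover. With purely illustrative, assumed figures:

$$0.02 \times \$50{,}000{,}000 = 0.20 \times \$5{,}000{,}000 = \$1{,}000{,}000$$

which is why loss leaders that draw shoppers in can pay for themselves through the volume of higher-margin items sold alongside.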
History
In the early days of retailing, generally an assistant fetched products from shelves behind the merchant's counter while customers waited in front of the counter, indicating the items they wanted. Most foods and merchandise did not come in individually wrapped consumer-sized packages, so an assistant measured out and wrapped the precise amount requested by the consumer. This offered opportunities for social interaction: many regarded this style of shopping as "a social occasion" and would often "pause for conversations with the staff or other customers". These practices were by nature slow and labor-intensive, and therefore also quite expensive. The number of customers who could be attended to at one time was limited by the number of staff employed in the store. Shopping for groceries also often involved trips to multiple specialty shops, such as a greengrocer, butcher, bakery, fishmonger and dry goods store, in addition to a general store. Milk and other items of short shelf life were delivered by a milkman.
The concept of an inexpensive food market relying on economies of scale was developed by Vincent Astor. He founded the Astor Market in 1915, investing $750,000 of his fortune into a 165′ by 125′ (50×38-metre) corner of 95th and Broadway, Manhattan, creating, in effect, an open-air mini-mall that sold meat, fruit, produce and flowers. The expectation was that customers would come from great distances ("miles around"), but in the end, even attracting people from ten blocks away was difficult, and the market folded in 1917. The concept of a self-service grocery store was developed by entrepreneur Clarence Saunders and his Piggly Wiggly stores, the first of which opened in 1916. Saunders was awarded several patents for the ideas he incorporated into his stores. The stores were a financial success and Saunders began to offer franchises.
The Great Atlantic & Pacific Tea Company, which was established in 1859, was another successful early grocery store chain in Canada and the United States, and became common in North American cities in the 1920s. Early self-service grocery stores did not sell fresh meats or produce. Combination stores that sold perishable items were developed in the 1920s.
The general trend since then has been to stock shelves at night so that customers, the following day, can obtain their own goods and bring them to the front of the store to pay for them. Although there is a higher risk of shoplifting, the costs of appropriate security measures ideally will be outweighed by reduced labor costs. Historically, there has been debate about the origin of the supermarket, with King Kullen and Ralphs of California having strong claims. Other contenders included Weingarten's and Henke & Pillot. To end the debate, the Food Marketing Institute, in conjunction with the Smithsonian Institution and with funding from H.J. Heinz, researched the issue. They defined the attributes of a supermarket as "self-service, separate product departments, discount pricing, marketing and volume selling". They determined that the first true supermarket in the United States was opened by a former Kroger employee, Michael J. Cullen, on 4 August 1930, inside a 6,000-square-foot (560 m2) former garage in Jamaica, Queens in New York City. The store, King Kullen, operated under the slogan "Pile it high. Sell it low." At the time of Cullen's death in 1936, there were seventeen King Kullen stores in operation. Although Saunders had brought the world self-service, uniform stores, and nationwide marketing, Cullen built on this idea by adding separate food departments, selling large volumes of food at discount prices and adding a parking lot.
Other established American grocery chains in the 1930s, such as Kroger and Safeway Inc., at first resisted Cullen's idea, but eventually were forced to build their own supermarkets as the economy sank into the Great Depression, while consumers were becoming price-sensitive at a level never experienced before. Kroger took the idea one step further and pioneered the first supermarket surrounded on all four sides by a parking lot. As larger chain supermarkets began to dominate the market in the US, able to supply consumers with the desired lower prices as opposed to the smaller "mom and pop" stands with considerably more overhead costs, the backlash of this infrastructure alteration was seen through numerous anti-chain campaigns. The idea of "monopsony", proposed by Cambridge economist Joan Robinson in 1933, that a single buyer could out-power the market of multiple sellers, became a strong anti-chain rhetorical device. With public backlash came political pressure to even the playing field for smaller vendors lacking the luxury of economies of scale. In 1936, the Robinson-Patman Act was implemented as a way of preventing such larger chains from using this buying power to reap advantages over smaller stores, although the act was not well enforced and did not have much impact on the prevention of larger chains overtaking power in the markets. Supermarkets proliferated across Canada and the United States with the growth of automobile ownership and suburban development after World War II. Most North American supermarkets are located in suburban strip shopping centers as an anchor store along with other smaller retailers. They are generally regional rather than national in their company branding. Kroger is perhaps the most nationally oriented supermarket chain in the United States, but it has preserved most of its regional brands, including Ralphs, City Market, King Soopers, Fry's, Smith's, and QFC.
In Canada, the largest such company is Loblaw, which operates stores under a variety of banners targeted to different segments and regions, including Fortinos, Zehrs, No Frills, the Real Canadian Superstore, and Loblaws, the foundation of the company. Sobeys is Canada's second largest supermarket chain, with locations across the country operating under many banners (IGA in Quebec). Québec's first supermarket opened in 1934 in Montréal, under the banner Steinberg's. In the United Kingdom, self-service shopping took longer to become established. Even in 1947, there were just ten self-service shops in the country. In 1951, ex-US Navy sailor Patrick Galvani, son-in-law of the Express Dairies chairman, made a pitch to the board to open a chain of supermarkets across the country. The UK's first supermarket, under the new Premier Supermarkets brand, opened in Streatham, South London, taking ten times as much per week as the average British general store of the time. Other chains caught on, and after Galvani lost out to Tesco's Jack Cohen in 1960 to buy the 212-store Irwins chain, the sector underwent a large amount of consolidation, resulting in the "big four" that dominate the UK market today: Tesco, Asda, Sainsbury's and Morrisons.
In the 1950s, supermarkets frequently issued trading stamps as incentives to customers. Today, most chains issue store-specific "membership cards", "club cards", or "loyalty cards". These typically enable the cardholder to receive special members-only discounts on certain items when the credit card-like device is scanned at check-out. The sale of selected data generated by club cards is becoming a significant revenue stream for some supermarkets.
Traditional supermarkets in many countries face intense competition from discounters such as Wal-Mart, Aldi and Lidl, which are typically non-union and operate with greater buying power. Other competition exists from warehouse clubs such as Costco that offer savings to customers buying in bulk quantities. Superstores, such as those operated by Wal-Mart and Asda, often offer a wide range of goods and services in addition to foods. In Australia, Aldi, Woolworths and Coles are the major players running the industry, with fierce competition among all three. The rising market share of Aldi has forced the other two to cut prices and increase their private label product ranges. The proliferation of such warehouse and superstores has contributed to the continuing disappearance of smaller, local grocery stores; increased dependence on the automobile; suburban sprawl, because of the necessity for large floor space; and increased vehicular traffic. For example, in 2009, 51% of Wal-Mart's $251 billion domestic sales were recorded from grocery goods. Some critics consider the chains' common practice of selling loss leaders to be anti-competitive. They are also wary of the negotiating power that large, often multinational, chains have with suppliers around the world.
Online-only supermarkets (21st century)
During the dot-com boom, Webvan, an online-only supermarket, was formed; it went bankrupt after three years and was acquired by Amazon. The British online supermarket Ocado, which uses a high degree of automation in its warehouses, was the first successful online-only supermarket. Ocado expanded into providing services to other supermarket firms, such as Waitrose and Morrisons.
Grocery stores such as Walmart employ food delivery services offered by third parties such as DoorDash. Other online food delivery services, such as Deliveroo in the United Kingdom, have begun to pay specific attention to supermarket delivery. Delivery robots are offered by various companies partnering with supermarkets.
Micro-fulfillment centers (MFCs) are relatively small warehouses with sophisticated automated rack-and-tote systems which prepare orders for pickup and delivery. Once an order is complete, the customer picks it up (i.e. "click-and-collect") or has it fulfilled via home delivery. Supermarkets are investing in micro-fulfillment centers with the hope that automation can help reduce the costs of e-commerce by shortening the distance from store to home and speeding up deliveries. In short, MFCs are said by many to be the key to profitably fulfilling online orders.
Types
U.S. categorization
The U.S. FMI food industry association, drawing on research by Willard Bishop, defines a number of formats (store types) that sell groceries.
Organic and environmentally-friendly supermarkets
Some supermarkets are focusing on selling more (or even exclusively) organically certified produce. Others are trying to differentiate themselves by selling fewer (or no) products containing palm oil, as demand for palm oil is a main driver of rainforest destruction.
As a response to growing concern over the heavy use of petroleum-based plastics for food packaging, so-called "zero waste" and "plastic-free" supermarkets and groceries are on the rise.
Growth in developing countries
Beginning in the 1990s, the food sector in developing countries has rapidly transformed, particularly in Latin America, South-East Asia, India, China and South Africa. With growth has come considerable competition and some amount of consolidation. The growth has been driven by increasing affluence and the rise of a middle class; the entry of women into the workforce, with a consequent incentive to seek out easy-to-prepare foods; the growth in the use of refrigerators, making it possible to shop weekly instead of daily; and the growth in car ownership, facilitating journeys to distant stores and purchases of large quantities of goods. The opportunities presented by this potential have encouraged several European companies to invest in these markets (mainly in Asia) and American companies to invest in Latin America and China. Local companies also entered the market. Initial development of supermarkets has now been followed by hypermarket growth. In addition, there were investments by companies such as Makro and Metro Cash and Carry in large-scale cash-and-carry operations.
While the growth in sales of processed foods in these countries has been much more rapid than the growth in fresh food sales, the imperative for supermarkets to achieve economies of scale in purchasing means that their expansion in these countries has important repercussions for small farmers, particularly those growing perishable crops. New supply chains have developed, involving cluster formation; the development of specialized wholesalers; leading farmers organizing supply; and farmer associations or cooperatives. In some cases supermarkets have organized their own procurement from small farmers; in others, wholesale markets have adapted to meet supermarket needs.
Typical supermarket merchandise
Larger supermarkets in North America and in Europe typically sell many items among many brands, sizes and varieties; hypermarkets additionally have a larger range of non-food categories such as clothing, electronics, household decoration and appliances. U.S. publisher Supermarket News lists the following categories, for example:
Bakery (packaged and sometimes a service bakery and/or onsite bakery)
Beverages (non-alcoholic packaged, sometimes also alcoholic if laws permit)
Nonfood & Pharmacy (e.g. cigarettes, lottery tickets and over-the-counter medications (as laws permit), DVD rentals, books and magazines, including supermarket tabloids, greeting cards, toys, and a small selection of home goods like light bulbs and housewares (typically limited))
Personal care e.g. cosmetics, soap, shampoo
Produce (fresh fruits and vegetables)
Floral (flowers and plants)
Deli (sliced meats, cheeses, etc.)
Prepared Foods (packaged and frozen foods)
Meat (fresh packaged, frozen, sometimes with a butcher service counter)
Seafood (fresh packaged, frozen, sometimes with a service counter)
Dairy (milk products and eggs)
Center store (e.g. detergent, paper products, household cleaning supplies)
Multicultural (ethnic foods)
Bulk dried foods
Animal foods, toys and products
Layout strategies
Most merchandise is already packaged when it arrives at the supermarket. Packages are placed on shelves, arranged in aisles and sections according to type of item. Some items, such as fresh produce, are stored in bins. Those requiring an intact cold chain are in temperature-controlled display cases.
While branding and store advertising will differ from company to company, the layout of a supermarket remains virtually unchanged across the industry. Although big companies spend time giving consumers a pleasant shopping experience, the design of a supermarket is directly connected to the in-store marketing that supermarkets must conduct to get shoppers to spend more money while there.
Every aspect of the store is mapped out and attention is paid to color, wording and even surface texture. The overall layout of a supermarket is a visual merchandising project that plays a major role. Stores can creatively use a layout to alter customers' perceptions of the atmosphere. Alternatively, they can enhance the store's atmospherics through visual communications (signs and graphics), lighting, colors, and even scents. For example, to give a sense of the supermarket being healthy, fresh produce is deliberately located at the front of the store. In terms of bakery items, supermarkets usually dedicate 30 to 40 feet of store space to the bread aisle. Supermarkets are designed to "give each product section a sense of individual difference and this is evident in the design of what is called the anchor departments; fresh produce, dairy, delicatessen, meat and the bakery". Each section has different floor coverings, style, lighting and sometimes even individual service counters to allow shoppers to feel as if there are a number of markets within this one supermarket.
Marketers use well-researched techniques to try to control purchasing behavior. The layout of a supermarket is considered by some to consist of a few rules of thumb and three layout principles. High-draw products are placed in separate areas of the store to keep drawing the consumer through the store. High-impulse and high-margin products are placed in the most prominent areas to grab attention. Power products are placed on both sides of the aisle to create increased product awareness, and end caps are used to give high exposure to a certain product, whether on special, on promotion, in a campaign, or as a new line.
The first principle of the layout is circulation. Circulation is created by arranging product so the supermarket can control the traffic flow of the consumer. Along this path, there will be high-draw, high-impulse items that will influence the consumer to make purchases which they did not originally intend. Service areas such as restrooms are placed in a location which draws the consumer past certain products to create extra buys. Necessity items such as bread and milk are found at the rear of the store to increase circulation through the whole store. Cashiers' desks are placed in a position to promote circulation. In most supermarkets, the entrance will be on the right-hand side because some research suggests that consumers who travel in a counter-clockwise direction spend more. However, other researchers have argued that consumers moving in a clockwise direction can form better mental maps of the store, leading to higher sales in turn. The second principle of the layout is coordination. Coordination is the organized arrangement of product that promotes sales. Products such as fast-selling and slow-selling lines are placed in strategic positions in aid of the overall sales plan. Managers sometimes place different items in fast-selling places to increase turnover or to promote a new line.
The third principle is consumer convenience. The layout of a supermarket is designed to create a high degree of convenience for the consumer, to make the shopping experience pleasant and increase customer spending. This is done through the character of merchandising and product placement. There are many different ideas and theories about layout and how product layout can influence the purchases made. One theory suggests that products of a similar or complementary nature are placed together or near one another to increase the average customer spend. This strategy is used to create cross-category sales similarity: in other words, the toothpaste is next to or adjacent to the toothbrushes, and the tea and coffee are down the same aisle as the sweet biscuits. Placing these complementary products near one another is one way marketers try to increase purchases. For vertical placement, cheap generic brands tend to be on the lowest shelves, products appealing to children are placed at mid-thigh level, and the most profitable brands are placed at eye level. The fourth principle is the use of color psychology, and the locations of the food, similar to its use in fast food branding.
Consumer psychologists suggest that most buyers tend to enter the store and shop to their right first. Some supermarkets, therefore, choose to place the entrance on the left-hand side, as the consumer will likely turn right upon entry; this allows the consumer to do a full anticlockwise circuit of the store before returning to the checkouts. This suggests that supermarket marketers should use this theory to their advantage by placing their temporary displays of products on the right-hand side to entice shoppers into making an unplanned purchase. Furthermore, aisle ends are extremely popular with product manufacturers, who pay top dollar to have their products located there. These aisle ends are used to lure customers into making a snap purchase and to also entice them to shop down the aisle. The most obvious place where supermarket layout influences consumers is at the checkout, where small displays of chocolates, magazines, and drinks tempt shoppers while they wait to be served.
Criticisms
The large scale of supermarkets, while often improving cost and efficiency for customers, can place significant economic pressure on suppliers and smaller shopkeepers.
Supermarkets often generate considerable food waste, although modern technologies such as biomethanation units may be able to process the waste into an economical source of energy. Purchase tracking may also help, as supermarkets become better able to size their stock of perishable goods, reducing food spoilage.
See also
Hypermarket
List of grocers
Short food supply chains
Farmers' markets
Types of retail outlets
Effects of the car on societies
References
Further reading
Greer, William R.; Logan, John A.; Willis, Paul S. (1986). America the Bountiful: How the Supermarket Came to Main Street : an Oral History. Washington, D.C.: Food Marketing Institute in cooperation with Beatrice Companies. OCLC 14357784.
Longstreth, R. W. (1999). The Drive-In, the Supermarket, and the Transformation of Commercial Space in Los Angeles, 1914-1941. The MIT Press.
Lorr, B. (2020). The Secret Life of Groceries: The Dark Miracle of the American Supermarket. Avery.
Newman, K </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Neurofibromatosis and explain in detail? [/INST] Neurofibromatosis (NF) is a group of three conditions in which tumors grow in the nervous system. The three types are neurofibromatosis type I (NF1), neurofibromatosis type II (NF2), and schwannomatosis. In NF1 symptoms include light brown spots on the skin, freckles in the armpit and groin, small bumps within nerves, and scoliosis. In NF2, there may be hearing loss, cataracts at a young age, balance problems, flesh colored skin flaps, and muscle wasting. In schwannomatosis there may be pain either in one location or in wide areas of the body. The tumors in NF are generally non-cancerous. The cause is a genetic mutation in certain tumor suppressor genes. These can be inherited from a person's parents, or in about half of cases spontaneously occur during early development. Different mutations result in the three types of NF. The tumors in neurofibromatosis arise from the supporting cells of the nervous system rather than the neurons themselves. In NF1, the tumors are neurofibromas (tumors of the peripheral nerves), while in NF2 and schwannomatosis tumors of Schwann cells are more common. Diagnosis is typically based on symptoms, examination, medical imaging, and biopsy. Genetic testing may rarely be done to support the diagnosis. There is no known prevention or cure. Surgery may be done to remove tumors that are causing problems or have become cancerous. Radiation and chemotherapy may also be used if cancer occurs. A cochlear implant or auditory brainstem implant may help some who have hearing loss due to the condition. In the United States, about 1 in 3,500 people have NF1 and 1 in 25,000 have NF2. Males and females are affected equally often. In NF1, symptoms are often present at birth or develop before 10 years of age. While the condition typically worsens with time, most people with NF1 have a normal life expectancy. In NF2, symptoms may not become apparent until early adulthood. NF2 increases the risk of early death. Descriptions of the condition occur as far back as the 1st century. It was formally described by Friedrich Daniel von Recklinghausen in 1882, after whom it was previously named.
Signs and symptoms
Neurofibromatosis type 1 in early life may cause learning and behavior problems – about 60% of children who have NF1 have mild difficulty in school. Signs the individual might have are as follows:
Six or more light brown dermatological spots ("café au lait spots")
At least two neurofibromas
At least two growths on the eye's iris
Abnormal growth of the spine (scoliosis)
People with neurofibromatosis type 2 can exhibit the same type of skin symptoms as type 1, but not necessarily in every case. The symptom most characteristic of NF2 is hearing loss. The hearing loss occurs due to the pressure of tumors on the acoustic nerve. The same pressure can cause headaches, dizziness, and nausea. The main symptom of schwannomatosis is localized pain. This pain is due to tissues and nerves experiencing more pressure because of nearby tumors.
Cause
The three types of neurofibromatosis are caused by different mutations on chromosomes. NF1 is caused by a mutation on the NF1 gene on the long arm of chromosome 17. NF2 is caused by a mutation on the NF2 tumor suppressor gene on chromosome 22. Schwannomatosis is caused by various mutations on chromosome 22. Neurofibromatosis is an autosomal dominant disorder, which means only one copy of the affected gene is needed for the disorder to develop. If one parent has neurofibromatosis, his or her children have a 50% chance of developing the condition as well. The severity of the parent's condition does not affect the child; the affected child may have mild NF1 even though it was inherited from a parent with a severe form of the disorder. The types of neurofibromatosis are:
Neurofibromatosis type I, in which the nerve tissue grows tumors (neurofibromas) that may be benign, but may cause serious damage by compressing nerves and other tissues.
Neurofibromatosis type II, in which bilateral acoustic neuromas (tumors of the vestibulocochlear nerve or cranial nerve 8 (CN VIII) also known as schwannoma) develop, often leading to hearing loss.
Schwannomatosis, in which painful schwannomas develop on spinal and peripheral nerves.
Pathophysiology
Neurofibromatosis type I is caused by a mutation on chromosome 17 encoding a cytoplasmic protein known as neurofibromin. This protein is a tumor suppressor and therefore serves as a signal regulator of cell proliferation and differentiation. A dysfunction or lack of neurofibromin can affect regulation and cause uncontrolled cell proliferation, leading to the tumors (neurofibromas) that characterize NF1. The neurofibromas caused by NF consist of Schwann cells, fibroblasts, perineuronal cells, mast cells and axons embedded in an extracellular matrix. Another function of neurofibromin is to bind to microtubules, which plays a role in the release and activity of adenylyl cyclase. Adenylyl cyclase plays an essential role in cognition; neurofibromin's role in the activity of adenylyl cyclase explains why patients with NF experience cognitive impairment. Neurofibromatosis type II is caused by a mutation on chromosome 22. The mutation falls on the NF2 tumor suppressor gene. The gene normally encodes a cytoplasmic protein known as merlin. The normal function of merlin is to regulate the activity of multiple growth factors; the mutated copy of the gene leads to merlin's loss of function. The loss of function leads to increased activity of growth factors normally regulated by merlin, leading to the formation of the tumors associated with NF2. Schwannomatosis is caused by a mutation on the SMARCB1 gene. This gene is located near the NF2 tumor suppressor gene, which led to the thought that schwannomatosis and NF2 were the same condition; however, the two conditions show different mutations on two different genes. The normal function of the SMARCB1 gene is to encode a protein called SMARCB1 that is part of a larger protein complex whose function is not completely understood. The complex including SMARCB1 plays a role in tumor suppression. The mutation of the SMARCB1 gene causes a loss of function in the complex, leading to the formation of tumors indicative of schwannomatosis.
Diagnosis
The neurofibromatoses are considered RASopathies and members of the neurocutaneous syndromes (phakomatoses).
The diagnosis of neurofibromatosis is done via the following means:
Differential diagnosis
Conditions similar to NF include:
Treatment
Surgical removal of tumors is an option; however, the risks involved should be assessed first. With regard to OPG (optic pathway gliomas), the preferred treatment is chemotherapy. However, radiotherapy is not recommended in children who present with this disorder. It is recommended that children diagnosed with NF1 at an early age have an examination each year, which allows any potential growths or changes related to the disorder to be monitored.
Prognosis
In most cases, symptoms of NF1 are mild, and individuals live normal and productive lives. In some cases, however, NF1 can be severely debilitating and may cause cosmetic and psychological issues. The course of NF2 varies greatly among individuals. In some cases of NF2, the damage to nearby vital structures, such as other cranial nerves and the brain stem, can be life-threatening. Most individuals with schwannomatosis have significant pain. In some extreme cases, the pain will be severe and disabling.
Epidemiology
In the United States, about 1 in 3,500 people have NF1, 1 in 25,000 have NF2, and 1 in 40,000 have schwannomatosis. Males and females are affected equally often in all three conditions </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Glossary of cricket terms and explain in detail? [/INST] This is a general glossary of the terminology used in the sport of cricket. Where words in a sentence are also defined elsewhere in this article, they appear in italics. Certain aspects of cricket terminology are explained in more detail in cricket statistics and the naming of fielding positions is explained at fielding (cricket).
Cricket is known for its rich terminology. Some terms are often thought to be arcane and humorous by those not familiar with the game.
A
Across the line
A shot which is played with the bat moving lateral to the direction of motion of the ball. Used when the batsman is aiming square or behind square, but requires excellent timing. Considered risky, as mistiming the shot can result in a leading edge, being strangled, or missing the ball entirely and being out bowled or leg before wicket.
Action
See bowling action
Agricultural shot
A powerful slog shot across the line (resembling a scything motion), played with little technique or footwork, particularly one that damages the pitch with the bat. May result in the ball going to cow corner.
Air
A delivery by a spin bowler on a higher trajectory than usual, typically phrased as giving it some air. The term flight is a near synonym. Can be combined with top spin or back spin to deceive the batsman on the length of the delivery, or with off spin or leg spin to give the ball more time to drift.
All out
An innings which has ended due to the batting side running out of wickets, usually because ten of the eleven batsmen have been dismissed. It also applies when, through a combination of dismissals and players who have retired or are absent hurt, the team has only one able batsman remaining. The term is a slight misnomer, as there is always one batsman left not out.
All-rounder
Traditionally, a player adept at both batting and bowling. Some recent sources regard a wicket-keeper/batsman as another type of all-rounder, but this usage is not universal.
Amateur
1. A cricketer who plays for pleasure, not pay. A non-professional.
2. A former official distinction between players of differing social class, important in 18th and 19th century England. Amateurs (also known as gentlemen) were upper class, had usually been to private school and/or Oxbridge, and could claim expenses but not receive any other payment. Professionals (also known as players) were working class, paid wages, and relied on cricket as their primary source of income. Most county cricket teams consisted of a mix of amateurs and professionals; the captain was always an amateur. The annual Gentlemen vs Players match pitched the two classes against each other. The distinction became obsolete in the early 20th century and was finally abolished in 1962.
Analysis
see bowling analysis
Anchor
A batsman who remains in for a long time, scoring at a moderate strike rate whilst preserving their wicket by avoiding risky shots. A less defensive and more flexible version of a blocker. Top- or middle-order batsmen may adopt an anchor role if the other batsmen experience a batting collapse.
Appeal
A bowler or fielder asking the umpire to dismiss the batsman, usually by shouting howzat (how's that?). Variations include howzee (how's he?), or simply turning to the umpire and cheering. The umpire cannot give a batsman out unless the fielding side appeals, even if the criteria for a dismissal have otherwise been met. Batsmen who are obviously out (e.g. bowled or indisputably caught) will normally walk from the field without waiting for an appeal.
Approach
1. The motion of the bowler before bowling the ball; also known as the run-up.
2. The ground a bowler runs on during their run up; e.g.: "Play was delayed because the bowlers' approaches were slippery."
Arm ball
A variation bowled by a finger spinner, which appears to be their stock ball but does not spin. Such a delivery does not turn when bouncing but travels straight on, i.e. following the line of the bowler's arm, hence the name. An arm ball might also swing.
Around the wicket (or round the wicket)
A right-handed bowler passing to the right of the non-striker's stumps in their run-up, and vice versa for a left-handed bowler. The opposite of over the wicket.
The Ashes
The trophy for the England v Australia Test match series. The Ashes originated as a result of a satirical obituary published in a British newspaper, The Sporting Times, in 1882 after a match at The Oval in which Australia beat England on an English ground for the first time. The obituary stated that English cricket had died, and the body would be cremated and the ashes taken to Australia. The English press dubbed the next English tour to Australia (1882–83) as the quest to regain The Ashes. During that tour a small terracotta urn was presented to England captain Ivo Bligh by a group of Melbourne women. The urn is reputed to contain the ashes of one or two bails.
Asking rate
see required run rate
Attacking field
A fielding configuration in which more fielders are close in to the pitch so as to take catches more readily, at the risk of allowing more runs to be scored.
Attacking shot
An aggressive or strong hit by the batsman designed to score runs.
Audi
Two consecutive pairs, or four consecutive ducks. The term alludes to Audi, a German car manufacturer, whose logo is four linked rings.
Average
1. see batting average
2. see bowling average
Away swing
see outswing
B
Back foot
1. When batting, the foot that is closest to the stumps. For a right-hand batsmans stance, the back foot is the right foot; for a left-hand batsman it is the left foot.
2. When bowling, the foot which contacts the ground before the front foot, i.e. the back foot makes the second-to-last contact before the ball is released. Usually the back foot is also the bowling foot, unless the bowler's action is off the wrong foot.
Back foot contact
The point in a bowling action when the back foot lands on the ground, just before release of the ball.
Back foot shot
A shot played with the batsmans weight on their back foot. Most commonly used when aiming behind square.
Back spin
(Also under-spin.) A delivery with a backward spin, so that after pitching the ball immediately slows down, or bounces lower and skids on to the batsman.
Backing up
1. The non-striking batsman leaving their crease during the bowler's action, before the ball is released. This shortens the distance the non-striker will need to cover to score a run, if the striker completes a shot. Backing up too far or too early risks a run out, either by one of the fielders, or by the bowler in a mankad.
2. A fielder placing themselves on the far side of the wicket from a team-mate who is throwing the ball at the stumps to attempt a run out. The fielder who is backing up can then recover the ball if the throw misses the stumps, thereby preventing overthrows.
Backlift
Lifting the bat in preparation to hitting the ball.
Badger
A particularly enthusiastic cricketer, someone with an extreme love of the game.
bad light
In a match played during the day, "bad light" refers to the umpires taking the players from the field because the ambient light has dimmed to the point that the ball has become difficult to see. Done for both fairness and the safety of the batsmen. Often shortened simply to light. See also offer the light.
Baggy green
A cricket cap of myrtle green colour, which has been worn by Australian Test cricketers since around 1900. The cap is a symbol of Australian cricket, and the term is strongly associated with national cricketing pride.
Bail
One of the two small pieces of wood that lie on top of the stumps to form the wicket.
Ball
1. The spherical object which the bowler propels towards the batsman, who may attempt to hit it with the bat. Constructed of leather stitched around a cork core. A red ball is used in timed matches (or a pink ball for day/night cricket), whilst a white ball is used in limited overs cricket.
2. A single delivery. Each over contains six (legal) balls.
Ball tampering
Illegally modifying the condition of the ball, usually by a fielder to facilitate swing bowling. Ball tampering is a form of cheating, so accusations are often controversial.
Ball tracking
A computer vision system that determines the location of the ball, tracks its motion, and predicts its future trajectory. Used by the third umpire in the decision review system to assess LBW appeals, and by coaches or commentators to analyse player performances. Common brands include Hawk-Eye and Eagle-Eye, which are sometimes used as genericised trademarks for the concept.
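For illustration, a minimal sketch of the core idea, not any vendor's actual algorithm: fit a simple motion model to a handful of observed ball positions (here, after the bounce, so a single smooth model applies) and extrapolate to the plane of the stumps. All sample times, positions and the coordinate convention below are invented for the example.
import numpy as np

# Hypothetical observed positions (metres) at sampled times (seconds).
t = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
x = np.array([0.00, 0.05, 0.11, 0.16, 0.22])   # lateral movement (swing/seam)
y = np.array([0.90, 0.86, 0.81, 0.75, 0.68])   # height, falling under gravity
z = np.array([4.00, 3.20, 2.40, 1.60, 0.80])   # distance remaining to the stumps

fx = np.polyfit(t, x, 2)   # quadratic: allows lateral curve
fy = np.polyfit(t, y, 2)   # quadratic: gravity
fz = np.polyfit(t, z, 1)   # linear: near-constant speed down the pitch

t_hit = -fz[1] / fz[0]     # time at which z = 0, i.e. the stumps plane
print("predicted (x, y) at the stumps:",
      np.polyval(fx, t_hit), np.polyval(fy, t_hit))
A production system such as Hawk-Eye triangulates positions from multiple calibrated cameras and uses far more sophisticated models, including the bounce off the pitch, but the extrapolation step is conceptually the same.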
Bang (it) in
To bowl a delivery on a shorter length with additional speed and force. The bowler is said to be "bending their back" when banging it in.
Bat
The wooden implement which the batsman uses to hit the ball. Usually constructed in two pieces, the rectangular-sectioned blade and cylindrical handle, which are joined at the splice.
Bat-pad
A fielder placed close to the batsman on the leg side to catch the ball if it hits the bat and pad (in either order) and rises to a catchable height. Also the basis of a defence against being given out lbw: that the ball may have hit the bat first, however indiscernibly.
Batsman
(also batting player, batter, bat, or batswoman) A player on the batting side, or a player whose speciality is batting, or one of the two members of the batting side who are currently at the crease. The Laws of Cricket specify that the term batsman applies regardless of the player's gender; however some players and commentators prefer to use batting player, batter or batswoman (when appropriate) instead.
Batting
The act and skill of defending ones wicket and scoring runs.
Batting average
The average number of runs scored per innings by a batsman, calculated by dividing the batsman's total runs scored by the number of times the batsman was out.
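As a sketch, the calculation can be written out directly; note that innings in which the batsman finished not out do not count as dismissals. The function name and figures below are purely illustrative.
def batting_average(total_runs, innings, not_outs):
    """Runs scored per completed innings (i.e. per dismissal)."""
    dismissals = innings - not_outs
    if dismissals == 0:
        return None  # undefined; conventionally shown as '-' on scorecards
    return total_runs / dismissals

print(batting_average(993, 13, 3))  # 993 runs, 10 dismissals -> 99.3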
Batting collapse
When several batsmen are dismissed in rapid succession for very few runs. The terms top order collapse or middle order collapse may refer to batting collapses in a specific part of the batting order.
Batting for a draw
Defensive batting in a timed match by a team with little chance of victory, who are instead attempting to salvage a draw. The batsmen seek to survive as many balls as possible before losing their wicket, without attempting to score many runs and avoiding aggressive shots. Sometimes regarded as boring to watch, and sometimes as producing tense finishes.
Batting order
The order in which the batsmen bat, from the openers, through the top order and middle order to the lower order.
Beach cricket
An informal form of the game played on beaches, particularly in Australia, New Zealand, Sri Lanka and cricket-playing Caribbean countries.
Beamer
A delivery that reaches the batsman above waist height without bouncing. This is illegal and an automatic no-ball. Further sanctions are applied to repeat offences, though the exact rules have changed several times. Since 2019, if the umpire believes that the beamer was dangerous and risked injury to the batsman, they issue a first and final warning to the bowler; a second dangerous delivery results in the bowler being banned from bowling for the remainder of that innings.
Beat the bat
When a batsman narrowly avoids touching the ball with the edge of their bat, through good fortune rather than skill. Considered a moral victory for the bowler. The batsman is said to have been beaten. In some cases, this may be expanded to "beaten all ends up".
Beehive
A diagram showing where a number of balls, usually from a particular bowler, have passed the batsman. Compare pitch map.
Beer match
In club cricket, where the scheduled game ends early, a friendly match concocted to fill in time, originally where licensing hours would have prevented the teams from retiring to the pub, but also for love of the game. Often played with an unusual format, such as "reverse batting order", "every outfielder must bowl", or "bats retire at 25".
Belter
A belter of a pitch is a pitch offering advantage to the batsman.
Bend the back
Of a pace bowler, to put in extra effort to extract extra speed or bounce.
Benefit season
A series of fundraising events to reward a long-serving player, typically those who have played over a decade for a single county cricket team, shortly before the player retires. Similar in concept to testimonial matches played in other sports.
Best bowling
The bowling analysis with the most wickets taken in an innings; the fewest runs conceded are used as a tie-breaker. This can be used to compare different bowlers within a single match, or to highlight the best performance by an individual over an extended period, such as a season or their whole career. The equivalent for batsmen is the high score.
Biffer
slang term for an attacking batsman. A biffer is the opposite of a blocker, a defensive player. In earlier times (particularly pre-World War II) cricketers were either amateur (Gentlemen) or professional (Players). Typically, but not universally, amateurs would be "biffers" and professionals "blockers". The word derives from the slightly archaic transitive verb "biff", which means "hit". Today, biffers tend to be known as big hitters.
Bite
the turn a spin bowler is able to produce on a pitch.
Block
A defensive shot, intending to stop the ball safely without attempting to score runs. The most common form of block is the forward defensive.
Blocker
slang term for a defensive or slow-scoring batsman, the opposite of biffer. In earlier times (particularly pre-World War II) cricketers were either professional or amateur. Typically, but not universally, amateurs would be "biffers" and professionals "blockers". The word derives from the natural tendency of such batsmen to "block" each delivery, rather than try to score runs. Such players tend to have low strike rates, but in many cases high averages due to their lack of aggressive shots giving bowlers less chances to dismiss them. They are typically less effective in limited overs cricket where quick scoring is often needed, though some are able to adapt their game to score quicker in the shorter formats of the game.
Block hole
the gap between the bottom of the bat and the batsman's toes. This area is the target for a yorker, as it is difficult to block unless the batsman is expecting it.
Blob
See duck
Bodyline (or fast leg theory)
a historical tactic involving fast bowling aimed at the batsman's body with numerous close fielders placed on the leg side. Bodyline was developed in the early 1930s and used by England (who called it "fast leg theory") during the 1932–33 Ashes Tour. The tactic was highly controversial, leading to accusations of unsporting behaviour and of deliberately attempting to injure or intimidate the batsmen. Changes to the rules were brought in to outlaw dangerous deliveries and to limit the number of leg-side fielders and bouncers, making bodyline obsolete.
Boot Hill
Another term for short leg, the least liked and most dangerous of the fielding positions. The term is derived from this potential for getting hit by the ball, being a reference to the Boot Hills of the American West, graveyards for those who "died with their boots on". Due to its unpopularity the position has traditionally been taken by the junior professional, although this is no longer necessarily the case. According to former England fielding coach Richard Halsall, a good short leg has to be physically brave, an outstanding catcher, and preferably a good reader of the batsman, and is indeed normally a batsman themselves.
Bosie or bosey
See googly
Bottom hand
The hand of the batsman that is closest to the blade of the bat. Shots played with the bottom hand often are hit in the air.
Bouncer
a fast short pitched delivery that rises up near the batsmans head.
Bounce out
to get a batsman out with the aid of bounce, often by getting them caught out
Boundary
1. the perimeter of the field
2. a rope that demarcates that perimeter
3. a shot which reaches (or passes over) the boundary rope. If the ball touches the ground before reaching the boundary, the shot scores four runs. If it does not touch the ground before reaching the rope, the shot scores six runs.
Bowled
a mode of a batsmans dismissal. Occurs when a delivery hits the stumps and removes at least one bail.
Bowled around the legs
out bowled by a delivery which passes the batsman on the leg side before hitting the wicket. The term implies that the batsman has moved too far over to their off side.
Bowled out
1. see all out.
2. Sometimes incorrectly used in place of bowled.
Bowler
1. The player who is currently bowling.
2. A player who specialises in bowling.
Bowling
the act of delivering the cricket ball to the batsman.
Bowl-out
A method for breaking a tie which was used in some limited overs matches in the late 20th and early 21st centuries; most competitions have replaced it with the super over. Five players from each team bowled at an undefended wicket, with the team with the most hits winning. Sudden death was used if the number was equal. The concept was analogous to the penalty shootout used in other sports.
Bowling action or action
The set of movements a bowler goes through in a delivery.
Bowling analysis or bowling figures
A statistical summary of a bowlers performance. Two formats are common: either the numbers of overs–maidens–runs conceded–wickets, or the shorter numbers of wickets/runs e.g. 12-2-46-3 or 3/46, pronounced three for forty-six.
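A small sketch of rendering both conventional notations (the function name and example figures are illustrative only):
def bowling_figures(overs, maidens, runs, wickets, short=False):
    """Render a bowling analysis in either conventional notation."""
    if short:
        return f"{wickets}/{runs}"                 # e.g. '3/46'
    return f"{overs}-{maidens}-{runs}-{wickets}"   # e.g. '12-2-46-3'

print(bowling_figures(12, 2, 46, 3))               # 12-2-46-3
print(bowling_figures(12, 2, 46, 3, short=True))   # 3/46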
Bowling at the death
See death overs
Bowling average
The number of runs conceded by a bowler, divided by the number of wickets they have taken. One of several statistics used to compare the performances of bowlers over extended periods; lower is better.
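Continuing the illustrative figures above, a minimal sketch of the calculation:
def bowling_average(runs_conceded, wickets):
    """Runs conceded per wicket taken; lower is better."""
    if wickets == 0:
        return None  # undefined until the bowler has taken a wicket
    return runs_conceded / wickets

print(bowling_average(46, 3))  # ~15.33 for figures of 3/46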
Box
a protective item shaped like a half-shell and inserted into the front pouch of a jockstrap with cup pocket, worn underneath a player's (particularly a batsman's) trousers to protect their genitalia from the hard cricket ball. Also known as an abdominal protector, Hector protector, ball box, protector, athletic cup, protective cup or cup.
Brace
two wickets taken off two consecutive deliveries.
Break
a suffix denoting the ball changing direction after pitching caused by the bowlers spin or cut. For example, a leg spinner will deliver leg breaks (moving from leg to off).
Buffet bowling
see cafeteria bowling.
Bump ball
A ball played off the bat immediately into the ground and then caught by a fielder. Often this has the appearance of being a clean catch directly off the bat.
Bumper
old-fashioned name for a bouncer.
Bunny
see rabbit.
Bunsen
A pitch on which spin bowlers can turn the ball prodigiously. From the rhyming slang: Bunsen Burner meaning Turner.
Buzzer
see overthrows
Bye
extras scored in the same way as normal runs when the ball does not make contact with any part of the batsman (bat, protective gear, body parts).
C
Cafeteria bowling
(also buffet bowling) Poor quality bowling which is easy to hit, allowing the batsmen to help themselves to runs, analogous to a self-service cafeteria or buffet. Occasionally employed deliberately as declaration bowling.
Call
1. A shouted announcement by a fielder while the ball is in the air, usually the word "mine", indicating that they are about to attempt a catch. Considered good practice, to avoid two fielders colliding if both attempt to take the same catch.
2. A shouted, normally single word, instruction from a batsman to their partner, indicating whether to attempt a run. Only one batsman makes a call, to avoid confusion that could lead to a run out. Responsibility for making the call is held by whichever batsman has the better view of the ball and can see whether any fielders are close to it. This is most often the striker for a shot in front of square, and the non-striker for a shot behind square. Common possible calls include yes, no, wait, push, one, two or three.
Cap
1. A type of soft material hat, traditionally worn by fielders.
2. An appearance for a national team.
Captain
(or skipper) The player who has been appointed leader of their team. The captain decides which bowler will deliver each over, the placement of the fielders, the batting order, when to use the decision review system, whether and when to declare, and many other aspects of the game. The captain's tactical abilities can have a major impact on the outcome of a match. The captain often has substantial responsibilities between games as well, such as in team selection or representing the team in the media.
Carrom ball
a style of slow bowling delivery in which the ball is released by flicking the ball between the thumb and a bent middle finger in order to impart spin
Carry
1. A shot has carried if the ball is hit in the air and reaches a fielder without touching the ground (regardless of whether the fielder then successfully takes a catch). If the ball touches the ground before reaching the fielder, it has not carried. Slow-motion television replays are sometimes required to determine whether the ball carried or bounced a few inches in front of the fielder.
2. The distance which a delivery bounces through to the wicketkeeper if the batsman does not play a shot. The wicketkeeper must position themselves at an appropriate distance behind the stumps for the carry, which depends on the hardness of the pitch, height of the bowler, the length they are bowling etc.
Carry the bat
(or carry their bat) An opener batting through an entire innings, remaining not out at the conclusion. The rest of the team must be all out (not any other end of innings scenario). Regarded as an excellent performance by the opening batsman.
Cartwheel
The motion of a stump when hit by a delivery hard enough that it rips out of the ground and flips end-over-end before landing, like a gymnastic cartwheel.
Castled
Bowled by a yorker.
Catch
A fielder gaining complete control of the ball, in one or both hands, before it touches the ground. The wicketkeeper may use gloves, but no other fielder may use any other equipment or clothing to assist in taking the catch. A fair catch leads to the batsman being caught.
Caught
a method of dismissal in which one of the fielders, including the bowler, catches the ball before it touches the ground after the batsman hits it (with either the bat or a glove that is in contact with the bat). This is the most common method of getting a batsman out in professional cricket.
Caught and bowled
Dismissed by a catch taken by the bowler. The term originates from the way such dismissals are recorded on a scorecard (c & b); the alternative "bowled and caught", referring to the sequence of events in the strict chronological order, is almost never used.
Caught behind
Dismissed by a catch taken by the wicket-keeper. Usually from an edge or glove, but sometimes from a skier.
Centurion
A player who has scored a century.
Century
1. A score of at least 100 runs by a single batsman; considered a substantial personal achievement.
2. (rare) An ironic term for a bowler who has conceded over 100 runs in an innings.
Chance
An opportunity to dismiss a batsman that isn't taken by the fielding side. Common examples include a shot that carries to a fielder who then drops the catch; a possible run out in which the fielder's throw misses the stumps; or a fumble by a wicketkeeper during an attempted stumping. If a batsman doesn't provide any chances to the opponent, it is referred to as a "chanceless innings" or "chanceless knock". Some commentators refer to a very difficult opportunity as a "half chance".
Charge
A batsman moving quickly towards the bowler while the delivery is in flight, to gain momentum for a powerful shot (often a slog). A more aggressive version of a dance down, with even higher risk, so usually only attempted in the death overs.
Check upstairs
Invoke the Umpire Decision Review System, especially by the umpires themselves. The third umpire is usually sat high in the stadium, and replays are often displayed on elevated screens, hence the expression.
Cherry
Another name for the ball, typically Australian. Alternatively, can refer to the red mark made by the ball on the bat.
Chest on
(also front on or square on)
1. A bowler whose chest and hips are aligned towards the batsman at the instant of back foot contact.
2. A batsman whose hips and shoulders face towards the bowler at the moment of release.
Chin music
The use of a series of bouncers from pace bowlers to intimidate a batsman. Term taken from baseball.
Chinaman
see left-arm unorthodox spin
Chinese cut
see French cut
Chop on
to be bowled by the ball deflecting off the inside or bottom edge of the bat and onto the stumps while playing a shot
Chuck
of a bowler, to throw the ball by bending the elbow instead of bowling it with a straight arm; also chucker: a bowler who chucks; and chucking: such an illegal bowling action. All are considered offensive terms as they imply cheating.
Circle
a painted circle (or ellipse), centred in the middle of the pitch, of radius 30 yards (27 m), marked on the field. The circle separates the infield from the outfield, used in policing the fielding regulations in certain one-day versions of the game. The exact nature of the restrictions varies depending on the type of game: see limited overs cricket, Twenty20 and powerplay (cricket).
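Strictly, modern playing conditions define the area as two semicircles of radius 30 yards centred on the middle stump at each end of the pitch, joined by straight lines, which is why it can read as an ellipse. A hedged geometric sketch of that shape (the coordinate convention and function name are assumptions of the example; distances are in yards):
import math

PITCH_LENGTH = 22.0   # yards between the two sets of stumps
RADIUS = 30.0         # yards, per the fielding restrictions

def inside_fielding_circle(x, y):
    """True if (x, y) lies inside the restriction area.
    Origin is mid-pitch; y runs along the pitch, x square of it."""
    half = PITCH_LENGTH / 2
    if abs(y) <= half:                   # level with the pitch: straight edges
        return abs(x) <= RADIUS
    centre_y = half if y > 0 else -half  # nearer set of stumps
    return math.hypot(x, y - centre_y) <= RADIUS

print(inside_fielding_circle(25, 0))    # True: 25 yd square of the wicket
print(inside_fielding_circle(20, 35))   # False: deep, behind the bowler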
Clean bowled
bowled, without the ball first hitting the bat or pad.
Club
1. A group of cricketers, from which one or more teams are formed.
2. To strike the ball gracelessly, but with great force. A weaker form of slog.
Club cricket
(grade cricket in Australia) Formally organised cricket played by skilled amateurs. Considered a higher standard of play than purely recreational village cricket, but still lower than professional forms of the sport.
Collapse
see batting collapse
Come to the crease
A phrase used to indicate a batsman walking onto the playing arena and arriving at the cricket pitch in the middle of the ground to begin batting.
Competitive Womens Cricket
The official designation given to the female equivalents to First-class, List A and T20 cricket.
Compulsory close (cc)
When a team's innings is closed and completed by a means other than being all out or declaring. This particularly applies in grade cricket competitions where each innings of a two-innings match is also subject to a time or overs limit. The letters (cc) appear in the team's score to reflect this; e.g. 266–7(cc).
Conventional swing
see swing
Contrived circumstances
Unusual tactics which are intended to achieve a legitimate outcome, but result in wild statistical abnormalities; for example, deliberately bowling extremely poorly to encourage a quick declaration. Wisden excludes records set in contrived circumstances from its official lists.
Cordon
(or slips cordon) Collective term for multiple fielders in the slips. Sometimes also includes gully.
Corridor of uncertainty
a good line. The corridor of uncertainty is a notional narrow area on and just outside a batsman's off stump. If a delivery is in the corridor, it is difficult for a batsman to decide whether to leave the ball, play at it defensively, or play an attacking shot. The term was popularised by former England batsman and latterly commentator Geoffrey Boycott.
County cricket
the highest level of domestic cricket in England and Wales.
Cover
(or the covers)
1. A fielding position on the off side, forward of point and squarer than mid-off. A cover fielder situated closer to point than usual is known as cover point, whilst one which is closer to mid-off is known as extra cover. The plural term the covers refers to all these variants collectively.
2. A waterproof assembly or tarpaulin used by the ground staff to protect parts of the field from rain (usually the pitch, square and run ups).
Cow corner
the area of the field (roughly) between deep mid-wicket and wide long-on. So called because few legitimate shots are deliberately aimed to this part of the field, so fielders are rarely placed there – leading to the concept that cows could happily graze undisturbed in that area.
Cow shot
a hard shot, usually in the air, across the line of a full-pitched ball, aiming to hit the ball over the boundary at cow corner, with very little regard to proper accepted technique. A type of slog.
Crease
a straight whitewash line painted on the pitch near the stumps. At each end of the pitch there are four creases: the popping crease, bowling crease and two return creases. The word crease without further qualification refers either to the popping crease or to the region beyond it. The creases are used to adjudge whether a run has been completed, a batsman is run out, the delivery is a no-ball or wide etc.
Cricketer
a person who plays cricket.
Cross-bat shot
a conventional shot played with the bat parallel with the ground, such as a cut or a pull. Also known as a horizontal-bat shot, it is generally considered a potentially risky option for batsmen.
Cross the rope
To cross over the boundary rope. Often said of batsmen who are beginning or resuming an innings, players entering or exiting the field to begin a session of play, fielders attempting catches, and the ball when the batter scores a boundary
Cut
a shot played square on the off side to a short-pitched delivery wide of off stump. So called because the batsman makes a "cutting" motion as they play the shot.
Cutter
a break delivery bowled by a fast or medium-pace bowler with similar action to a spin bowler, but at a faster pace. It is usually used in an effort to surprise the batsman, although some medium-pace bowlers use the cutter as their stock (main) delivery.
D
Daddy hundred
Informal term for a batsmans individual score that is substantially higher than a century. Some coaches set a requirement of at least 150 runs, but that value is not universal.
Daisy cutter
1. A low-trajectory delivery in which the ball bounces twice, or rolls along the ground, before reaching the batsman. A type of no ball.
2. A shot in which the ball travels on a low trajectory, bouncing just above the grass. This retains more speed than a rolling ball, while being very difficult to catch.
Dance down (the pitch)
A batsman using footwork to move closer to the bowler while the delivery is in flight, typically with a skipping motion, changing the effective length. A near-synonym of charge, but less aggressive. Considered risky, because if the batsman misses their shot they are likely to be stumped or run out.
Day/night cricket
a cricket match scheduled to begin during day time and end after sunset, using floodlights. Used in some one day internationals since 1979, and Test matches since 2015.
Dead ball
1. the state of play between deliveries, during which batsmen may not score runs or be given out. There are numerous situations where play becomes dead, see dead ball.
2. a signal given by the umpire to indicate a state of dead ball. Used only if the state is not obvious to the players.
Dead bat
A defensive shot with the bat held in a loose grip and/or angled towards the ground. If the ball strikes the bat it loses momentum and falls quickly to the ground. Used to reduce the chance of being caught off an edge.
Dead rubber
A match played in a series after one side has gained an unassailable lead, e.g. has already won 3 matches in a 5-match series, or in a tournament if both teams have already qualified (or failed to qualify) for the subsequent rounds. The term is used disparagingly to imply that the match is of lesser importance, but some regard the concept as outdated.
Death bowler
A bowler who specialises in delivering the death overs at the end of a limited overs match. Because batsmen usually play very aggressively in this period, a death bowler must be particularly skilled at variations which are difficult to score from and/or unexpected by the batsman, such as yorkers and slower balls.
Death overs
(or slog overs)
The final few overs of an innings in a limited overs match. Because only runs matter in determining the result, a batting side with wickets in hand will bat very aggressively in this period, seeking to hit every delivery for a boundary, even if it risks a dismissal. Typically the batting strike rate is highest in this period, and the bowlers' economy rates are poorest.
Death rattle
The sound of the ball breaking the wicket when a batsman is out bowled.
Debenture
a certificate of loan agreement, used by some professional clubs to raise funding. Typically, investors loan money to the club for a fixed period in return for guaranteed free (or reduced price) match tickets during that period.
Decision review system (DRS)
see Umpire Decision Review System.
Declaration
the act of a captain voluntarily bringing their side's innings to a close, in the belief that their score is now great enough to prevent defeat. Occurs almost exclusively in timed forms of cricket where a draw is a possible result (such as first-class cricket), in order that the side declaring have enough time to bowl the opposition out and therefore win.
Declaration bowling
deliberately poor bowling (specifically full tosses and long hops) from the fielding team with the intention of allowing the batsmen to score runs quickly and thus encourage the opposing captain to declare, thereby giving both teams a reasonable chance of winning in a timed match which would otherwise have been inevitably drawn. Non-specialist bowlers will often be used in this context, largely to preserve the dignity (and the bowling averages) of the specialist bowlers. This may occasionally be seen in domestic first-class cricket, where the points systems used often show little difference between a draw and a loss; but it would very seldom be seen in Test cricket, where the consequence of a loss in the context of a series is greater. Cf. cafeteria bowling.
Deep
(of a fielding position) further away from the batsman and closer to the boundary.
Defensive field
A fielding configuration in which fielders are spread around the field so as to more readily stop hit balls and reduce the number of runs (particularly boundaries) being scored by batsmen, at the cost of fewer opportunities to take catches and dismiss batsmen.
Delivery
the act of bowling the ball; also, the quality of the way in which a ball is bowled, in either intent or effect.
Devil's number
(or dreaded number)
a score of 87, regarded as unlucky in Australian cricket. Superstition holds that batsmen are more likely to be dismissed for 87, because it is 13 runs short of a century (see triskaidekaphobia).
Diamond duck
regional usage varies, but either a dismissal (usually run out) without facing a delivery, or a dismissal (for zero) off the first ball of a team's innings (the less common term platinum duck is used interchangeably).
Dibbly dobbly
(or dibbly dobbler)
1. a medium pace delivery, neither fast nor slow, with no special variation.
2. a bowler who uses this type of delivery as their stock ball.
3. a delivery that is easy to hit, but difficult to score quickly from.
Dilscoop
see paddle scoop
Dink
a deliberately gentle shot played by a batsman, without attempted power, in order to guide the ball into an unguarded area of the field, largely using the ball's own momentum from the delivery. Often effective against fast bowlers.
Dinner
the second of the two intervals taken during a full day's play, specifically during a day/night Test.
Dipper
a delivery bowled which curves into or away from the batsman before pitching.
Dismissal
Take the wicket of one of the batsmen, so they must cease batting for the remainder of the innings. A batsman who has been dismissed is referred to as out; if ten members of the batting side are dismissed (or retire), the team are all out. There are ten possible methods, but the vast majority of dismissals are one of five types: caught, bowled, leg before wicket, run out and stumped.
Direct hit
a throw from a fieldsman that directly strikes and puts down a wicket (without first being caught by a fieldsman standing at the stumps). Occurs when attempting a run out.
D/L
see Duckworth-Lewis method
DLS
see Duckworth-Lewis method
Dobbing
a synonym for Mankad, specifically used in Lancashire and some neighbouring counties
Doctored pitch
a cricket pitch which has been intentionally prepared in a specific manner in order to gain a competitive advantage for the home team, such as creating a dry, crumbly surface that particularly favours the home team's spin bowlers, or a firm, grassy surface which would favour fast bowlers, or any surface which exacerbates perceived weaknesses in the visiting team's batting. Whilst widely considered unsporting, the practice is comparatively common and is (within reason) legal, but the term is always used pejoratively.
Dolly
a very easy catch.
Donkey drop
A ball with a very high trajectory prior to bouncing.
Doosra
A delivery by a finger spin bowler which turns in the opposite direction to the stock delivery. The word is Hindustani for "second" or "other". Some bowlers can perform the required bowling action legally, whilst others cannot do so without throwing, which is often ascribed to physiological differences.
Dorothy
rhyming slang for six, referring to the author Dorothy Dix.
Dot ball
a delivery bowled without any runs scored off it, so called because it is recorded in the score book with a single dot.
Double
normally the scoring of 1,000 runs and the taking of 100 wickets in the same season.
Down the pitch
(also down the wicket)
referring to the motion of a batsman towards the bowler prior to or during the delivery, made in the hope of turning a good length ball into a half-volley.
Drag
before the current "front foot rule" was introduced, bowlers had to release the ball with the rear foot behind the bowling crease; there were instances of bowlers (especially Gordon Rorke) managing to "drag" the rear foot forward before release and not being no-balled.
Draw
1. a result in timed matches where the team batting last are not all out, but fail to exceed their opponents total. Not to be confused with a tie, in which the side batting last is all out or run out of overs with the scores level.
2. an antiquated stroke that has fallen into disuse; it was originally a deliberate shot that resembled the French cut, the ball being played between one's own legs.
Draw stumps
Declare the game, or a day's play, over; a reference to the umpire withdrawing the stumps from the ground.
Drift
the slight lateral curved-path movement that a spinner extracts while the ball is in flight. Considered very good bowling as it increases the difficulty for a batsman to correctly judge the precise movement and path of the ball.
Drinks
an agreed short break in play, generally taken in the middle of a session, when refreshments are brought out to the players and umpires by the twelfth men of each side. Drinks breaks do not always need to take place, but they are usual in test matches, particularly in hot countries.
Drinks waiter
a jocular term for the twelfth man, referring to his job of bringing out drinks.
Drive
a shot played with a straight bat on the front foot, aiming anywhere between point on the off side and mid-wicket on the leg side. The drive is the most common shot for scoring runs, and often considered the most aesthetically pleasing. Several sub-types are recognised depending on direction, including the square drive, cover drive, off drive, straight drive and on drive.
Drop
1. the accidental "dropping" of a ball that was initially caught by a fielder, thus denying the dismissal of the batsman; when such an event occurs, the batsman is said to have been "dropped".
2. the number of dismissals which occur in a team's innings before a given batsman goes in to bat; a batsman batting at first drop is batting at number three in the batting order, going in after one wicket has fallen.
3. the act of excluding a player from selection in a squad even when they were included in the most recent prior selection.
Dropper
Obsolete term for a delivery by lob bowling, or a bowler who employed this style. Common in the 19th century, this form of bowling is now usually illegal.
Drop-in pitch
a temporary pitch that is cultivated off-site from the field which also allows other sports to share the use of the field with less chance of injury to the players.
DRS
see Umpire Decision Review System
Duck
a batsman's score of nought (zero) when dismissed, as in "he was out for a duck." It can refer to a score of nought not out during an innings, as in "she hasn't got off her duck yet", but never refers to a completed innings score of nought not out. Originally called a "duck's egg" because of the "0" shape in the scorebook. (See Golden, Diamond, and Platinum duck.)
Duck under delivery
a short pitched delivery that appears to be a bouncer, making the striker duck to avoid being hit; but instead of bouncing high, it has a low bounce which causes the batsman to be dismissed LBW, or occasionally bowled.
Duckworth-Lewis method (D/L)
(or Duckworth-Lewis-Stern method, DLS)
a rain rule used if a limited overs match is curtailed by bad weather. If the match cannot be completed, the Duckworth-Lewis method determines the winner, or if the match can be continued but needs to be shortened, it calculates a revised target for the team batting second. The mathematical formula is based on analysis of past completed matches. Used in all international matches since 1999, and in most domestic leagues.
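The core of the standard-edition calculation, when the side batting second has its available resources reduced, is a simple scaling: the chasing side's par score is the first side's score multiplied by the ratio of the two sides' resources, and the target is one run more. The following is a minimal Python sketch under that assumption only; the function name and the resource figures are illustrative, and real calculations use the published Duckworth-Lewis resource tables rather than invented percentages.

```python
# Illustrative sketch of the core Duckworth-Lewis target calculation.
# Real resource percentages come from the published D/L tables; the
# values used below are invented purely for demonstration.

def dl_target(team1_score: int, team1_resources: float, team2_resources: float) -> int:
    """Revised target for the team batting second when its available
    resources (a combination of overs and wickets remaining) are
    reduced, e.g. by rain. Resources are percentages of a full innings.
    """
    assert team2_resources <= team1_resources  # simplified case only
    par_score = team1_score * (team2_resources / team1_resources)
    return int(par_score) + 1  # target is one run more than the par score

# Hypothetical example: team 1 scored 250 using 100% of its resources;
# rain leaves team 2 with only 75% of its resources.
print(dl_target(250, 100.0, 75.0))  # -> 188
```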
Dugout
term taken from association football, a sheltered place just outside the boundary ropes where a team's non-active players and staff sit. Dugouts are a common feature of Twenty20 matches, but pavilions are usually used for any longer forms of the game.
E
Eagle-Eye
see ball tracking
Economical
A bowler who concedes very few runs from their over(s), i.e. has a low economy rate. The opposite of expensive.
Economy rate
The average number of runs conceded per over delivered by an individual bowler. Lower values are better.
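Since the calculation is simple arithmetic, a minimal sketch may help; the function name and figures below are illustrative, not from any cricket library. It takes balls rather than overs to sidestep the cricket convention of writing partial overs as e.g. "10.3".

```python
# Minimal sketch of the economy rate calculation described above.

def economy_rate(runs_conceded: int, balls_bowled: int) -> float:
    """Runs conceded per (six-ball) over."""
    overs = balls_bowled / 6
    return runs_conceded / overs

# A bowler conceding 45 runs from 10 overs (60 balls) has an economy of 4.5.
print(economy_rate(45, 60))  # -> 4.5
```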
Edge (or snick or nick)
A slight deviation of the ball off the edge of the bat. Top, bottom, inside and outside edges denote the four edges of the bat. The notional four edges are due to the bat being either vertical (inside/outside edge), or horizontal (top/bottom edge). See also leading edge.
Eleven
another name for a cricket team, which is made up of eleven players.
End
An area of the ground directly behind one of the stumps, used to designate what end a bowler is bowling from (e.g. the pavilion end). The bowlers take turns delivering alternating overs from the two ends of the pitch.
End of an innings
The batting side's innings ends when a wicket falls or a batsman retires and the batting side has no further not out batsmen able to bat, or when the batting side's captain declares.
Expensive
a bowler who concedes a large number of runs from their over(s), i.e. has a high economy rate. The opposite of economical.
Express pace
Fast bowling at speeds above 150 km/h (93 mph).
Extra
(called a sundry in Australia) A run awarded to the batting team that is not credited to a specific batsman. These are recorded separately on the scorecard. There are five types: byes, leg byes, wides, no-balls and penalties. Wides and no-balls are also recorded as runs conceded in the bowling analysis, the others are not attributed to the bowler.
Extra cover
see cover
F
Fall
a verb used to indicate the dismissal of a batsman, e.g. "The fourth wicket fell for the addition of only three runs".
Fall of wicket
the batting teams score at which a batsman gets out. Often abbreviated to "FOW" in scorecards.
Farm the strike (also shepherd the strike or farm the bowling)
of a batsman, contrive to receive the majority of the balls bowled, usually because they are the more skilled of the two batsmen in facing the bowling style of the bowler. This usually results in the batsman contriving to make an even number of runs (for example, by waiting until certain of hitting a 4 or 6, or by running only 2 when they could have run 3) so that they will again face the bowler. For the same reason, they will try to score a single from the last ball of the over, whereas their weaker partner would not attempt to score from it.
Fast bowling
a style of bowling in which the ball is delivered at high speeds, typically over 90 mph (145 km/h). Fast bowlers also use swing.
Fast leg theory
see bodyline
Feather
a faint edge.
Featherbed
A soft, slow pitch of predictable bounce. Such pitches are considered to be good for batting on, offering little, if any, help for a bowler.
Fence
The boundary. Sometimes used in the phrase find(s) the fence, which is said of a ball that is hit to the boundary.
-fer
a suffix to any number, meaning the number of wickets taken by a team or bowler. (See also fifer/five-fer)
Ferret
see rabbit.
Field
1. (noun) a large grass turf area on which the sport is played, forming part of the wider ground. Typically oval, but a wide variety of other shapes are permitted provided they are at least 130 yards (120 m) in diameter. At (or near) the centre of the field is the pitch, and the edge of the field is the boundary, marked by a boundary rope.
2. (verb) the act of fielding.
Fielder (also, more traditionally, fieldsman)
a player on the fielding side who is neither the bowler nor the wicket-keeper, in particular one who has just fielded the ball.
Fielding
Recovering the ball after each delivery. One of the three major skills of cricket (the others being batting and bowling), all members of the team take part in the fielding. If the batsman has hit the ball in the air, the fielders can attempt a catch, to dismiss the batsman. Otherwise they attempt to prevent the ball crossing the boundary, bring it under control, then return it to the vicinity of the stumps. This is done as quickly as possible, to either stop the batsmen from scoring runs or to effect a run out.
Fill-up game
when a match finished early, a further game was sometimes started to fill the available time and entertain the paying spectators.
Find the gap(s)
to play a shot or series of shots along the ground, in the gaps between fielders. The least risky way of scoring runs quickly, but it requires good technique to be able to do consistently. The opposite, finding the man [at some fielding position], is often used when the ball is hit straight to a fielder through the air and is caught.
Fine
of a position on the field behind the batsman, closer to the line of the pitch (wicket-to-wicket); the opposite of square.
Finger spin
a form of spin bowling in which the ball is made to rotate by the action of the bowler's fingers (contrast with wrist spin). For a right-handed bowler this produces off spin, whereas the same technique by a left-handed bowler produces left arm orthodox spin.
First-class cricket
the senior form of the game; usually county, state or international. First-class matches consist of two innings per side and are usually played over three or more days.
First change
the third bowler used in an innings. As the first bowler to replace either of the opening pair this bowler is the first change that the captain makes to the attack.
First eleven
the best team of (eleven) players at a club, who will be chosen by the selector for the most important or high-profile games. Other players are used if members of the first eleven are unavailable or in less important fixtures. Some large clubs have enough players to operate a hierarchy of teams, in which case the others are known as the second eleven, third eleven etc.
First innings points
In first-class competitions with a league table to determine standings, in addition to points awarded for winning or tying a match, a team is also awarded points for taking a first innings lead, i.e. scoring more than their opponents in the first innings.
Fishing
being tempted into throwing the bat at a wide delivery outside off stump, reaching for it and missing. See also waft.
Five-wicket haul (also five-for, five-fer, fifer, shortened to 5WI or FWI, or colloquially a Michelle, in honour of the actress Michelle Pfeiffer.)
five or more wickets taken by a bowler in an innings, considered a very good performance. The term five-for is an abbreviation of the usual form of writing bowling statistics, e.g., a bowler who takes 5 wickets and concedes 117 runs is said to have figures of "5 for 117" or "5–117". A five-wicket haul is traditionally regarded as a special moment for a bowler, just like a century for a batsman.
Flamingo shot
A type of shot whereby a batter flicks deliveries from outside off-stump through mid-wicket. It is played when the opposition sets a heavy off-side field and then bowls outside off. The shot was invented by Kevin Pietersen, and has also been played by James Taylor and by New Zealand tailender Trent Boult.
Flash
to wield the bat aggressively, often hitting good line and length deliveries indiscriminately.
Flat pitch
a pitch which is advantageous to the batsmen and offers little or no help to the bowlers, due to predictable bounce. Also known as a "flat deck".
Flat hit
an aerial shot hit with significant power by the batsman which travels fast enough to make the ballistic trajectory of the ball appear flat.
Flat throw
a ball thrown by the fielder which is almost parallel to the ground. Considered to be a hallmark of good fielding if the throw is also accurate because flat throws travel at a fast pace.
Flat-track bully
a batsman high in the batting order who is very good only when the pitch is not giving the bowlers much help. Track is Australian slang for the pitch. When the track is said to be flat it is at its easiest for the batsman, but these are the only conditions under which some players can dominate; they don't perform when conditions are tough, or when it really counts.
Flick
a gentle movement of the wrist to move the bat, often associated with shots on the leg side.
Flight
a delivery which is thrown up at a more arched trajectory by a spinner. Considered to be good bowling. Also loop.
Flipper
a leg spin delivery with under-spin, so it bounces lower than normal, invented by Clarrie Grimmett.
Floater
a delivery bowled by a spinner that travels in a highly arched path appearing to float in the air.
Fly slip
a position deeper than the conventional slips, between the slips and third man.
Follow on
a team which bats first in the second innings, after having batted second in the first innings, is said to have followed on. The captain of the team batting first in the first innings may direct the team batting second to follow on if it leads by a certain margin after the first innings; this margin is currently 200 runs in a five-day game, and 150 runs in a three- or four-day game.
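The margins quoted above translate into a one-line check; this is a sketch covering only the five-day and three-/four-day cases mentioned in this entry (the Laws also define smaller margins for shorter matches), and the function name is mine.

```python
# Sketch of the follow-on rule described above; thresholds per the text.

def can_enforce_follow_on(first_innings_lead: int, match_days: int) -> bool:
    """Whether the side batting first may ask the opposition to follow on."""
    margin = 200 if match_days == 5 else 150  # 150 for 3- or 4-day games
    return first_innings_lead >= margin

# A side leading by 220 after the first innings of a five-day Test
# may enforce the follow-on.
print(can_enforce_follow_on(220, 5))  # -> True
```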
Follow through
a bowler's body actions after the release of the ball, which stabilise their body.
Footmarks
On a grass pitch, the bowler creates a rough patch where they land their feet and follow through after delivering the ball. The rough patch can become cratered and becomes more abrasive as the match continues and more players step on it. The abrasive surface means that the ball will increasingly grip more if it lands in the footmarks. Bowlers, particularly spinners, will aim the ball there as it will turn more sharply, and is more likely to get irregular bounce from such areas, making it more difficult for the opposition batsmen.
Footwork
the necessary foot movements that a batsman makes so as to be at a comfortable distance from where the ball has pitched, positioned to hit the ball anywhere they desire, negating any spin or swing that the bowler attempts to extract after bouncing.
Form
The quality of a player's recent performances. A player who is in form has played well in recent games, so is likely to do so again. Those that are out of form have struggled recently. A run of poor form may result in the player being dropped from the team.
Forty-Five (on the one)
An uncommon fielding position akin to a short third-man, roughly halfway between the pitch and the boundary. Also used for a short backward square leg (at 45° behind square defending a single).
Forward defence (or forward defensive)
A common defensive shot, played with the batsman's weight on the front foot and a straight bat held stationary close to the pad. The bat is angled downwards to deflect the ball into the ground in front of the batsman, preventing a catch. Considered a safe way of defending against a delivery which threatens to hit the stumps, but is unlikely to score any runs.
Four
a shot that reaches the boundary after touching the ground, which scores four runs to the batting side.
Four wickets (also 4WI)
(also four-for) four or more wickets taken by a bowler in an innings, considered a good performance. Mostly noted in One Day Internationals; it is a rarer feat in T20 cricket.
Fourth stump
a position or line one stump's width outside the off stump, i.e. where the wicket's fourth stump would be positioned if it existed. Generally refers to the line or pitch of a delivery. A fourth stump line is roughly synonymous with the corridor of uncertainty.
Free hit
a penalty given in some forms of cricket when a bowler bowls a no-ball. The bowler must bowl another delivery, and the batsman cannot be dismissed by the bowler from that delivery. Between the no-ball and the free hit, the fielders may not change positions (unless the batsmen changed ends on the no-ball).
French cricket
an informal form of the game, typically played by children. The term "playing French Cricket" can imply that a batsman has not attempted to move their feet and looks ungainly because of this.
French Cut (also referred to as a Chinese Cut, Surrey Cut, or Harrow Drive)
term for an unintentionally poorly executed shot which results in an inside edge where the ball narrowly misses hitting the stumps. Such unintentional shots can frequently fool the wicket-keeper and often fortuitously result in runs.
Fritz
To be out stumped following a rebound from the wicketkeeper's pads on to the stumps.
Front foot
all senses are opposites of back foot.
1. (of a batsman) in the batting stance, the foot that is closest to the bowler and furthest from the stumps.
2. (of a shot) played with the batsman's weight primarily on that foot.
3. (of a bowler) during the bowling action, the last foot to contact the ground before the ball is released.
Front foot contact
during the bowling action, the position of the bowler when their front foot lands on the ground.
Fruit Salad
when a bowler delivers a different type of delivery each time, rather than bowling at a constant speed, length and angle. Fruit salad bowling is used most commonly in Twenty20 to prevent batsmen from getting comfortable.
(Full) face of the bat
references the front, or flat side, of the bat, particularly where the manufacturer's insignia is written. A batsman is said to have given a bowler (or a delivery they faced) the full face of the bat if they have middled the ball and driven it straight down the wicket, such that the bowler clearly sees the manufacturer's insignia.
Full length
a delivery that pitches closer to the batsman than a ball pitching on a good length, but further away than a half-volley.
Full pint
when a stump is knocked completely out of the ground by a delivery.
Full toss (also full bunger in Australia)
a delivery that reaches the batsman on the full, i.e. without bouncing. Usually considered a bad delivery to bowl as the batsman has a lot of time to see the ball and play an attacking shot. Also, it does not have a chance to change direction off the ground, making it the ultimate crime for a spin or seam bowler.
Furniture
another term for the stumps.
G
Gardening
a batsman prodding at the pitch with their bat between deliveries, ostensibly to flatten a bump in the pitch. May be used to inspect features of a deteriorating pitch (e.g. footmarks), or to calm the batsman's nerves. Sometimes considered to be time-wasting.
Genuine number 11
a consistently poor batsman, rightfully relegated to the bottom of the batting order.
Getting one's eye in
a batsman playing low-risk defensive shots when first in, while they assess the conditions and bowlers, before attempting riskier scoring shots.
Give (it) the treatment
of a batsman, to hit a poorly bowled ball well, often for a boundary.
Given man
A skilled guest player added to a weak team to produce a more even contest, often borrowed from the opposition. Given men were most frequent in the early history of cricket, such as Gentlemen v Players games, with the goal of making the result less predictable and to encourage betting on the outcome. The term and concept are rarely used today.
Glance
a shot that goes fine on the leg side. Typically played by flicking a ball which is heading for the batsman's hips or thigh.
Glove
1. (noun) Batting gloves, hand protection worn by a batsman as part of their kit. Padding is mostly on the outside, to defend against impacts by the ball while gripping the handle of the bat.
2. (noun) Wicket-keeper's gloves, webbed catching gloves worn by a wicket-keeper. The padding is on the inside, to absorb the impact of a caught ball. No other member of the fielding team is allowed to use gloves.
3. (verb) Touch the ball with a batting glove while the glove is in contact with the bat. In that circumstance, the rules consider the glove to be part of the bat, so the batsman can score runs or be caught if they glove the ball.
Glovemanship (also glovework)
the skills of wicketkeeping, used when either praising or criticising a performance
Golden duck
a dismissal for nought (zero), from the first ball faced in a batsman's innings. (cf. Duck, Diamond duck, Platinum duck)
Golden pair (also King pair)
a dismissal for nought (zero) runs off the first ball faced in each of a batsman's two innings of a two-innings match (see this list of Pairs in test and first-class cricket).
Good length
the ideal place for a stock delivery to pitch in its trajectory from the bowler to the batsman. It makes the batsman uncertain whether to play a front-foot or back-foot shot. A good length differs from bowler to bowler, based on the type, height and speed of the bowler. The "good length" is not necessarily the best length to bowl, as a bowler may wish to bowl short or full to exploit a batsmans weaknesses.
Googly (also wrongun or bosie)
a deceptive spinning delivery by a wrist spin bowler which spins in the opposite direction to the stock delivery. For a right-handed bowler and a right-handed batsman, a googly will turn from the off side to the leg side. Developed by Bosanquet around 1900.
Gouging
causing intentional damage to the pitch or ball.
Gozza
term in Australian cricket meaning a batsman who is out on the first ball they receive. See also Golden duck.
Grafting
batting defensively with strong emphasis on not getting out, often under difficult conditions.
Grass
to drop a catch, letting the ball fall onto the grass of the field.
Green top
a pitch with an unusually high amount of visible grass, that might be expected to assist the pace bowlers in particular.
Grip
the rubber casings used on the handle of the bat; alternatively, how the bowler holds the ball and how the batsman holds the bat.
Ground
1. (noun) A collective term for the pitch, field, pavilion and any associated amenities, such as seating for spectators. Large grounds with substantial spectator facilities may be referred to as stadiums.
2. (verb) Turn and touch the bat onto the ground surface behind the popping crease after the batsman has left the crease in the action of taking a shot or starting a run; to run the bat in while completing a run in order to be safe.
3. (noun) The batsman's safe area on the pitch. The batsman is in their ground when a part of the body (usually the foot) or the bat is touching the surface behind the popping crease. They have left their ground if they have advanced down the pitch in the action of taking a shot or starting a run. The batsman has made their ground if able to ground their bat or touch the surface behind the popping crease with a part of the body before a fielder can break the wicket for a run out.
Groundsman (or curator)
a person responsible for maintaining the cricket field and preparing the pitch.
Grubber
a delivery that barely bounces.
(Taking) guard
the batsman aligning their bat with a stump (or a gap between stumps) chosen behind them. Typically, the batsman marks the position of the bat on the pitch. The marking(s) give the batsman an idea as to where they are standing in relation to the stumps.
Gully
a close fielder near the slip fielders, at an angle to a line between the two sets of stumps of about 100 to 140 degrees.
H
Hack
a batsman of generally low skill with an excessively aggressive approach to batting, commonly with a preference towards lofted cross bat shots. A poor defensive stance and lack of defensive strokes are also features of a hack. Can also be used for one particular stroke.
Half century
an individual score of 50 runs or more, but less than 100 (century). Reasonably significant landmark for a batsman and more so for the lower order and the tail-enders.
Half-tracker
another term for a long hop. So called because the ball roughly bounces halfway down the pitch.
Half-volley
a delivery that bounces just short of the block hole. Usually easy to drive or glance away.
Handled the ball
occurs when a batsman touches the ball with their hands (when they are not gripping the bat) while the ball is still live. No longer a method of dismissal in its own right, but can result in the batsman being given out.
Harrow drive
see French cut
Hat-trick
a bowler taking a wicket off each of three consecutive deliveries that they bowl in a single match (whether in the same over or split up in two consecutive overs, or two overs in two different spells, or even spread across two innings of a test match or first-class cricket game).
Hat-trick ball
a delivery bowled after taking two wickets with the previous two deliveries. The captain will usually set a very attacking field for a hat-trick ball, to maximise the chances of the bowler taking a hat-trick. The bowler is said to be on a hat-trick before they deliver the hat-trick ball.
Hawk-Eye
see ball tracking
Heavy Roller
a very heavy metal cylinder used by the ground staff to improve a wicket for batting.
Helicopter shot
a batting shot played by flicking the ball through the air on the leg side, often to avoid close fielders and attempt to hit a boundary. Well-timed helicopter shots off fast deliveries can often result in six runs, sometimes towards cow corner. Helicopter shots are differentiated from slogs and hoicks in that they are more refined, requiring timing to come off.
Helmet
Protective headgear worn by batsmen facing pace bowling or fielders located very close to the batsman. Cricket helmets consist of a hard padded hemisphere protecting the brain case, a front brim, and a large metal grill over the face and jaw, with gaps smaller than the diameter of the ball.
High score
the most runs scored by a batsman in a single innings. See also best bowling.
Hip Clip
a trademark shot of Brian Lara involving a flick of the wrist to whip a ball, at hip height, at right angles past the fielder at square leg.
Hit the ball twice
A batsman is out hit the ball twice if having struck the ball once with their person or bat, they strike the ball a second time with their person or bat (but not a hand not holding the bat). A batsman is not out if the second strike was for the sole purpose of guarding their wicket. A batsman may not strike the ball a second time to prevent it from being caught (out obstructing the field).
Hit wicket
a batsman getting out by dislodging the bails of the wicket behind them either with their bat or body as they try to play the ball or set off for a run.
Hoick
an unrefined shot played to the leg side usually across the line of the ball.
Hold up an end
A batsman who is intentionally restricting their scoring and concentrating on defence while their batting partner scores runs at the other end; or, a bowler who is bowling defensively to restrict runs at their end while their bowling partner tries to take wickets at the other end.
Hole out
To be dismissed by being caught, usually referring to a catch from a lofted shot (or attempt thereof) in the outfield or forward from the wicket, rather than being caught behind by the wicketkeeper, in the slips cordon, or a leg trap fielder from edges or gloved balls.
Hoodoo
a bowler is said to have the hoodoo on a batsman when they have got them out many times in their career. (See rabbit II.)
Hook
a shot, similar to a pull, but played so that the ball is struck when it is above the batsman's shoulder.
Hoop
a particularly large amount of swing.
Hot Spot
a technology used in television coverage to evaluate snicks and bat-pad catches. The batsman is filmed with an infrared camera, and friction caused by the strike of the ball shows up as a white "hot spot" on the picture.
Howzat or How's that?
See appeal
Hundred
1. See century
2. 100-ball cricket, a limited overs format with modified rules, such as each team batting for up to 100 legal balls, to speed up the game for television broadcasters.
3. The Hundred, a domestic competition in England that uses the 100-ball format, introduced in 2021.
Hutch
the pavilion or dressing room. Non-specialist batsmen or tail enders can be known as rabbits, so when they are given out they return to the hutch.
I
In
of a batsman, presently batting.
In/out field
The in/out field is the group of fielders close to the batsman or closer to the boundary respectively.
An in/out field is a field setting, usually with 5 close fielders and 3 on the boundary, designed to force batsmen into errors by trying to deny the opportunity to score singles while saving easy boundaries.
Incoming batsman
the batsman next to come in in the listed batting order. The incoming batsman is the one who is out when a "timed out" occurs.
Inswing or in-swinger
a delivery that curves into the batsman in the air from off to leg.
In-cutter
a delivery that moves into the batsman after hitting the surface.
Infield
the region of the field that lies inside the 30-yard circle (27 m) or, in the days before defined circles, the area of the field close to the wicket bounded by an imaginary line through square leg, mid on, mid off and cover point.
Innings
one player's or one team's turn to bat (or bowl). Unlike in baseball, the cricket term "innings" is both singular and plural.
Inside edge
The edge of the bat facing the batsman's legs. A ball struck with the inside edge generally goes onto the stumps, into the legs, or towards forty-five.
Inside-out
when a batsman opens the chest and plays a ball, usually aggressively and often dancing down the pitch, toward the covers.
It's (just) not cricket
An idiomatic expression, objecting to actions that the speaker feels are unsporting, unfair, or against the spirit of the game
J
Jack
a number eleven batsman. From the jack playing card, which ranks immediately after the number ten in each suit.
Jaffa (also corker)
an exceptionally well bowled, practically unplayable delivery, usually but not always from a fast bowler.
Jayadevan's system
an unsuccessful proposal for a rain rule, as an alternative to the Duckworth-Lewis system. Has never been used in professional cricket.
Jockstrap (also jock strap)
underwear for male cricketers, designed to securely hold a cricket box in place when batting or wicket keeping.
Just not cricket
see it's (just) not cricket
K
Keeper
see wicket-keeper
Keep wicket
1. To field as the wicketkeeper.
2. "Keep wickets in hand", which is when a batting team tries to have relatively few of its batters get out. Usually done so that more run-scoring risks can be taken near the end of their innings in a limited-overs match, since they will have many batters left and only a few balls left.
King pair
see Golden pair.
Knock
a batsman's innings. A batsman who makes a high score in an innings can be said to have had a "good knock".
Knuckle ball
A type of delivery where the fast bowler holds the ball on the knuckles of their index and middle finger. A type of slower ball.
Kolpak
an overseas player who plays in English domestic cricket under the Kolpak ruling.
Kwik cricket
(or quick cricket) An informal version of cricket with sped-up rules, specifically designed to introduce children to the sport.
L
Lappa
The Indian version of the hoick. Comes from the English "lap", an old term for a stroke somewhere between a pull and a sweep.
Laws
The laws of cricket are a set of rules established by the Marylebone Cricket Club (MCC) and apply to cricket worldwide. Cricket is one of the few sports for which the governing principles are referred to as Laws rather than as Rules or Regulations. Note that regulations to supplement and/or vary the laws may be agreed for particular competitions.
LBW
see leg before wicket
Leading edge
The ball hitting the front edge of the bat as opposed to its face, when playing a straight-bat shot. Often results in an easy catch for the bowler or a skier for someone else.
Leave
The action of the batsman not attempting to play at the ball. They may do this by holding the bat above their body. However, there is a clause in the LBW rules making them more susceptible to getting out this way. A batsman leaving the ball may also not claim any leg byes.
Left arm
A bowler who bowls the ball with their left hand is, by convention, called a left-arm or left arm bowler (rather than left hand or left-handed). (Contrast "left hand batsman".)
Left-arm orthodox spin
the style of spin bowling produced by left-arm finger spin; the left-arm equivalent of off spin.
Left-arm unorthodox spin
the style of spin bowling produced by left-arm wrist spin; the left-arm equivalent of leg spin. Formerly called Chinaman bowling, after Ellis Achong, a West Indian of Chinese descent, but that term is now considered derogatory.
Left hand
A batsman who bats left-handed is said to be a left-hand bat. (Contrast "left arm bowler".)
Leg before wicket (LBW)
a way of dismissing the batsman. In brief, the batsman is out if, in the opinion of the umpire, the ball hits any part of the batsman's body (usually the leg) before hitting or missing the bat and would have gone on to hit the stumps.
Leg break
a spin bowling delivery which turns from the leg side to the off side of a right-handed batsman. The stock delivery of a leg spin bowler.
Leg bye
Extras taken after a delivery hits any part of the body of the batsman other than the bat or the gloved hand that holds the bat. If the batsman makes no attempt to play the ball with the bat or evade the ball that hits them, leg byes may not be scored.
Leg cutter
A break delivery bowled by a fast or medium-pace bowler with similar action to a spin bowler, but at a faster pace. The ball breaks from the leg side to the off side of the batsman.
Leg glance
A delicate shot played at a ball aimed slightly on the leg side, using the bat to flick the ball as it passes the batsman, deflecting towards the square leg or fine leg area.
Leg side
the half of the field to the rear of the batsman as they take strike (also known as the on side).
Leg slip
a fielding position equivalent to a slip, but on the leg side.
Leg spin
the style of spin bowling produced by right-handed wrist spin. So called because the stock delivery is a leg break. Common variations include the googly, top spinner, and flipper.
Leg theory
A tactic sometimes used by the fielding side. The bowler aims for a line on leg stump and more fielders than usual are placed on the leg side, particularly short catching positions. This prevents the batsman from playing shots on the off side. The goal is to slow the scoring and frustrate the batsman into an opportunity for a catch. See also fast leg theory and Bodyline. The opposite of off theory.
Leggie
1. a leg spin bowler
2. a leg break delivery
Length
the place along the pitch where a delivery bounces (see short pitched, good length, half-volley, full toss).
Life
a batsman being reprieved because of a mistake by the fielding team, through dropping a catch, missing a run-out chance or the wicket-keeper missing a stumping.
Light
see bad light
Limited overs match
a one-innings match where each side may only face a set number of overs. Another name for one-day cricket.
Line (also see Line and length)
the sideways deviation of the point where a delivery bounces from the wicket-to-wicket line (towards the leg side or the off side).
Line and length bowling
bowling so that a delivery pitches on a good length and just outside off stump. This forces the batsman to play a shot as the ball may hit the stumps.
List A cricket
the limited-overs equivalent of first-class cricket.
Lob bowling
An obsolete form of underarm bowling, in which the ball was lobbed high in the air, falling back to the pitch on a steep trajectory. A common variation aimed to drop the ball directly onto the stumps, without bouncing. Lob bowling was popular in the 19th century but is now illegal.
Loft(ed shot)
a type of shot where the ball is hit in the air
Lolly
a ball that a batsman is easily able to hit, or a ball that a fielder is easily able to catch
Long hop
a delivery that is much too short to be a good length delivery, but without the sharp lift of a bouncer. Usually considered a bad delivery to bowl as the batsman has a lot of time to see the ball and play an attacking shot.
Long off
a fielding position close to the boundary on the off side, located in front of, and relatively close to, the line of the wicket.
Long on
a fielding position close to the boundary on the leg side, located in front of, and relatively close to, the line of the wicket.
Long stop
(or very fine leg) a fielding position on the boundary directly behind the wicket-keeper, to recover any byes or wides that evade the keeper. Occasionally seen at village cricket level but obsolete in professional cricket.
Look for two
running a single with urgency, the batsmen conveying the sense (to each other, to the crowd, to commentators) that they will attempt a second run, though no commitment is expected until after the turn. See push, two.
Loop
the curved path of the ball bowled by a spinner.
Loosener
a poor delivery bowled at the start of a bowlers spell.
Lost ball
A ball which cannot be retrieved, due to being lost or otherwise out of reach (e.g. hit into a river or over a fence). The umpire calls dead ball to stop play, and the batsman is credited with any runs completed (generally a boundary) or in progress at the time dead ball is called. A replacement ball is selected, preferably a used one in similar condition to the lost ball.
Lower order (colloquially the tail)
the batsmen who bat between roughly number 8 and 11 in the batting order and who may have some skill at batting, but are generally either specialist bowlers or wicket-keepers with limited batting ability. Such batsmen are known as lower order batsmen or tail-enders.
Lunch
the first of the two intervals taken during a full days play, which usually occurs at lunchtime at about 12:30 pm (local time).
M
Maiden over
an over in which no runs are scored off the bat, and no wides or no-balls are bowled. Considered a good performance for a bowler, maiden overs are tracked as part of a bowling analysis.
Makers name
the full face of the bat, where the manufacturer's logo is normally located. Used particularly when referring to a batsman's technique when playing a straight drive, so that the opposing bowler has full view of the batsman's bat label (e.g., "Sachin Tendulkar played a beautiful on-drive for four, giving it plenty of maker's name...").
Man of the match
an award which may be given to the highest scoring batsman, leading wicket taker or best overall performer in a match. Man of the series is the same over a whole series.
Manhattan
also called the Skyline. It is a bar graph of the runs scored off each over in a one-day game, with dots indicating the overs in which wickets fell. The name is alternatively applied to a bar graph showing the number of runs scored in each innings in a batsman's career. So called because the bars supposedly resemble the skyscrapers that dominate the skyline of Manhattan.
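As an illustration of what the chart plots, here is a hypothetical sketch using matplotlib; the innings data are invented, and broadcasters' actual charts are produced by their own graphics systems.

```python
# Illustrative sketch of a Manhattan chart: runs per over as bars, with
# dots marking the overs in which wickets fell. Data are invented.
import matplotlib.pyplot as plt

runs_per_over = [4, 7, 2, 10, 1, 6, 12, 3, 8, 5]  # hypothetical innings
wicket_overs = [3, 8]                              # overs with a wicket (1-based)

overs = range(1, len(runs_per_over) + 1)
plt.bar(overs, runs_per_over)
# Plot a dot just above the bar for each over in which a wicket fell.
for over in wicket_overs:
    plt.scatter(over, runs_per_over[over - 1] + 0.5, color="red")
plt.xlabel("Over")
plt.ylabel("Runs")
plt.title("Manhattan chart (illustrative)")
plt.show()
```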
Mankad
the running-out by the bowler of a non-striking batsman who leaves their crease before the bowler has released the ball. It is named after Vinoo Mankad, an Indian bowler, who controversially used this method in a Test match. This is relatively common in indoor cricket and is noted separately from run outs, though almost unheard of in first-class cricket.
Manufacturer
typically, the firm responsible for producing a batsmans cricket bat. Some popular manufacturers include Kookaburra Sport, Gray-Nicolls, and Sanspareils Greenlands.
Marillier shot
see paddle scoop
Marylebone Cricket Club (MCC)
the cricket club that owns Lord's cricket ground in London NW8. It is the custodian of the laws of cricket.
Match fixing
Illegally arranging the outcome of a match in advance, such as playing deliberately poorly to ensure a loss. Most commonly fixing is due to bribes or collusion with gamblers who bet on the result, see betting controversies in cricket. Fixing outcomes other than the final result is known as spot fixing. This is not only against the rules of cricket, but often against the law of the country where it is played; some cricketers have gone to jail for their involvement in match fixing scandals.
Match referee
an official whose role is to ensure that the spirit of the game is upheld. They have the power to fine players and/or teams for unethical play.
Maximum
see six.
Meat of the bat
the thickest part of the bat, from which the most energy is imparted to the ball.
Mecca of cricket
this usually refers to Lord's cricket ground, also known as the Home of Cricket.
Medium-pace
a bowler who bowls slower than a pace bowler, but faster than a spin bowler. Speed is important to the medium-pacer, but they try to defeat the batsman with movement of the ball, rather than the pace at which it is bowled. Medium-pacers either bowl cutters or rely on the ball to swing in the air. They usually bowl at about 55–70 mph (90–110 km/h).
Michelle
See five-wicket haul.
Middle of the bat
the area of the face of the bat that imparts maximum power to a shot if that part of the bat hits the ball. Also known as the "meat" of the bat. Effectively the same as the sweet spot; however, a shot that has been "middled" usually means one that is hit with great power as well as timing.
Middle order
the batsmen who bat at between roughly number 5 and 7 in the batting order. Often includes all-rounders and the wicket-keeper.
Mid-off
a fielding position intended to save a run on the off side, located in front of, and relatively close to, the line of the wicket.
Mid-on
a fielding position intended to save a run on the leg side, located in front of, and relatively close to, the line of the wicket.
Mid-wicket
a fielding position intended to save a run on the leg side, located between mid-on and square leg.
Military medium
medium-pace bowling that lacks the speed to trouble the batsman. Often has derogatory overtones, suggesting the bowling is boring, innocuous, or lacking in variety, but can also be a term of praise, suggesting a military regularity and lack of unintended variation. A good military medium bowler will pitch the ball on the same perfect line and length for six balls an over, making it very hard for the batsman to score runs.
Milking (or milk the bowling)
Scoring a steady stream of easy runs at a moderate rate with little risk of getting out, by taking advantage of poor bowling or gaps in the field. Particularly common against spin bowling in limited overs matches, due to the circle restrictions.
Mine
see call
Mis-field
a fielder failing to collect the ball cleanly, often fumbling a pick-up or dropping a catch.
Mullygrubber
(Australian) see grubber
N
Negative bowling
a persistent line of bowling down the leg-side of a batsman to stymie the batsman from scoring (particularly in Test matches).
Nelson
A score of 111, either for a team or for an individual batsman. Superstition holds that this score is unlucky and more likely to lead to a dismissal, due to its resemblance to the three stumps of the wicket (as if the bails had been removed). Tradition holds that the bad luck may be averted if spectators stand on one leg. Scores of 222 and 333 are known as double Nelson and triple Nelson respectively; these are sometimes also considered unlucky.
Nervous nineties
The period of a batsman's innings when their score is between 90 and 99. Many players bat nervously in this period because they are worried about getting out before they reach a century. The fielding captain often sets an attacking field to increase the psychological pressure. Some players are particularly prone to getting out in the nervous nineties.
Nets
a pitch surrounded on three sides by netting, used for practice by batsmen and bowlers.
Net run rate (NRR)
In a match, the average run rate scored by a team minus the average run rate scored against them. In a series, a team's NRR is (total runs scored) / (total overs faced) − (total runs conceded) / (total overs bowled).
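The series formula above translates directly into code; this is a minimal sketch with hypothetical figures, and the function name is mine.

```python
# Sketch of the series net run rate formula given in the entry above.
# (In practice a side bowled out is deemed to have faced its full
# quota of overs when its run rate is calculated.)

def net_run_rate(runs_scored: int, overs_faced: float,
                 runs_conceded: int, overs_bowled: float) -> float:
    """(runs scored / overs faced) minus (runs conceded / overs bowled)."""
    return runs_scored / overs_faced - runs_conceded / overs_bowled

# A team scoring 500 runs in 100 overs while conceding 460 from the
# 100 overs it bowled has an NRR of +0.40.
print(net_run_rate(500, 100, 460, 100))  # -> 0.4
```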
New ball
In professional cricket, a new ball is used at the beginning of each innings. In timed matches, the fielding captain has the option of taking another new ball after 80 overs have passed. A new ball is generally harder and shinier than an old ball, moves faster through the air (favouring pace bowling) and may swing after a few overs of polishing. In contrast, an old ball will be softer, rougher, and will typically have been polished on one side but not the other, favouring reverse swing and spin bowling.
Nibble
A small amount of movement by the ball off the seam.
Nick
Another term for edge or snick.
Nightwatchman
(in a first-class game) a lower order batsman sent in when the light is dimming to play out the remaining overs of the day in order to protect more valuable batsmen for the next days play.
No
batsman's call not to run. See Push, Yes, Two, Wait.
No-ball
an illegal delivery; the batting side is awarded one extra, the bowler must deliver another ball in the over, and the batsman cannot be dismissed by the bowler on a no-ball. Most usually a front-foot no-ball, in which the bowler oversteps the popping crease; other reasons include bowling a full toss above waist height (see beamer), throwing, having more than two fielders (excluding the wicketkeeper) behind square on the leg side, or breaking the return crease in the delivery stride.
No man's land
an area of the field where a fielder cannot save a single, nor stop a boundary. Occasionally used to catch a batsman who mistimes a shot.
No result
the outcome of a limited overs match in which each team does not face the minimum number of overs required for a result to be recorded, usually due to a rain delay. This is generally equivalent to a draw, but differs in the recording of some statistics.
Non-striker
the batsman standing at the bowling end.
Nothing shot
An overly-tentative shot by the batsman: neither a committed attempt to hit the ball, nor a deliberate leave. This often results in an edge, beating the bat, or playing on.
Not out
1. a batsman who is in and has not yet been dismissed, particularly when play has ceased.
2. the call of the umpire when turning down an appeal for a wicket.
Nurdle
To score runs, usually in singles, by using low-risk shots to gently nudge the ball into vacant areas of the field.
O
Obstructing the field
An extremely rare method of dismissal. The batsman is given out if they wilfully interfere with the fielding, such as blocking a run out or preventing a fielder from taking a catch. Since 2017 obstructing the field includes the offence of handled the ball, which was previously considered a separate method of dismissal.
Occupying the crease
The act of a batsman staying in for a long time, without trying to score many runs. This tires the bowler and may frustrate the fielding side in a timed match, but requires skilled defensive batting technique. It is particularly prized among opening batsmen or when batting for a draw.
ODI
see One Day International
Odds match
a match in which one side has more players than the other. Generally, the extra players were allowed to field as well as bat and so the bowling side had more than 11 fielders.
Off break
an off spin delivery which, for a right-arm bowler and a right-handed batsman, will turn from the off side to the leg side (usually into the batsman).
Off cutter
A cutter which, for a right-handed batsman, turns from the off side to the leg side. The pace bowling equivalent of an off break.
Off side
the half of the field in front of the batsman's body as they take strike. For a right-handed batsman this is the right half of the field, looking up the wicket towards the bowler, and the left half for a left-handed batsman. The opposite of leg side.
Off spin
the style of spin bowling produced by right-arm finger spin. So called because the stock delivery is an off break. Common variations include the arm ball and doosra.
Off the mark
when the first run is scored by a batsman, it is said that the batsman is off the mark. If a batsman gets out without scoring, it is said that the batsman failed to get off the mark.
Off theory
A tactic sometimes used by the fielding side. The bowler aims for a line wide of off stump and most fielders are placed on the off side. This prevents the batsman from playing shots on the leg side, whilst most of the off side is covered by fielders. The goal is to slow the scoring and frustrate the batsmen into an opportunity for a catch. The opposite of leg theory.
Offer the light
Under historical rules, offering the light was the act of the umpires giving the batsmen the choice of whether or not to leave the field during times of bad light. Offering the light disappeared from the game in 2010; the decision of whether or not to leave the field for bad light is now made solely by the umpires.
Olympic
Five consecutive ducks. The term alludes to the five interlocking Olympic rings. See also Audi.
On side
see leg side. The opposite of off side.
On a length
a delivery bowled on a good length.
On strike
the batsman currently facing the bowling attack is said to be on strike.
On the [shot name]
used to describe the type of shot that fielders are placed in order to intercept. For example, "three men on the hook" means three fielders who are placed behind square leg to catch the ball if the hook shot is used. "On the drive" is a similar term used for any type of drive, so generally within a straight V in front of the batsman.
On the up
a batsman playing a shot, usually a drive, to a ball that is quite short and has already risen to knee height or more as the shot is played.
One-day cricket
an abbreviated form of the game, with just one innings per team, usually with a limited number of overs and played over one day.
One Day International (ODI)
a match between two national sides limited to 50 overs per innings, played over at most one day.
One down
a batsman who bats at No. 3, i.e. when the team is "one wicket down"; a crucial position in the team's batting innings.
One short
when a batsman, turning back for an additional run, fails to ground their bat or body beyond the popping crease; the run is not scored and the umpire signals one short.
Opener
1. an opening batsman
2. an opening bowler
Opening batsman
One of the two batsmen who are in at the start of the innings. They must face the opening bowlers and new ball, so require good defensive technique to avoid getting out, especially in a timed match. In a limited overs match an opening batsman must also score quickly during the power play.
Opening bowler
One of the two bowlers who begin the innings using the new ball. They are usually the fastest or most aggressive bowlers in the side.
Opposite number
A player on the other team who plays the same role. Often used for captains and wicket keepers.
Orthodox
1. shots played in the accepted "textbook" manner, and batsmen who play in this manner.
2. see left-arm orthodox spin.
Out
1. the state of a batsman who has been dismissed.
2. the word sometimes spoken while raising the index finger by the umpire when answering an appeal for a wicket in the affirmative.
Out dipper
a dipper that curves away from the batsman before pitching.
Outswing
a delivery that curves away from the batsman.
Outfield
the part of the field lying outside the 30-yard (27 m) circle measured from the centre of the pitch or, less formally, the part of the pitch furthest from the wickets.
Over
the delivery of six consecutive legal balls by one bowler.
Over rate
the average number of overs bowled per hour.
Over the wicket
a right-arm bowler passing to the left of the non-striker's stumps in their run-up, and vice versa for a left-arm bowler. Compare with around the wicket.
Overarm
the action of bowling with the arm swinging from behind the body over the head, releasing the ball on the down swing without bending the elbow. This type of bowling is the only type normally allowed in all official cricket matches. Compare with underarm.
Overpitched delivery
a delivery that is full pitched but not a yorker, bouncing just in front of the batsman. Considered a poor delivery, as it is easy for the batsman to get the middle of the bat to the ball. An overpitched ball is often a half-volley.
Overthrows
the scoring of extra runs due to an errant throw from a fielder. Also known as buzzers. Occasionally used erroneously for any runs scored after a fielder misfields the ball. Also refers to the throw itself as the ball goes far past or "over" the intended target.
P
Pace bowling (also fast bowling)
a style of bowling in which the ball is delivered at high speeds, typically over 90 mph (145 km/h). A pace bowler (or paceman) often will also use swing.
Pads
protective equipment for batsmen and wicket-keepers, covering the legs.
Pad away or pad-play
to intentionally use the pads to deflect the ball away from the wicket. Using the pad instead of the bat removes the danger of being caught by close fielders, but would only be used when there is no risk of being dismissed LBW (for example, if the ball pitched on the leg side).
Paddle sweep
A very fine sweep, almost just a tickle of the delivery pitched on or outside leg stump.
Paddle scoop
(also known as a scoop shot, ramp shot, Dilscoop or Marillier shot)
A shot in which the batsman drops on one knee and scoops the ball over their shoulder or head, using the bat as ramp to redirect the momentum of the ball. Considered unorthodox and risky, but can result in a boundary behind the wicketkeeper or in the fine leg region.
Pair
a "pair of spectacles" (0–0) or a "pair of ducks". A batsmans score of nought (zero) runs in both innings of a two-innings match (see this list of Pairs in test and first-class cricket).
Par score
during the second innings of a limited overs match, the par score is the target the Duckworth-Lewis method calculates if the match were to be halted now (e.g. due to rain). Updated after each ball, the par score can be used to judge whether the chasing team is ahead or behind the run rate and wickets they will probably require to win, which is of interest even in uninterrupted matches.
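The full Duckworth-Lewis calculation depends on published resource tables, but once the par score is known, judging the chase is simple arithmetic. A minimal sketch in Python (the function name and the plain status strings are illustrative assumptions, not part of any official method):

```python
def chase_status(current_runs: int, par_score: int) -> str:
    """Compare the chasing team's current score with the par score.

    If play were halted now, the chasing side is winning when ahead
    of par and losing when behind; a level score is a tie under most
    (but not all) competition rules.
    """
    if current_runs > par_score:
        return "ahead of par"
    if current_runs < par_score:
        return "behind par"
    return "level with par"

# Example: the chasing side is 147/4 and the par score is 151.
print(chase_status(147, 151))  # behind par
```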
Partnership
the number of runs scored between a pair of batsmen before one of them gets dismissed. This also includes the deliveries faced and time taken.
Part-time bowler (or part-timer)
a specialist batsman (or even a wicketkeeper) who is not known for bowling, but has adequate skills to occasionally bowl a few overs. Captains use part-timers to provide some variation in their attack, and to subject batsmen to bowlers they would not have prepared to face.
Pavilion
a building or grandstand that contains the players' dressing rooms, located adjacent to the field. At larger grounds, the pavilion usually incorporates seating (and other facilities) for members of the home club to watch the game.
Peach
a delivery bowled by a fast bowler considered unplayable; usually a really good delivery that a batsman gets out to, or one so good that the batsman cannot even edge it.
Pea roller
A now illegal delivery where the ball is rolled along the ground rather than bowled over-arm.
Pegs
(slang) the stumps.
Perfume ball
a bouncer on or just outside off-stump that passes within inches of the batsman's face. So called because the ball is close enough to the batsman's face that they can supposedly smell it.
Pfeiffer
see five-wicket haul
Pick
Of a batsman, to correctly identify which variation a bowler (often a spin bowler) has delivered.
Pick of the bowlers
the bowler who performed the best, whether over the course of an innings or a match.
Picket fences
an over in which one run is scored off each delivery. On the scorecard the over reads 111111, resembling picket fences; hence the name.
Pie Chucker (or Pie Thrower)
A poor bowler, usually of slow to medium pace whose deliveries are flighted so much as to appear similar to a pie in the air. Considered easy to score off by batsmen – see Buffet Bowling.
Pinch hitter/Slogger
a lower order batsman promoted up the batting order to increase the run rate. The term, if not the precise sense, is borrowed from baseball.
Pitch
1. the rectangular surface in the centre of the field where most of the action takes place, usually made of earth or clay. It is 22 yards in length. Also known as the "deck", as used in the phrase "hit the deck", which is when a bowler makes the ball bounce more off the pitch.
2. of the ball, to bounce before reaching the batsman after delivery.
3. the spot where the ball pitches (sense 2).
Pitch (It) Up
to bowl a delivery on a fuller length.
Pitch map
a diagram showing where a number of balls, usually from a particular bowler, have pitched. Compare beehive.
Placement
hitting the ball so that it passes between the fielders placed on the field (bisecting or trisecting them). The ball usually ends up being a four.
Platinum duck
a player dismissed without even facing a ball – most likely by being run out as the non-striker. Also sometimes referred to as a Diamond Duck.
Play and miss
when a batsman aims to hit the ball with the bat but does not make contact; a swing and a miss
Playing on
for the batsman to hit the ball with their bat but only succeed in diverting it onto the stumps. The batsman is thus out bowled. Also known as "dragging on" or "chopping on"
Playing time
the set of rules specifying when the match begins, which intervals and/or drinks breaks occur, how long play can continue etc. These vary widely depending on the type of match (a Twenty20 lasts less than 3 hours, whilst a Test match takes up to five days) and may be adjusted by prior agreement to account for local climate, the specific competition etc.
Plumb
of a dismissal by LBW: indisputable, obvious. Of a wicket, giving true bounce.
Point
A fielding position square of the batsman on the off side.
Point of release
the position of the bowler at the moment when the ball is released.
Pongo
(used primarily by UK county players) a very high volume of run-making, or batting assault.
Popper
a ball that rises sharply from the pitch when bowled (pops up).
Popping crease
One of two lines in the field defined as being four feet in front of and parallel to that end's bowling crease where the wickets are positioned. A batsman who does not have either the bat or some part of their body touching the ground behind the popping crease is considered out of their ground and is in danger of being dismissed run out or stumped.
Powerplay
a block of overs that in One Day Internationals offer a temporary advantage to the batting side.
Pro20
South African form of Twenty20
Pro40
a professional limited overs competition that was played in England from 1969–2009, with 40 overs per side
Projapoti
A delivery by a pace bowler which minimises rotation of the ball, causing it to move erratically in flight. Based on the same aerodynamic principles as baseball's knuckleball.
Protected area
An area of the pitch defined as two feet wide down the middle of the pitch, beginning five feet from each popping crease. A bowler who trespasses on this area in their follow-through is given a warning; three such warnings bar them from bowling for the rest of the innings.
Pull
a shot played to the leg side to a short-pitched delivery, between mid-wicket and backward square-leg.
Pursuit
Synonym of runchase.
Push
1. a batsman's call for a run, urging their partner to look for two. See No, Yes, Two, Wait. 2. can also refer to a bowler pushing the ball through, meaning they bowl it quicker (usually said of a spinner). See darting the ball in.
Put down
1. Put down the wicket 2. To put down (drop) a catch; often said with the batsman as the object i.e. the fielder has put "him" down.
Q
Quarter seam
a flush join between pieces of leather on the surface of the ball, running at right angles to the main stitched seam
Quick
Traditionally, a quick bowler was one who completed their over in a short space of time. In more recent years, it has been used as a synonym for a fast or pace bowler. (Paradoxically, a quick bowler in the traditional sense was often also a slow bowler, that is, a bowler who delivered slow deliveries. A fast or pace bowler was rarely also a quick bowler in the traditional sense, because they took a longer time to complete an over.)
Quick cricket
see kwik cricket
Quick single
When the two batsmen run very quickly between the wickets to score a run
Quota
In a limited overs match, the maximum number of overs that may be bowled by each bowler. Equal to the total number of overs in the innings divided by five (rounded up if necessary). For a One Day International the quota is 10 overs, while for a T20 match the quota is 4 overs.
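Since the definition above is pure arithmetic, it can be written down directly. A minimal sketch in Python (the function name is illustrative):

```python
import math

def bowler_quota(innings_overs: int) -> int:
    """Maximum overs per bowler: the innings length divided by five,
    rounded up when it is not an exact multiple of five."""
    return math.ceil(innings_overs / 5)

print(bowler_quota(50))  # 10 -- One Day International
print(bowler_quota(20))  # 4  -- Twenty20
```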
Quotient (or runs per wicket ratio)
A quantity used as a league table tie-breaker in some first class tournaments. Defined as the number of runs scored per wicket lost when batting, divided by the number of runs conceded per wicket taken when bowling. The equivalent in limited overs tournaments is the net run rate.
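The quotient is likewise a direct ratio of ratios. A minimal sketch in Python (function and argument names are illustrative):

```python
def quotient(runs_scored, wickets_lost, runs_conceded, wickets_taken):
    """Runs-per-wicket ratio: runs scored per wicket lost, divided by
    runs conceded per wicket taken. Values above 1 indicate a side
    that out-performed its opponents per wicket."""
    return (runs_scored / wickets_lost) / (runs_conceded / wickets_taken)

# A side scoring 30 runs per wicket while conceding 25 per wicket:
print(quotient(600, 20, 500, 20))  # 1.2
```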
R
Ramp shot
see paddle scoop
Rabbit
1. A particularly poor batsman, who is, invariably, a specialist bowler. While most lower-order batsmen would be expected to occasionally score some useful runs, a rabbit is expected to be dismissed cheaply almost every time. Another term, ferret, refers to a batsman even worse than a rabbit. The term comes from hunting with ferrets where the hunter "sends in a ferret after the rabbits".
2. A higher order batsman who is out frequently to the same bowler is referred to as that bowler's rabbit or bunny.
Rain delay
A halt in game proceedings due to rain, but not yet a wash out.
Rain rule
Any of various methods of determining which team wins a rain-shortened one-day match. The current preferred method is the Duckworth–Lewis method.
Red cherry
A nickname for the red cricket ball. See cherry.
Referral
(or review) An invocation of the Umpire Decision Review System, referring the decision to the third umpire for review.
Release or point of release
The moment in a bowling action when the bowler lets go of the ball.
Required run rate
(or asking rate) The run rate needed by the batting team to win, in the second innings of a limited overs match. Calculated as the number of runs required for victory (not a tie), divided by the number of overs remaining. Often shown on the scoreboard and updated automatically after each delivery. A high required run rate necessitates more aggressive batting.
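Because an over is six legal balls, the calculation is easiest done in balls. A minimal sketch in Python (the function name is illustrative):

```python
def required_run_rate(runs_required: int, balls_remaining: int) -> float:
    """Runs still needed for victory divided by overs remaining.

    An over is six legal balls, so the balls remaining are converted
    to a decimal number of overs before dividing.
    """
    return runs_required / (balls_remaining / 6)

# Example: 48 runs needed from the final 30 balls (5 overs).
print(required_run_rate(48, 30))  # 9.6
```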
Reserve day
A vacant day in a touring schedule which can be used to replay or reconvene a match which is washed out. Mostly seen in the latter stages of major limited-overs tournaments.
Rest day
A non-playing day in the middle of a multiple day game. These were once common, but are seldom seen in the modern era.
Result
The final outcome of a match. Possible results are a win/loss, a draw, or a tie. Alternatively, rain delays may lead to an outcome of no result, or a wash out may lead to the match being abandoned before it begins.
Retire
for a batsman to voluntarily leave the field during their innings, usually because of injury. A player who retires through injury/illness ("retired hurt/ill") may return in the same innings at the fall of a wicket, and continue where they left off. A player who is uninjured ("retired out") may return only with the opposing captain's consent.
Reverse sweep
a right-handed batsman sweeping the ball like a left-handed batsman and vice versa.
Reverse swing
the art of swinging the ball contrary to how a conventionally swung ball moves in the air; i.e. movement away from the rough side. Many theories as to how this may occur. Usually happens with an older ball than conventional swing, but not always, atmospheric conditions and bowler skill also being important factors. It has been espoused that once the rough side becomes extremely rough a similar effect to that of a dimpled golf ball may cause it to move more quickly through the air than the shiny side of the ball. Invented by Pakistani fast bowler Sarfaraz Nawaz and later perfected by the likes of Imran Khan, Wasim Akram and Waqar Younis.
Review
see referral
Rib tickler
A ball bowled short of a length that bounces up higher than expected and strikes the batsman in the midriff (usually in the side, on the ribs).
Right arm
a bowler who bowls the ball with their right hand is, by convention, called a right-arm or right arm bowler (rather than right hand or right-handed). (Contrast "right hand batsman".)
Right hand
a batsman who bats right-handed is a right hand bat. (Contrast "right arm bowler".)
Ring field
A field which is set primarily to save singles, consisting of fieldsmen in all or most of the primary positions forward of the wicket, on or about the fielding circle (or where it would be).
Road
A very hard and flat pitch, good for batting on. Synonyms such as street, highway, etc. may sometimes be used in the same context.
Rogers
The second XI of a club or county. From the Warwickshire and New Zealand player Roger Twose.
Roller
A cylindrical implement used to flatten the pitch before play or between sessions. Usually there are two rollers available, a heavy roller and a light roller, with the choice of which to use being given to the captain of the batting side.
Rotate the strike
to look to make singles wherever possible, in order to ensure that both batsmen are continually facing deliveries and making runs. The opposite of farming the strike.
Rough
a worn-down section of the pitch, often due to bowlers' footmarks, from which spinners are able to obtain more turn.
Round the wicket
see around the wicket
Roundarm bowling
the type of bowling action in which the bowler's outstretched arm is perpendicular to their body when they release the ball. Roundarm bowling is legal in cricket.
Royal Duck
a dismissal for nought (zero), from the first ball faced in the game.
Run
The basic unit of scoring: the team with the most runs wins the match (with rare exceptions if a rain rule is required). The two batsmen can score a run by both running from one end of the pitch to the other, grounding their bat or foot beyond the popping crease, before the fielding team can recover the ball and return it for a run out. This usually requires the striker to play a shot, directing the ball away from the fielders, in which case the run is credited to the striker. Multiple runs can be scored from a single delivery, or by hitting a boundary (either a four or a six). The batting team (but not an individual batsman) is credited with one or more runs if there are extras or penalty runs.
Run chase
The act/task of the team batting second (in a limited-overs match) or batting fourth (in an unlimited overs match), trying to win a match by batting and surpassing the runs accumulated by the opponent.
Run out
dismissal by a member of the fielding side breaking the wicket while the batsman is outside their crease making a run.
Run rate
the average number of runs scored per over.
Run up
see approach.
Runner
a player from the batting side who assists an injured batsman by running between the wickets. The runner wears and carries the same equipment, and can be run out. Since 2011, runners have not been permitted in international cricket, but can be used at lower grades.
Runs per wicket ratio
See quotient.
Runscorer or run scorer
a batsman who is prolific at scoring runs, particularly if they hold a record for doing so.
S
Safe
The batsmen are safe when in their ground, or when, starting from a position between the popping creases, they have made their ground before a fielder can break the wicket.
Sandshoe crusher
See toe crusher
Sawn off
A batsman who has been wrongly or unluckily given out by an umpire.
Scoop shot
see paddle scoop
Scoreboard
A large mechanical or electronic display that indicates the current score, controlled by the scorer. A basic scoreboard lists the number of runs scored, wickets taken, overs completed, and (in the second innings of a limited overs match) the target required. More sophisticated scoreboards provide far more information, such as individual scores for each batsman, the bowling analysis, required run rate, par score etc.
Scorer
A person officially responsible for recording the scoring and detailed statistics of the game, usually ball-by-ball.
Scramble seam
a variation employed in seam bowling where the bowler makes the plane of the ball's seam tumble, rather than keeping it stable at a narrow angle to the direction of the delivery (seam up) or nearly perpendicular to it (cross-seam). The hope is that the ball will bounce unpredictably, depending on whether the ball's seam contacts the ground.
Seam
1. (noun) the raised stitching running around the circumference of the ball.
2. (verb) for a ball to deviate off the pitch because it has bounced on its seam.
Seam bowling
a bowling style which uses the uneven conditions of the ball – specifically the raised seam – to make it deviate upon bouncing off the pitch. Contrast with swing bowling.
Seamer
a seam bowler
Season
the period of each year when cricket is played. Varies substantially between countries.
Selector
a person who is delegated with the task of choosing players for a cricket team. Typically, the term is used in the context of player selection for national, provincial and other representative teams at the professional levels of the game, where a "panel of selectors" acts under the authority of the relevant national or provincial cricket administrative body.
Sent in
a team which bats first after losing the toss is said to have been "sent in" by the opposing captain.
Series
A set of matches played a few days apart between the same two teams in the same format, but usually at different locations. An international tour often includes a Test series, an ODI series and a T20 series, each of between two and five matches.
Session
A period of play, from start to lunch, lunch to tea and tea until stumps.
Shelled a Dolly
dropped a really easy catch (Dolly).
Shepherd the strike (also farm the strike)
of a batsman, to contrive to receive the majority of the balls bowled, often to protect a weaker batting partner. Typically involves declining singles early in the over and trying to take singles late in the over.
Shooter
a delivery that skids after pitching (i.e. doesn't bounce as high as would be expected), usually at a quicker pace, leaving the batsman unable to hit the ball cleanly.
Short
1. (of a fielding position) close to the batsman, but not so close as silly; the opposite of deep.
2. (of a delivery) see short-pitched.
Short-pitched
a delivery that bounces relatively close to the bowler. The intent is to make the ball bounce well above waist height (a bouncer). A slow or low-bouncing short-pitched ball is known as a long hop.
Short of a length
describes a short-pitched delivery that is not so short as a bouncer.
Short run
a would-be run that does not count because, when turning for an additional run, one of the batsmen failed to ground some part of their body or bat behind the popping crease.
Shot
the act of the batsman hitting the ball with their bat.
Short Stop
When the wicket-keeper stands up at the stumps, the fielder placed directly behind the wicket-keeper is called a Short Stop. When that fielder stands outside the 30-yard circle, they are called a Long Stop.
Shoulder Arms
Rather than risk playing the ball, the batsman lifts the bat high above their shoulder to keep their bat and hand out of harms way.
Side on
1. A side on bowler has back foot, chest and hips aligned towards the batsman at the instant of back foot contact.
2. A batsman is side on if their hips and shoulders are facing at ninety degrees to the bowler.
Sight screen
a large board placed behind the bowler, beyond the boundary, used to provide contrast to the ball, thereby aiding the striker in seeing the ball when it is delivered. Typically, coloured white to contrast a red ball, or black to contrast a white ball.
Silly
a modifier to the names of some fielding positions to denote that they are unusually close to the batsman, most often silly mid-off, silly mid-on, silly midwicket and silly point.
Single
a run scored by the batsmen physically running once only between the wickets.
Single wicket
A one-vs-one version of cricket, in which the two competitors bat and bowl against each other, while neutral participants field for both. Each inning consists of a single wicket and a limited number of overs (usually two or three). Currently only played informally and rarely seen, the format was once highly popular and played professionally, particularly from 1750-1850.
Sitter
a very easy catch. A fielder who misses such a catch is said to have dropped a sitter.
Six (or Sixer)
a shot which passes over or touches the boundary without having bounced or rolled, so called because it scores six runs to the batting side.
Skiddy
A pace bowler who typically obtains a low-bounce on their delivery is described as skiddy. The opposite is slingy
Skier
(pronounced SKY-ə) A mistimed shot hit almost straight up in the air, to the sky. Usually results in the batsman being caught out. Occasionally however the fielder positions themself perfectly to take the catch but misses it or drops it. Such an error is considered very embarrassing for the fielder.
Skipper
An informal term for the captain, from the nautical skipper. Sometimes shortened to skip, particularly as a nickname.
Skyline
alternative name for Manhattan.
Slash
a cut, but played aggressively or possibly recklessly – a cut (q.v.) being a shot played square on the off side to a short-pitched delivery wide of off stump. So called because the batsman makes a "cutting" motion as they play the shot.
Sledging
verbal exchanges between players on opposing sides. This ranges from banter between friends to verbal abuse or a psychological tactic to gain advantage of the opposition by undermining their confidence or breaking their concentration. Considered in some cricketing countries to be against the spirit of the game, but acceptable in others.
Slice
a kind of cut shot played with the bat making an obtuse angle with the batsman.
Slider
a wrist spinner's delivery where backspin is put on the ball.
Slingy
A pace bowler who typically obtains a high-bounce on their delivery, perhaps because of their unusual height. The opposite is skiddy
Slip
a close fielder behind the batsman, next to the wicket-keeper on the off-side. Also ("in the slips", "at first slip") the positions occupied by such fielders. Commonly there will be two or three slips in an attacking field (although there is no limit and a captain may use more), and one or none in a defensive field. A specialist slip fieldsman may be known as a slipper.
Slip catching cradle
a large piece of training equipment, used for practising the quick-reaction catches needed by a fielder in the slips.
Slog
a powerful shot, in which the batsman hits the ball high and long in an attempt to reach the boundary. Regarded as a risky shot, it will often result in a six or a four, but also has a high chance of getting out. Used by batsmen with more strength than skill, or who need to score runs quickly (see death overs).
Slog overs
see death overs.
Slog sweep
a type of slog in which a sweep shot hit hard and in the air, over the same boundary as for a hook. Used exclusively against spin bowlers.
Slower ball
a medium-pace delivery bowled by a fast bowler. Designed to deceive the batsman into playing the ball too early and skying it to a fielder. Has several variations.
Slow left armer
a left-arm, orthodox, finger spin bowler; the left-handed equivalent of an off spinner (see off spin).
Snick (also edge)
a slight deviation of the ball off the edge of the bat.
Snickometer
a television graphic, also available for use by the third umpire, used to assess on a replay whether or not the batsman has snicked the ball. The graphic juxtaposes a slow motion replay with a sound oscilloscope, and is used to assess whether a sharp sound was recorded at the same moment as the ball passes the bat. Sometimes shortened to snicko.
Soft hands (batting) (also soft bat)
To bat with soft hands (or play with a soft bat) is to hold the bat loosely or with relaxed hands so that it absorbs the ball's momentum, meaning that the ball does not rebound sharply off it when the shot is played.
Soft hands (fielding)
To catch the ball with soft hands is to relax the hands and follow through the motion of the ball in the air, allowing the ball to hit the hands gently rather than risking it bouncing out of the hands.
Specialist
a player selected in the team primarily for a single skill, i.e. not an all-rounder or a wicket-keeper-batsman. Such players can be referred to as specialist batsmen, specialist bowlers, or specialist wicketkeepers.
Spectacles
another word for a pair. From the appearance of two ducks on the scorecard as 0–0.
Spell
the number of continuous overs a bowler bowls before being relieved.
Spider Graph (also Wagon Wheel)
a graphical chart that represents the trajectory of the ball from each scoring stroke, including its direction, distance travelled, and (where the technology allows) elevation and bounces. Each scoring stroke is represented by a coloured line, typically colour-coded by the number of runs from the stroke. The Spider Graph is a more detailed version of the traditional Wagon Wheel graphic; some commentators use the two terms interchangeably, while others use Wagon Wheel only for the less-detailed version.
Spin bowling
a style of bowling in which a spin bowler ("spinner") attempts to deceive the batsman by imparting spin on the ball using either their fingers or their wrist. Spin bowling is most effective when the ball is travelling relatively slowly, and so most spinners bowl at a pace between 40 and 55 mph.
Spirit of cricket (or spirit of the game)
A nebulous concept of good conduct, encompassing fair play, sportsmanship, mutual respect and acceptance of the umpires' decisions. Considered an integral part of the sport. Since 2000 a preamble to the Laws of cricket instructs all participants to act within the spirit of cricket.
Splice
the joint between the handle and the blade of a bat; the weakest part of the bat. If the ball hits the splice it is likely to dolly up for an easy catch.
Square
1. of a position on the field, perpendicular to the line of the pitch; the opposite of fine.
2. the area in the middle of the ground where the pitches are prepared.
3. an imaginary line extending the crease to the boundary on the leg side; it is illegal to have more than two fielders behind square.
Square leg
1. a fielding position on the on side approximately at right angles to the batsman
2. a person who fields at that position
Square-cut
A cut shot, played square, i.e. perpendicular to the bowler's delivery.
Stance (also batting stance)
the posture of a batsman holding their bat when facing a delivery.
Stand (noun)
A synonym for partnership.
Standing up
position adopted by a Wicket-keeper, close to the stumps, when a slow (or, occasionally, medium pace) bowler is operating.
Start
a batsman is said to have a start when they successfully avoid being dismissed for very few runs; in Australia, this is generally understood to mean a score of twenty. Once a batsman survives this initial period and becomes established, batting generally becomes easier: they have settled into a rhythm, adapted to the playing conditions, and are less vulnerable. They are then expected to convert their start into a big score.
Steaming in
a bowler taking a fast run-up to bowl is said to be steaming in.
Sticky dog
a drying wicket that is exceedingly difficult to bat on. Uncommon if not non-existent in recent years due to the routine covering of pitches.
Sticky wicket
a difficult wet pitch.
Stock ball
See stock delivery
Stock bowler
a bowler whose role is to restrict scoring rather than to take wickets. Usually called upon to bowl numerous overs at a miserly run rate while strike bowlers rest between spells or attempt to take wickets from the other end.
Stock delivery (or stock ball)
A bowlers standard type of delivery; the one they bowl most frequently. Bowlers usually have a single stock delivery and one or more variations.
Stodger
a batsman who makes it their job to defend and to score at a mediocre rate. This style attracts derogatory comments, but also compliments on resilience and technique.
Stonewaller
An extreme example of a blocker.
Straight
Close to the imaginary line between the two sets of stumps (cf. wicket-to-wicket). Used as an adjective, to indicate the direction of a shot, as a modifier to a fielding position, or to describe the line of a delivery.
Straight bat
the bat when held vertically, or when swung through a vertical arc
Straight up-and-down
pejorative term for a fast- or medium-paced bowler who cannot swing or seam the ball.
Stranded
a batsman is said to be stranded on their score if they narrowly miss scoring a century or similar milestone because their teams innings ends, rather than because they were dismissed.
Strangled
a form of dismissal whereby a batsman, in trying to play a glance very fine to a leg-side ball, gets an inside edge which is caught by the wicket-keeper.
Street
a pitch which is easy for batsmen and difficult for bowlers. Sometimes called a road, highway, and various other synonyms for street.
Strike
the position of the batsman facing the bowler, as opposed to the non-striker. To keep (the) strike is to arrange runs on the last ball of an over so as to also face the first ball of the next. To farm or shepherd the strike is to keep doing this to protect a less skilful batsman.
Strike bowler
an attacking bowler whose role is to take wickets rather than to restrict scoring. Usually a fast bowler or attacking spinner who bowls in short spells to attacking field settings.
Strike rate
1. (batting) a percentage equal to the number of runs scored by a batsman divided by the number of balls faced.
2. (bowling) the average number of deliveries bowled before a bowler takes a wicket.
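Both senses reduce to one-line formulas. A minimal sketch in Python (function names are illustrative):

```python
def batting_strike_rate(runs: int, balls_faced: int) -> float:
    """Sense 1: runs scored per 100 balls faced."""
    return 100 * runs / balls_faced

def bowling_strike_rate(balls_bowled: int, wickets_taken: int) -> float:
    """Sense 2: average number of deliveries bowled per wicket taken."""
    return balls_bowled / wickets_taken

print(batting_strike_rate(75, 50))  # 150.0
print(bowling_strike_rate(300, 6))  # 50.0
```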
Striker
the batsman who faces the deliveries bowled.
Stroke
an attempt by the batsman to play at a delivery.
Stump
one of the three vertical posts making up the wicket. Starting from the off side, they are off stump, middle stump and leg stump.
Stumped
(or stumping) a method of dismissing a batsman, in which the wicketkeeper breaks the batsmans wicket with the ball while the batsman is outside their crease but has not attempted a run
Stumps
1. the plural of stump
2. in a timed match lasting more than one day, the end of a days play if the match is not yet complete. For example, at stumps, Team A were leading by.... See also draw stumps.
Stump-cam
A small television camera inside the middle stump to provide images of play close to the stumps, particularly when a batsman is bowled.
Substitute
a player able to replace another on the fielding side. A substitute fielder may carry out normal fielding duties but is not allowed to bat, bowl or keep wicket.
Sun ball
A method of bowling where the ball is intentionally bowled at a great height and a sluggish pace. This is done to interrupt the batsman's field of vision using the sun's rays, often causing disastrous consequences such as blunt strikes to the head.
Sundry
see extra
Supersub
Under experimental One-Day International rules played between July 2005 and February 2006, the supersub was a substitute player able to come on and replace any player at any point during the game, with the substitute able to take over the substituted player's batting and bowling duties – as distinct from a traditional substitute, who can field but is not permitted to bat, bowl or keep wicket.
Super Over
A method for breaking a tie used in some limited overs matches. Each team plays one more over with nominated batsmen (who may already have been dismissed in the main game), or until two wickets have been lost. The team that scores the most runs in their super over wins. The rules vary between competitions if the scores are still tied after the super over, but most commonly a boundary count is used.
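The tie-break described above can be sketched as a small decision function. This assumes a simple boundary-count rule; real competitions differ on exactly which boundaries count and on what follows a second tie, so treat this as illustrative:

```python
def super_over_result(runs_a: int, runs_b: int,
                      boundaries_a: int, boundaries_b: int) -> str:
    """Decide a tied match after a Super Over, assuming a
    boundary-count tie-breaker (rules vary between competitions)."""
    if runs_a != runs_b:
        return "Team A wins" if runs_a > runs_b else "Team B wins"
    if boundaries_a != boundaries_b:
        return "Team A wins" if boundaries_a > boundaries_b else "Team B wins"
    return "still tied"  # some competitions play another Super Over

print(super_over_result(15, 15, 26, 17))  # Team A wins
```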
Surrey Cut
see French Cut
Sweep
a shot played to a good length slow delivery. The batsman gets down on one knee and "sweeps" the ball to the leg side.
Sweet spot
the small area on the face of the bat that gives maximum power for minimum effort when the ball is hit with it. Also known as the "middle" or "meat" of the bat. A shot that is struck with the sweet spot is referred to as being "well timed" (see timing).
Swerve
less common term for the drift used by spin bowlers
Swing
a bowling style usually employed by fast and medium-pace bowlers. The fielding side will polish the ball on one side of the seam only; as the innings continues, the ball will become worn on one side, but shiny on the other. When the ball is bowled with the seam upright, the air will travel faster over the shiny side than the worn side. This makes the ball swing (curve) in the air. Conventional swing would mean that the ball curves in the air away from the shiny side. (see also reverse swing).
Swish
a rapid or careless attacking stroke by the batsman.
Switch hit
a shot played by a batsman who reverses both their stance and their grip during the bowlers run-up, so that a right-handed batsman would play the shot as an orthodox left-hander. The shot was popularised by England batsman Kevin Pietersen, prompting some discussion about its impact on the rules, e.g. for lbw decisions in which it is necessary to distinguish between off and leg stumps.
T
Tail
common colloquialism for the lower order of a batting line-up. If the tail wags it means the lower order has scored significantly well, perhaps enough to salvage a win. A "long tail" is a batting lineup which contains more bowlers than usual, and is considered weaker than normal.
Tail-ender
a player who bats towards the end of the batting order, usually a specialist bowler or wicket-keeper with relatively poor batting skills.
Tampering
scratching, scuffing, or otherwise unnaturally altering the cricket ball outside of its normal wear and tear. When this is done, it is usually by the fielding team, to give their bowler an edge so that the ball might spin or seam more effectively. This is an illegal act in the game.
Tape ball
An ersatz cricket ball produced by wrapping a tennis ball in electrical tape. Common in informal games on the Indian subcontinent.
Target
The score that the team batting last has to score to beat their opponents. This is one run more than what the team batting first managed; or, in limited overs cricket, an adjusted value determined by a rain rule.
Tea
the second of the two intervals during a full days play is known as the tea interval, due to its timing at about tea-time. In matches lasting only an afternoon, the tea interval is usually taken between innings.
Teesra
A back spin delivery by a finger spin bowler.
Ten-wicket match
A two-innings match in which a bowler takes ten or more wickets in total.
Test cricket (also Test match)
The highest level of the sport. Consists of timed matches that last up to five days, with two innings per side. Played between senior international teams which have been granted Test status.
Textbook shot
A shot played by the batsmen with perfect orthodox technique, exactly as shown in textbooks on batting.
Third man
position behind the wicket-keeper on the off-side, beyond the slip and gully areas
Third umpire
an off-field umpire, equipped with a television monitor, whose assistance the two on-field umpires can seek when in doubt.
Through the gate
"bowled through the gate": dismissed with a ball that passes between the bat and the pads before hitting the wicket.
Throwing
of a bowler, an illegal bowling action in which the arm is straightened during the delivery. Also known as chucking
Tice
An old name for a yorker.
Tickle
An edge to the wicket-keeper or slips. Alternatively a delicate shot usually played to third man or fine leg.
Tie
the result of a cricket match in which the two teams' scores are equal and the team batting last is all out; or, in a limited overs match, the allotted overs have been played. Not to be confused with a draw, in which neither team wins but the scores are not equal.
Tied down
A batsman or batting team having their run-making restricted by the bowling side.
Timber
the (wooden) stumps. Achieving a Bowled dismissal is to have "hit the timber", or simply "Timber!".
Timed match
a match whose duration is based on a set amount of time rather than a set number of overs. Timed matches usually have a draw as a potential result, in addition to the win/loss or tie that can be achieved in limited overs cricket. All first-class cricket is currently played under a timed format.
Timed out
A batsman is ruled timed out following the fall of a wicket if they do not occupy the crease within a set time. The new batsman is then out just as if they had been bowled, stumped, run out or caught.
Timeless match
a match which is played until both teams have completed their allotted innings or overs, regardless of how many days are required. Many early first-class matches were played in a timeless format, but the need for advanced scheduling has meant that timeless matches are seldom played today.
Timing
the art of striking the ball so that it hits the bats sweet spot. A "well-timed" shot imparts great speed to the ball but appears effortless.
Toe-crusher
A yorker bowled with inswing, aimed at the batsmans toes.
Ton
see century.
Top edge
the ball hitting the top edge of the bat when a batsman plays a cross-bat shot
Top order
the batsmen batting in the top 4 in the batting order. These are generally the most skilled batsmen in the team, equipped with the technique and temperament to continue batting for long periods, often for hours or a whole day.
Top spin
forward rotation on the ball, causing it to increase speed immediately after pitching.
Toss
the traditional flipping of a coin to determine which captain will have the right to choose whether to bat or field.
Tour
An organised itinerary of matches requiring travel away from the teams usual base. Used especially in international cricket for the representative team of one nation playing a series of matches in another nation.
Tour match
any match on a tour which does not have full international status; most typically matches played as a warm-up between the travelling international team and a local club or composite team.
Track
another term for the pitch.
Triggered
when the umpire gives a batsman out LBW almost immediately, with little consideration for any factor other than the ball hitting the pads in front of the stumps. The batsman is said to have been "triggered" or "trigd" because the umpire has an itchy trigger finger, ready to raise it to give the batsman out easily.
Trimmer
a high-quality fast bowling delivery, especially one that results in a dismissal of a batsman by removing the bails without hitting the stumps
Trundler
a reliable, steady medium-pace bowler who is not especially good, but is not especially bad either. See also military medium.
Turn
1. the manner in which a batsman grounds the bat at the end of a run, changes direction, and prepares to take another run. The correct execution is for the batsman to turn towards the side of the field the ball was played to, in order to judge whether another run is possible. See Turn blind, Two.
2. the amount a spinning ball changes direction – turns/spins – after hitting the pitch. For example, "That leg spin turned a lot."
Turn blind
a turn made by a batsman facing the side of the field away from where the ball was played; deprecated, as it increases the danger of a run out.
Tweaker
An informal (often affectionate) term for a spin bowler.
Twelfth man
Traditionally, the first substitute player who fields when a member of the fielding side is injured. In Test matches, twelve players are named to a team prior to the match, with the final reduction to eleven occurring immediately prior to play commencing on the first day. This gives the captain some flexibility in team selection, dependent on the conditions (e.g. a spin bowler may be named to the team, but omitted if the captain feels that the pitch is not suitable for spin bowling).
Twenty20 (or T20)
a form of limited overs cricket in which each team has one innings with a maximum length of twenty overs.
Two
batsmans call for a probable two runs, requiring their partner to commit to a fast turn. See No, Push, Yes, Wait.
U
Umpire
An official who enforces the laws and adjudicates play. One umpire stands behind the wicket at the non-strikers end, while a second (usually) stands at square leg, with the positions alternating for each over. The two on-field umpires use a system of arm signals to indicate decisions to the players and the scorer. Televised matches usually also have a third umpire to adjudicate on replays and the umpire decision review system.
Umpire Decision Review System (UDRS)
(or Decision Review System, DRS) Official system to challenge certain decisions made by an umpire. The third umpire then reviews the decision using various technological aids, such as slow motion television replays, ball tracking, a snickometer etc. A batsman may challenge a decision of out, and the captain of the fielding side may challenge a not out decision. Teams are limited in the number of unsuccessful reviews per innings; once they have reached that limit they are permitted no further reviews. The umpires themselves can review run out, caught and no ball decisions, or whether a shot has scored a boundary. Only available in televised games, because the necessary equipment is provided by the broadcaster.
Umpires call
Outcome of the Umpire Decision Review System if the third umpire finds the evidence inconclusive, or within the stated margin of error. The original umpires decision stands, but the review does not count towards the team limit on unsuccessful reviews.
Uncapped
a player who has never played at the international level (see "cap (sense 2)".)
Underarm
the action of bowling with the arm swinging from behind the body in a downswing arc and then releasing the ball on the up swing without bending the elbow. This type of bowling is now illegal in formal cricket, but commonly played in informal types of cricket. Compare with overarm.
Under-spin (also back-spin)
backward rotation on the ball, causing it to decrease speed immediately after pitching.
Unorthodox
1. a shot played not in the accepted "textbook" manner, often with a degree of improvisation.
2. a left arm spin bowler who spins the ball with their wrist in a similar manner to a right-arm leg-spin bowler. This imparts spin in the same direction as a right-handed off-spin bowler. See: Left-arm unorthodox spin.
Unplayable delivery
a ball that is impossible for the batsman to deal with; used to imply that the batsman was out more through the skill of the bowler than through their own error.
Upper Cut
A typical shot played against a short ball or bouncer. Here the batsman makes a cut above their head and the ball usually goes to the third-man area. First used in International cricket by Sachin Tendulkar
Uppish
A shot that gains a risky amount of height, opening up the possibility of the batsman being caught.
V
V
See Vee.
Variation
Any delivery by the bowler which is not their stock ball. Used to make the bowling less predictable, which may surprise or deceive the batsman.
Vee
1. an unmarked, loosely defined V-shaped area on the ground, with the batsman standing at the apex. The two sides of the "V" go through the mid-off and mid-on regions, similar to the fair area of a baseball field, or the area where fielders "on the drive" are placed. Most shots played into this region are straight-batted shots, which don't involve the risks associated with playing across the line.
2. the V-shaped joint between the lower end of the handle and the blade of the bat (see also splice).
Very fine leg
see long stop
Village
Adjective, roughly equivalent to amateurish, indicating that the standard of play or organisation is typical for village cricket. Pejorative if applied to a professional.
Village cricket
Amateur cricket, particularly in England & Wales, with players typically drawn from a single village or suburb. The level of organisation varies: sometimes games are part of an official league, sometimes only friendly matches. The standard of play is below that of club cricket and often includes beginners to the sport.
W
Waft
A loose non-committal shot, usually played to a ball pitched short of length and well wide of the off stump.
Wag
when the tail (the lower order of the batting line-up) scores more runs than it is expected to, it is said to have wagged (as an extension of the animal tail metaphor)
Wagon wheel
1. a graphic which divides the field into six sectors (looking like the spoked wheel of a wagon), indicating how many runs a batsman has scored with shots into each sector.
2. see spider graph
Wait
a call by a batsman which defers the decision for a few seconds, before being followed by a call of yes or no. Used when it is unclear whether a fielder will reach the ball before it passes them. The batting partner should not begin a run, but remain ready to do so (including backing up) until the follow-up call.
Walk
of a batsman, to walk off the pitch, knowing or believing that they are out, rather than waiting for an umpire to give them out (the umpire is required to intervene if they were not about to give an out decision). Generally considered to be sportsmanlike behaviour, but rarer in international cricket than domestic cricket.
Walking in
fielders will, unless fielding close in, usually "walk in" a few paces just before the bowler bowls in order to be alert if ball is hit in their direction.
Walking wicket
a very poor batsman, particularly a tail-ender who is a specialist bowler.
Wash out
a cricket match, or a specific day of a cricket match, which is abandoned with either no play or very little play due to rain.
Wearing wicket
On a turf pitch, which typically has dry or dead grass on top, the soil can be loosened by the players stepping on it during play, and rough, abrasive patches can form. As the pitch wears (becomes worn), balls that land in these rough areas grip the surface more and turn more drastically, making the pitch more helpful to spin bowling. Uneven bounce can also result.
Whites
predominantly white or cream coloured clothing worn by players during matches with a red ball. Usually consists of long trousers, a short- or long-sleeved polo shirt, and optionally a knitted jumper or sleeveless sweater. Professional limited overs matches with a white ball instead use coloured uniforms, known as pyjamas.
Wicket
1. a set of stumps and bails;
2. the pitch; or
3. the dismissal of a batsman. Numerous phrases exist using this definition, such as "throwing away one's wicket", which means to get out too easily, or "valuing/putting a price on one's wicket", which is largely the opposite.
Wicket-keeper
the player on the fielding side who stands immediately behind the batting end wicket. A specialist position, used throughout the game. The wicket-keeper is the only player in the fielding side allowed to wear gloves and external leg guards, under Law 40.
Wicket-keeper-batsman
a wicket-keeper who is also a very good batsman, capable of opening the batting or making good scores in the top order.
Wicket maiden
a maiden over in which the bowler also dismisses a batsman. A double wicket maiden if two wickets are taken, and so on.
Wicket-to-wicket (or stump-to-stump)
an imaginary line connecting the two wickets, also a style of straight, un-varied bowling.
Wickets in hand
The number of wickets remaining in the innings for the batting side. For example, a team which has lost four of its ten wickets is said to have six wickets in hand.
Wide
a delivery that passes illegally wide of the wicket, scoring an extra for the batting side. A wide does not count as one of the six valid deliveries that must be made in each over – an extra ball must be bowled for each wide.
Wisden
Wisden Cricketers' Almanack, or simply Wisden, colloquially the (Yellow) Bible of Cricket, is a cricket reference book published annually since 1864 in the UK. An Australian edition was launched in 1998 and an Indian edition in 2012.
Women's cricket
Cricket played between teams consisting solely of women. First recorded in 1745, it was administered separately from men's cricket until 2005. There are almost no differences in the rules.
Worm
a plot of either the cumulative runs scored, or the progressive run rate achieved by a team (the y-axis) against the over number (x-axis) in limited-overs cricket.
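A worm is straightforward to draw from over-by-over data. A minimal sketch using matplotlib, with invented scores purely for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical cumulative runs at the end of each over (illustrative data).
cumulative_runs = [4, 9, 17, 22, 30, 38, 47, 51, 60, 72]
overs = range(1, len(cumulative_runs) + 1)

plt.plot(overs, cumulative_runs, marker="o", label="Team A")
plt.xlabel("Over")
plt.ylabel("Cumulative runs")
plt.title("Worm: cumulative runs per over")
plt.legend()
plt.show()
```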
Wrist spin
a form of spin bowling in which the ball is made to rotate by the position and/or movement of the bowler's wrist (contrast with finger spin). For a right-handed bowler this produces leg spin, whereas the same technique by a left-handed bowler produces left arm unorthodox spin.
Wrong foot
when the bowling foot is the front foot the delivery is said to be bowled off the wrong foot. Such a bowler is said to bowl off the wrong foot.
Wrong footed
when the batsman is initially moving either back or forward to a delivery and then has to suddenly change which foot they use (back or front), they are said to have been wrong-footed. Usually applies to spin bowling.
Wrong un
another name for a googly; most common in Australia.
Y
Yes
batsmans call for a run. See No, Push, Two, Wait.
(The) Yips
A psychological condition, occasionally experienced by bowlers suffering from a loss of confidence, whereby the bowler is unable to sufficiently relax when delivering the ball – often holding the ball too long before release, losing flight, turn and accuracy in the process. Bowlers have been known to suffer from the yips for as little as a few overs, or over the course of an entire season or more.
Yorker
a (usually fast) delivery that is pitched very close to the batsman </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Minutes and explain in detail? [/INST] Minutes, also known as minutes of meeting (abbreviation MoM), protocols or, informally, notes, are the instant written record of a meeting or hearing. They typically describe the events of the meeting and may include a list of attendees, a statement of the activities considered by the participants, and related responses or decisions for the activities.
Etymology
The name "minutes" possibly derives from the Latin phrase minuta scriptura (literally "small writing") meaning "rough notes".
Creation
Minutes may be created during the meeting by a typist or court reporter, who may use shorthand notation and then prepare the minutes and issue them to the participants afterwards. Alternatively, the meeting can be audio recorded, video recorded, or a group's appointed or informally assigned secretary may take notes, with minutes prepared later. Many government agencies use minutes recording software to record and prepare all minutes in real-time.
Purpose
Minutes are the official written record of the meetings of an organization or group. They are not transcripts of those proceedings. Using Robert's Rules of Order Newly Revised (RONR), the minutes should contain mainly a record of what was done at the meeting, not what was said by the members. The organization may have its own rules regarding the content of the minutes.
For most organizations or groups, it is important for the minutes to be terse and only include a summary of the decisions. A verbatim report (transcript) is typically not useful. Unless the organization's rules require it, a summary of the discussions in a meeting is neither necessary nor appropriate. The minutes of certain groups, such as a corporate board of directors, must be kept on file and are important legal documents. Minutes from board meetings are kept separately from minutes of general membership meetings within the same organization. Also, minutes of executive sessions may be kept separately. Committees are not required to keep formal minutes although less formal notes may be taken. For committees, their formal records are the reports submitted to their parent body.
Format
The format of the minutes can vary depending on the standards established by an organization, although there are general guidelines. Robert's Rules of Order contains a sample set of minutes. Generally, minutes begin with the name of the body holding the meeting (e.g., a board) and may also include the place, date, list of people present, and the time that the chair called the meeting to order. Since the primary function of minutes is to record the decisions made, all official decisions must be included. If a formal motion is proposed and seconded, then (regardless of whether it passes) this is recorded. The voting tally may also be included. The part of the minutes dealing with a routine motion might note merely that a particular motion was "moved by Ann and passed". It is not strictly necessary to include the name of the person who seconds a motion. Where a tally is included, it is sufficient to record the number of people voting for and against a motion, but requests by participants to note their votes by name may be allowed. If a decision is made by roll-call vote, then all of the individual votes are recorded by name. If it is made by general consent without a formal vote, then this fact may be recorded.
The minutes may end with a note of the time that the meeting was adjourned. Minutes are sometimes submitted by the person who is responsible for them (often the secretary) at a subsequent meeting for review. The traditional closing phrase is "Respectfully submitted" (although this is no longer common), followed by the officer's signature, his or her typed (or printed) name, and his or her title. Usually, one of the first items in an order of business or an agenda for a meeting is the reading and approval of the minutes from the previous meeting. If the members of the group agree (usually by unanimous consent) that the written minutes reflect what happened at the previous meeting, then they are approved, and the fact of their approval is recorded in the minutes of the current meeting. If there are significant errors or omissions, then the minutes may be redrafted and submitted again at a later date. Minor changes may be made immediately using the normal amendment procedures, and the amended minutes may be approved "as amended". It is normally appropriate to send a draft copy of the minutes to all the members in advance of the meeting so that the meeting is not delayed by a reading of the draft.
See also
Diary
Gazette
</s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Macrostomia and explain in detail? [/INST] Macrostomia refers to a mouth that is unusually wide. The term is from the Greek prefix makro- meaning "large" and from Greek στόμα, "mouth". Macrostomia is characterized as a physical abnormality that causes clefts to form on the face of affected individuals. These clefts can form on either or both sides of the face, but they are most commonly seen on the right cheek and have a higher rate of occurrence in males. Macrostomia is very rare, occurring on average only once in every 150,000 to 300,000 live births. It is unusual for macrostomia to occur on its own, and it is included as a symptom of many diseases, including craniofacial microsomia. The clefts result from improper development and fusion of the mandibular and maxillary processes. The clefts cause problems with facial muscle development. The origin of macrostomia is not yet fully understood: it could have multiple causes.
Phenotypic variations
There are four distinct variations of macrostomia. The classifications are: a complete lateral facial cleft, simple macrostomia, macrostomia with diastasis of the facial musculature, and isolated facial musculature diastasis. Each has a different physical appearance with varying levels of severity. The cleft associated with macrostomia results from improper or failed fusion of the mandibular and maxillary processes during embryonic development. This can lead to a variety of abnormalities involving skin, subcutaneous tissue, facial muscles, and the mucous membrane. The severity of each abnormality can vary from minor to severe. Environmental contaminants may play a role in causing macrostomia. Many affected individuals were found in Lagos, an industrial area of Nigeria, where water supplies are known to be contaminated by improper disposal of industrial and domestic waste.
Complete lateral facial cleft
The facial cleft runs from the corner of the mouth towards the bottom of the ear. The outside of the ear on the affected side of the face appears as normal and a region of soft tissue connects the cleft to the right lateral posterior hard palate. Internally there is no soft palate.
Simple macrostomia
This version of macrostomia is less severe because it does not affect the facial muscles and is not associated with any soft tissue or bone deformities. A small cleft(s) extends from the mouth and can be repaired surgically.
Diastasis of the facial musculature
Clefts in this variant are slightly more severe than the ones seen in simple macrostomia. It also does not involve bone deformities, but it does include minor soft tissue deformities. The defining feature is muscle diastasis, which is separation of the masseter </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Anisakis and explain in detail? [/INST] Anisakis (a·nuh·saa·keez) is a genus of parasitic nematodes that have life cycles involving fish and marine mammals. They are infective to humans and cause anisakiasis. People who produce immunoglobulin E in response to this parasite may subsequently have an allergic reaction, including anaphylaxis, after eating fish infected with Anisakis species.
Etymology
The genus Anisakis was defined in 1845 by Félix Dujardin as a subgenus of the genus Ascaris Linnaeus, 1758. Dujardin did not make explicit the etymology, but stated that the subgenus included the species in which the males have unequal spicules ("mâles ayant des spicules inégaux"); thus, the name Anisakis is based on anis- (Greek prefix for different) and akis (Greek for spine or spicule). Two species were included in the new subgenus, Ascaris (Anisakis) distans Rudolphi, 1809 and Ascaris (Anisakis) simplex Rudolphi, 1809.
Life cycle
Anisakis species have complex life cycles which pass through a number of hosts through the course of their lives. Eggs hatch in seawater, and larvae are eaten by crustaceans, usually euphausids. The infected crustaceans are subsequently eaten by fish or squid, and the nematodes burrow into the wall of the gut and encyst in a protective coat, usually on the outside of the visceral organs, but occasionally in the muscle or beneath the skin. The life cycle is completed when an infected fish is eaten by a marine mammal, such as a whale, seal, sea lion, dolphin or another animal like a seabird or shark. The nematode excysts in the intestine, feeds, grows, mates, and releases eggs into the seawater in the host's feces. As the gut of a marine mammal is functionally very similar to that of a human, Anisakis species are able to infect humans who eat raw or undercooked fish.
Reproduction
Sexual reproduction occurs once the larvae have reached a definitive host and completed their final molt, at which point sexual organs are developed. Females are capable of producing 2.6 million eggs, depending on species and size at sexual maturity. The known diversity of the genus has increased greatly since the mid-1980s with the advent of modern genetic techniques in species identification. Each final host species was discovered to have its own biochemically and genetically identifiable "sibling species" of Anisakis, which is reproductively isolated. This finding has allowed the proportion of different sibling species in a fish to be used as an indicator of population identity in fish stocks.
Morphology
Anisakis share the common features of all nematodes: the vermiform body plan, round in cross section, and a lack of segmentation. The body cavity is reduced to a narrow pseudocoel. The mouth is located anteriorly and surrounded by projections used in feeding and sensation, with the anus slightly offset from the posterior. The squamous epithelium secretes a layered cuticle to protect the body from digestive acids.
As with all parasites with a complex life cycle involving a number of hosts, details of the morphology vary depending on the host and life cycle stage. In the stage which infects fish, Anisakis species are found in a distinctive "watch-spring coil" shape. They are roughly 2 cm long when uncoiled. When in the final host, anisakids are longer, thicker, and more sturdy, to deal with the hazardous environment of a mammalian gut.
Health implications
Anisakids pose a risk to human health through intestinal infection with worms from the eating of underprocessed fish, and through allergic reactions to chemicals left by the worms in fish flesh.
Anisakiasis
Anisakiasis is a human parasitic infection of the gastrointestinal tract caused by the consumption of raw or undercooked seafood containing larvae of the nematode Anisakis simplex. The first case of human infection by a member of the family Anisakidae was reported in the Netherlands by Van Thiel, who described the presence of a marine nematode in a patient suffering from acute abdominal pain. It is frequently reported in areas of the world where fish is consumed raw, lightly pickled, or salted. The areas of highest prevalence are Scandinavia (from cod livers), Japan (after eating sashimi), the Netherlands (by eating infected fermented herrings (maatjes)), Spain (from eating anchovies and other fish marinated in escabeche), and along the Pacific coast of South America (from eating ceviche). The frequency in the United States is unknown, because the disease is not reportable and can go undetected or be mistaken for other illnesses. Anisakiasis was first recognized in the 1960s.
During the 1970s, about 10 cases per year were reported in the literature. The frequency is probably much higher, due to home preparation of raw or undercooked fish dishes. In Japan, more than 1,000 cases are reported annually. Development of better diagnostic tools and greater awareness has led to more frequent reporting of anisakiasis.
Within a few hours of ingestion, the parasitic worm tries to burrow through the intestinal wall, but since it cannot penetrate it, it gets stuck and dies. The presence of the parasite triggers an immune response; immune cells surround the worms, forming a ball-like structure that can block the digestive system, causing severe abdominal pain, malnutrition, and vomiting. Occasionally, the larvae are regurgitated. If the larvae pass into the bowel or large intestine, a severe eosinophilic granulomatous response may also occur one to two weeks following infection, causing symptoms mimicking Crohn's disease. Diagnosis can be made by gastroscopic examination, during which the 2-cm larvae are visually observed and removed, or by histopathologic examination of tissue removed at biopsy or during surgery.
Raising consumer and producer awareness about the existence of anisakid worms in fish is a critical and effective prevention strategy. Anisakiasis can be easily prevented by adequate cooking at temperatures greater than 60 °C or freezing. The FDA recommends all shellfish and fish intended for raw consumption be blast frozen to −35 °C or below for 15 hours or be regularly frozen to −20 °C or below for seven days. Salting and marinating will not necessarily kill the parasites, as in Italy where two-thirds of cases were attributed to anchovies marinated in lemon or vinegar. Humans are thought to be more at risk of anisakiasis from eating wild fish rather than farmed fish. Many countries require all types of fish with potential risk intended for raw consumption to be previously frozen to kill parasites. The mandate to freeze herring in the Netherlands has virtually eliminated human anisakiasis in the Netherlands.
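As a rough illustration of the freezing parameters just cited (and nothing more official than that), the sketch below encodes them as a simple check. The function name and interface are this document's invention; the temperature and duration thresholds are the FDA figures quoted above, and real guidance includes options and caveats not modeled here.

```python
def meets_fda_freezing_guidance(temp_c: float, hours: float, blast_frozen: bool) -> bool:
    """Check a freezing record against the parasite-destruction parameters
    cited above: blast freezing to -35 °C or below for 15 hours, or regular
    freezing to -20 °C or below for seven days (168 hours).
    Illustrative sketch only, not a compliance tool."""
    if blast_frozen:
        return temp_c <= -35.0 and hours >= 15.0
    return temp_c <= -20.0 and hours >= 7 * 24.0

# Fish held at -22 °C for 8 days qualifies under the regular-freezing rule:
assert meets_fda_freezing_guidance(-22.0, 8 * 24.0, blast_frozen=False)
```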
Allergic reactions
Even when the fish is thoroughly cooked, Anisakis larvae pose a health risk to humans. Anisakids (and related species such as the sealworm, Pseudoterranova species, and the codworm Hysterothylacium aduncum) release a number of biochemicals into the surrounding tissues when they infect a fish. They are also often consumed whole, accidentally, inside a fillet of fish.
Acute allergic manifestations, such as urticaria and anaphylaxis, may occur with or without accompanying gastrointestinal symptoms. The frequency of allergic symptoms in connection with fish ingestion has led to the concept of gastroallergic anisakiasis, an acute IgE-mediated generalized reaction. Occupational allergy, including asthma, conjunctivitis, and contact dermatitis, has been observed in fish processing workers.
Sensitization and allergy are determined by skin-prick test and detection of specific antibodies against Anisakis. Hypersensitivity is indicated by a rapid rise in levels of IgE in the first several days following consumption of infected fish. A 2018 review of cases in France has shown that allergic cases were more commonly found, although the number of human Anisakis infections was decreasing.
Treatment
For the worm, humans are a dead-end host. Anisakis and Pseudoterranova larvae cannot survive in humans, and eventually die. In some cases, the infection resolves with only symptomatic treatment. In other cases, however, infection can lead to small bowel obstruction, which may require surgery, although treatment with albendazole alone (avoiding surgery) has been reported to be successful. Intestinal perforation (an emergency) is also possible.
Occurrence
Larval anisakids are common parasites of marine and anadromous fish (e.g. salmon, sardine), and can also be found in squid and cuttlefish </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Muscle weakness and explain in detail? [/INST] Muscle weakness is a lack of muscle strength. Its causes are many and can be divided into conditions that have either true or perceived muscle weakness. True muscle weakness is a primary symptom of a variety of skeletal muscle diseases, including muscular dystrophy and inflammatory myopathy. It occurs in neuromuscular junction disorders, such as myasthenia gravis. Muscle weakness can also be caused by low levels of potassium and other electrolytes within muscle cells. It can be temporary or long-lasting (from seconds or minutes to months or years). The term myasthenia is from my- from Greek μυο meaning "muscle" + -asthenia ἀσθένεια meaning "weakness".
Types
Neuromuscular fatigue can be classified as either "central" or "peripheral" depending on its cause. Central muscle fatigue manifests as an overall sense of energy deprivation, while peripheral muscle fatigue manifests as a local, muscle-specific inability to do work.
Neuromuscular fatigue
Nerves control the contraction of muscles by determining the number, sequence, and force of muscular contraction. When a nerve experiences synaptic fatigue it becomes unable to stimulate the muscle that it innervates. Most movements require a force far below what a muscle could potentially generate, and, barring pathology, neuromuscular fatigue is seldom an issue.

For extremely powerful contractions that are close to the upper limit of a muscle's ability to generate force, neuromuscular fatigue can become a limiting factor in untrained individuals. In novice strength trainers, the muscle's ability to generate force is most strongly limited by the nerve's ability to sustain a high-frequency signal. After an extended period of maximum contraction, the nerve's signal reduces in frequency and the force generated by the contraction diminishes. There is no sensation of pain or discomfort; the muscle appears to simply 'stop listening' and gradually ceases to move, often lengthening. As there is insufficient stress on the muscles and tendons, there will often be no delayed onset muscle soreness following the workout. Part of the process of strength training is increasing the nerve's ability to generate sustained, high-frequency signals which allow a muscle to contract with its greatest force. It is this "neural training" that causes several weeks' worth of rapid gains in strength, which level off once the nerve is generating maximum contractions and the muscle reaches its physiological limit. Past this point, training effects increase muscular strength through myofibrillar or sarcoplasmic hypertrophy, and metabolic fatigue becomes the factor limiting contractile force.
Central fatigue
Central fatigue is a reduction in the neural drive or nerve-based motor command to working muscles that results in a decline in force output. It has been suggested that the reduced neural drive during exercise may be a protective mechanism to prevent organ failure if the work were continued at the same intensity. There has been a great deal of interest in the role of serotonergic pathways for several years because the concentration of serotonin in the brain increases with motor activity. During motor activity, serotonin released in synapses that contact motoneurons promotes muscle contraction. During high levels of motor activity, the amount of serotonin released increases and a spillover occurs. Serotonin binds to extrasynaptic receptors located on the axon initial segment of motoneurons, with the result that nerve impulse initiation, and thereby muscle contraction, is inhibited.
Peripheral muscle fatigue
Peripheral muscle fatigue during physical work is an inability for the body to supply sufficient energy or other metabolites to the contracting muscles to meet the increased energy demand. This is the most common case of physical fatigue, affecting a national average of 72% of adults in the work force in 2002. This causes contractile dysfunction that manifests in the eventual reduction or lack of ability of a single muscle or local group of muscles to do work. The insufficiency of energy, i.e. sub-optimal aerobic metabolism, generally results in the accumulation of lactic acid and other acidic anaerobic metabolic by-products in the muscle, causing the stereotypical burning sensation of local muscle fatigue, though recent studies have indicated otherwise, actually finding that lactic acid is a source of energy.

The fundamental difference between the peripheral and central theories of muscle fatigue is that the peripheral model of muscle fatigue assumes failure at one or more sites in the chain that initiates muscle contraction. Peripheral regulation therefore depends on the localized metabolic chemical conditions of the local muscle affected, whereas the central model of muscle fatigue is an integrated mechanism that works to preserve the integrity of the system by initiating muscle fatigue through muscle derecruitment, based on collective feedback from the periphery, before cellular or organ failure occurs. Therefore, the feedback that is read by this central regulator could include chemical and mechanical as well as cognitive cues. The significance of each of these factors will depend on the nature of the fatigue-inducing work that is being performed.

Though not universally used, "metabolic fatigue" is a common alternative term for peripheral muscle weakness, because of the reduction in contractile force due to the direct or indirect effects of the reduction of substrates or accumulation of metabolites within the muscle fiber. This can occur through a simple lack of energy to fuel contraction, or through interference with the ability of Ca2+ to stimulate actin and myosin to contract.
Lactic acid hypothesis
It was once believed that lactic acid build-up was the cause of muscle fatigue. The assumption was that lactic acid had a "pickling" effect on muscles, inhibiting their ability to contract. The impact of lactic acid on performance is now uncertain; it may assist or hinder muscle fatigue.

Produced as a by-product of fermentation, lactic acid can increase intracellular acidity of muscles. This can lower the sensitivity of contractile apparatus to calcium ions (Ca2+) but also has the effect of increasing cytoplasmic Ca2+ concentration through an inhibition of the chemical pump that actively transports calcium out of the cell. This counters inhibiting effects of potassium ions (K+) on muscular action potentials. Lactic acid also has a negating effect on the chloride ions in the muscles, reducing their inhibition of contraction and leaving K+ as the only restricting influence on muscle contractions, though the effects of potassium are much less than if there were no lactic acid to remove the chloride ions. Ultimately, it is uncertain if lactic acid reduces fatigue through increased intracellular calcium or increases fatigue through reduced sensitivity of contractile proteins to Ca2+.
Pathophysiology
Muscle cells work by detecting a flow of electrical impulses from the brain which signals them to contract through the release of calcium by the sarcoplasmic reticulum. Fatigue (reduced ability to generate force) may occur due to the nerve, or within the muscle cells themselves. New research from scientists at Columbia University suggests that muscle fatigue is caused by calcium leaking out of the muscle cell. This causes there to be less calcium available for the muscle cell. In addition, an enzyme is proposed to be activated by this released calcium, which eats away at muscle fibers.

Substrates within the muscle generally serve to power muscular contractions. They include molecules such as adenosine triphosphate (ATP), glycogen and creatine phosphate. ATP binds to the myosin head and causes the 'ratcheting' that results in contraction according to the sliding filament model. Creatine phosphate stores energy so ATP can be rapidly regenerated within the muscle cells from adenosine diphosphate (ADP) and inorganic phosphate ions, allowing for sustained powerful contractions that last 5–7 seconds </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Accident and explain in detail? [/INST] An accident is an unintended, normally unwanted event that was not directly caused by humans. The term accident implies that nobody should be blamed, but the event may have been caused by unrecognized or unaddressed risks. Most researchers who study unintentional injury avoid using the term accident and focus on factors that increase risk of severe injury and that reduce injury incidence and severity. For example, when a tree falls down during a wind storm, its fall may not have been caused by humans, but the tree's type, size, health, location, or improper maintenance may have contributed to the result. Most car wrecks are not true accidents; however, English speakers started using that word in the mid-20th century as a result of media manipulation by the US automobile industry.
Types
Physical and non-physical
Physical examples of accidents include unintended motor vehicle collisions, falls, being injured by touching something sharp or hot, or bumping into something while walking.
Non-physical examples are unintentionally revealing a secret or otherwise saying something incorrectly, accidental deletion of data, or forgetting an appointment.
Accidents by activity
Accidents during the execution of work or arising out of it are called work accidents. According to the International Labour Organization (ILO), more than 337 million accidents happen on the job each year, resulting, together with occupational diseases, in more than 2.3 million deaths annually.
In contrast, leisure-related accidents are mainly sports injuries.
Accidents by vehicle
Vehicle collisions are not usually accidents; most result from preventable behavior such as drunk driving and intentionally driving too fast </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Pasteurellosis and explain in detail? [/INST] Pasteurellosis is an infection with a species of the bacterial genus Pasteurella, which is found in humans and other animals.
Pasteurella multocida (subspecies P. m. septica and P. m. multocida) is carried in the mouth and respiratory tract of various animals, including pigs. It is a small, Gram-negative bacillus with bipolar staining by Wayson stain. In animals, it can cause fulminant septicaemia (chicken cholera), but is also a common commensal.
Until taxonomic revision in 1999, Mannheimia spp. were classified as Pasteurella spp., and infections by organisms now called Mannheimia spp., as well as by organisms now called Pasteurella spp., were designated as pasteurellosis. The term "pasteurellosis" is often still applied to mannheimiosis, although such usage has declined.
Types
The several forms of the infection are:
Skin/subcutaneous tissue disease is a septic phlegmon that develops classically in the hand and forearm after a cat bite. Inflammatory signs develop very rapidly; within 1 or 2 hours, edema, severe pain, and serosanguineous exudate appear. Fever, moderate or very high, can be seen, along with vomiting, headache, and diarrhea. Lymphangitis is common. Complications are possible, in the form of septic arthritis, osteitis, or evolution to chronicity.
Sepsis is very rare, but can be as fulminant as septicaemic plague, with high fever, rigors, and vomiting, followed by shock and coagulopathy.
Pneumonia disease is also rare and appears in patients with some chronic pulmonary pathology. It usually presents as bilateral consolidating pneumonia, sometimes very severe.
As a zoonosis, pasteurellosis can be transmitted to humans through cats.

Other locations of infection are possible, such as septic arthritis, meningitis, and acute endocarditis, but are very rare.
Diagnosis
Diagnosis is made with isolation of Pasteurella multocida in a normally sterile site (blood, pus, or cerebrospinal fluid).
Treatment
As the infection is usually transmitted to humans through animal bites, antibiotics usually treat the infection, but medical attention should be sought if the wound swells severely. Pasteurellosis is usually treated with high-dose penicillin if severe. Either tetracycline or chloramphenicol provides an alternative in beta-lactam-intolerant patients </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Failed back syndrome and explain in detail? [/INST] Failed back syndrome or post-laminectomy syndrome is a condition characterized by chronic pain following back surgeries. Many factors can contribute to the onset or development of FBS, including residual or recurrent spinal disc herniation, persistent post-operative pressure on a spinal nerve, altered joint mobility, joint hypermobility with instability, scar tissue (fibrosis), depression, anxiety, sleeplessness, spinal muscular deconditioning and even Cutibacterium acnes infection. An individual may be predisposed to the development of FBS due to systemic disorders such as diabetes, autoimmune disease and peripheral vascular disease.
Common symptoms associated with FBS include diffuse, dull and aching pain involving the back or legs. Abnormal sensibility may include sharp, pricking, and stabbing pain in the extremities. The term "post-laminectomy syndrome" is used by some doctors to indicate the same condition as failed back syndrome.
The treatments of post-laminectomy syndrome include physical therapy, microcurrent electrical neuromuscular stimulator, minor nerve blocks, transcutaneous electrical nerve stimulation (TENS), behavioral medicine, non-steroidal anti-inflammatory (NSAID) medications, membrane stabilizers, antidepressants, spinal cord stimulation, and intrathecal morphine pump. Use of epidural steroid injections may be minimally helpful in some cases. The targeted anatomic use of a potent anti-inflammatory anti-TNF therapeutics is being investigated.
The number of spinal surgeries varies around the world. The United States and the Netherlands report the highest number of spinal surgeries, while the United Kingdom and Sweden report the fewest. Recently, there have been calls for more aggressive surgical treatment in Europe. Success rates of spinal surgery vary for many reasons.
Cause
Patients who have undergone one or more operations on the lumbar spine and continue to experience pain afterward can be divided into two groups. The first group comprises those in whom surgery was not actually indicated or the surgery performed was not likely to achieve the desired result, and those in whom surgery was indicated but technically did not achieve the intended result. Patients whose pain complaints are of a radicular nature have a better chance for a good outcome than those whose pain complaints are limited to pain in the back.
The second group includes patients who had incomplete or inadequate operations. Lumbar spinal stenosis may be overlooked, especially when it is associated with disc protrusion or herniation. Removal of a disc, while not addressing the underlying presence of stenosis, can lead to disappointing results. Occasionally operating on the wrong level occurs, as does failure to recognize an extruded or sequestered disc fragment. Inadequate or inappropriate surgical exposure can lead to other problems in not getting to the underlying pathology. Hakelius reported a 3% incidence of serious nerve root damage.

In 1992, Turner et al. published a survey of 74 journal articles which reported the results after decompression for spinal stenosis. Good to excellent results were on average reported by 64% of the patients. There was, however, a wide variation in outcomes reported. There was a better result in patients who had a degenerative spondylolisthesis. A similarly designed study by Mardjekto et al. found that a concomitant spinal arthrodesis (fusion) had a greater success rate. Herron and Trippi evaluated 24 patients, all with degenerative spondylolisthesis treated with laminectomy alone. At follow-up varying between 18 and 71 months after surgery, 20 out of the 24 patients reported a good result. Epstein reported on 290 patients treated over a 25-year period. Excellent results were obtained in 69% and good results in 13%. These optimistic reports do not correlate with "return to competitive employment" rates, which for the most part are dismal in most spinal surgery series.

In the past two decades there has been a dramatic increase in fusion surgery in the U.S.: in 2001 over 122,000 lumbar fusions were performed, a 22% increase from 1990 in fusions per 100,000 population, increasing to an estimate of 250,000 in 2003, and 500,000 in 2006. In 2003, the national bill for the hardware for fusion alone was estimated to have soared to $2.5 billion a year.
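To put the pace of that growth in concrete terms, the figures above imply that annual fusion counts roughly doubled every three years. A minimal arithmetic sketch (Python), using only the numbers quoted above:

```python
# Implied compound annual growth rate (CAGR) of U.S. lumbar fusion counts.
def cagr(start: float, end: float, years: float) -> float:
    return (end / start) ** (1 / years) - 1

print(f"2001 -> 2006: {cagr(122_000, 500_000, 5):.0%} per year")  # ~33% per year
print(f"2003 -> 2006: {cagr(250_000, 500_000, 3):.0%} per year")  # ~26% per year
```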
For patients with continued pain after surgery which is not due to the above complications or conditions, interventional pain physicians speak of the need to identify the "pain generator", i.e. the anatomical structure responsible for the patient's pain. To be effective, the surgeon must operate on the correct anatomic structure, but it is often not possible to determine the source of the pain. The reason for this is that many patients with chronic pain often have disc bulges at multiple spinal levels, and the physical examination and imaging studies are unable to pinpoint the source of pain. In addition, spinal fusion itself, particularly if more than one spinal level is operated on, may result in "adjacent segment degeneration". This is thought to occur because the fused segments may result in increased torsional and stress forces being transmitted to the intervertebral discs located above and below the fused vertebrae. This pathology is one reason behind the development of artificial discs as a possible alternative to fusion surgery. But fusion surgeons argue that spinal fusion is more time-tested, and artificial discs contain metal hardware that is unlikely to last as long as biological material without shattering and leaving metal fragments in the spinal canal. These represent different schools of thought. (See discussion on disc replacement infra.)
Another highly relevant consideration is the increasing recognition of the importance of "chemical radiculitis" in the generation of back pain. A primary focus of surgery is to remove "pressure" or reduce mechanical compression on a neural element: either the spinal cord, or a nerve root. But it is increasingly recognized that back pain, rather than being solely due to compression, may instead entirely be due to chemical inflammation of the nerve root. It has been known for several decades that disc herniations result in a massive inflammation of the associated nerve root. In the past five years increasing evidence has pointed to a specific inflammatory mediator of this pain. This inflammatory molecule, called tumor necrosis factor-alpha (TNF), is released not only by the herniated or protruding disc, but also in cases of disc tear (annular tear), by facet joints, and in spinal stenosis. In addition to causing pain and inflammation, TNF may also contribute to disc degeneration. If the cause of the pain is not compression, but rather is inflammation mediated by TNF, then this may well explain why surgery might not relieve the pain, and might even exacerbate it, resulting in FBSS.
Role of sacroiliac joint (SIJ) in lower back pain (LBP)
A 2005 review by Cohen concluded that the SI joint is a real yet underappreciated pain generator in an estimated 15% to 25% of patients with axial LBP. Studies by Ha et al. show that the incidence of SI joint degeneration in post-lumbar fusion surgery is 75% at 5 years post-surgery, based on imaging. Studies by DePalma and Liliang et al. demonstrate that 40–61% of post-lumbar fusion patients were symptomatic for SI joint dysfunction based on diagnostic blocks.
Smoking
Recent studies have shown that cigarette smokers will routinely fail all spinal surgery, if the goal of that surgery is the decrease of pain and impairment. Many surgeons consider smoking to be an absolute contraindication to spinal surgery. Nicotine appears to interfere with bone metabolism through induced calcitonin resistance and decreased osteoblastic function. It may also restrict small blood vessel diameter, leading to increased scar formation.

There is an association between cigarette smoking, back pain and chronic pain syndromes of all types. In a report of 426 spinal surgery patients in Denmark, smoking was shown to have a negative effect on fusion and overall patient satisfaction, but no measurable influence on the functional outcome. Studies have validated the hypothesis that postoperative smoking cessation helps to reverse the impact of cigarette smoking on outcome after spinal fusion: if patients cease cigarette smoking in the immediate postoperative period, there is a positive impact on success.

Regular smoking in adolescence was associated with low back pain in young adults. Pack-years of smoking showed an exposure-response relationship among girls.

A recent study suggested that cigarette smoking adversely affects serum hydrocodone levels. Prescribing physicians should be aware that in some cigarette smokers, serum hydrocodone levels might not be detectable.

In a study from Denmark reviewing many reports in the literature, it was concluded that smoking should be considered a weak risk indicator and not a cause of low back pain. In a multitude of epidemiologic studies, an association between smoking and low back pain has been reported, but variations in approach and study results make this literature difficult to reconcile.
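For readers unfamiliar with the exposure measure mentioned above: a pack-year is the standard cumulative dose of smoking, defined as packs smoked per day multiplied by years of smoking. A minimal worked example (Python; the function name is this document's invention, but the formula is the standard definition):

```python
def pack_years(packs_per_day: float, years_smoked: float) -> float:
    """Cumulative smoking exposure: packs per day times years smoked."""
    return packs_per_day * years_smoked

# Half a pack a day for 20 years and a full pack a day for 10 years
# represent the same cumulative exposure: 10 pack-years.
print(pack_years(0.5, 20))  # 10.0
print(pack_years(1.0, 10))  # 10.0
```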
In a massive study of 3482 patients undergoing lumbar spine surgery from the National Spine Network, the comorbidities of (1) smoking, (2) compensation, (3) self-reported poor overall health and (4) pre-existing psychological factors were predictive of a high risk of failure. Follow-up was carried out at 3 months and one year after surgery. Patients with preoperative depressive disorders tended not to do well.

Smoking has been shown to increase the incidence of postoperative infection as well as decrease fusion rates. One study showed 90% of postoperative infections occurred in smokers, as well as myonecrosis (muscle destruction) around the wound.
Pathology
Before the advent of CT scanning, the pathology in failed back syndrome was difficult to understand. Computerized tomography in conjunction with metrizamide myelography in the late 1960s and 1970s allowed direct observation of the mechanisms involved in post operative failures. Six distinct pathologic conditions were identified:
Recurrent or persistent disc herniation
Spinal stenosis
Post operative infection
Epidural post-operative fibrosis
Adhesive arachnoiditis
Nerve injury
Recurrent or persistent disc herniation
Removal of a disc at one level can lead to disc herniation at a different level at a later time. Even the most complete surgical excision of the disc still leaves 30–40% of the disc, which cannot be safely removed. This retained disc can re-herniate sometime after surgery. Virtually every major structure in the abdomen and the posterior retroperitoneal space has been injured, at some point, by removing discs using posterior laminectomy/discectomy surgical procedures. The most prominent of these is a laceration of the left internal iliac vein, which lies in close proximity to the anterior portion of the disc. In some studies, the rate of recurrent pain, in the same or a different radicular pattern, can be as high as 50% after disc surgery. Many observers have noted that the most common cause of a failed back syndrome is recurrent disc herniation at the same level originally operated on. Prompt removal in a second surgery can be curative. The clinical picture of a recurrent disc herniation usually involves a significant pain-free interval. However, physical findings may be lacking, and a good history is necessary. The time period for the emergence of new symptoms can be short or long. Diagnostic signs such as the straight leg raise test may be negative even if real pathology is present. The presence of a positive myelogram may represent a new disc herniation, but can also be indicative of a postoperative scarring situation simply mimicking a new disc. Newer MRI imaging techniques have clarified this dilemma somewhat. Conversely, a recurrent disc can be difficult to detect in the presence of postoperative scarring. Myelography is inadequate to completely evaluate the patient for recurrent disc disease, and CT or MRI scanning is necessary. Measurement of tissue density can be helpful.

Even though the complications of laminectomy for disc herniation can be significant, a recent series of studies involving thousands of patients published under the auspices of Dartmouth Medical School concluded at four-year follow-up that those who underwent surgery for a lumbar disc herniation achieved greater improvement than nonoperatively treated patients in all primary and secondary outcomes except work status.
Spinal stenosis
Spinal stenosis can be a late complication after laminectomy for disc herniation or when surgery was performed for the primary pathologic condition of spinal stenosis.
In the Maine Study, among patients with lumbar spinal stenosis completing 8- to 10-year follow-up, low back pain relief, predominant symptom improvement, and satisfaction with the current state were similar in patients initially treated surgically or nonsurgically. However, leg pain relief and greater back-related functional status continued to favor those initially receiving surgical treatment.

A large study of spinal stenosis from Finland found the prognostic factors for ability to work after surgery were ability to work before surgery, age under 50 years, and no prior back surgery. The very long-term outcome (mean follow-up time of 12.4 years) was excellent-to-good in 68% of patients (59% women and 73% men). Furthermore, in the longitudinal follow-up, the result improved between 1985 and 1991. No special complications were manifested during this very long-term follow-up time. The patients with total or subtotal block in preoperative myelography achieved the best result. Furthermore, patients with block stenosis improved their result significantly in the longitudinal follow-up. The postoperative stenosis seen in computed tomography (CT) scans was observed in 65% of 90 patients, and it was severe in 23 patients (25%). However, this successful or unsuccessful surgical decompression did not correlate with patients' subjective disability, walking capacity or severity of pain. Previous back surgery had a strong worsening effect on surgical results. This effect was very clear in patients with total block in the preoperative myelography. The surgical result of a patient with previous back surgery was similar to that of a patient without previous back surgery when the time interval between the last two operations was more than 18 months.

Post-operative MRI findings of stenosis are probably of limited value compared to symptoms experienced by patients. Patients' perception of improvement had a much stronger correlation with long-term surgical outcome than structural findings seen on postoperative magnetic resonance imaging. Degenerative findings had a greater effect on patients' walking capacity than stenotic findings.

Postoperative radiologic stenosis was very common in patients operated on for lumbar spinal stenosis, but this did not correlate with clinical outcome. The clinician must be cautious when reconciling clinical symptoms and signs with postoperative computed tomography findings in patients operated on for lumbar spinal stenosis.

A study from Georgetown University reported on one hundred patients who had undergone decompressive surgery for lumbar stenosis between 1980 and 1985. Four patients with postfusion stenosis were included. A 5-year follow-up period was achieved in 88 patients. The mean age was 67 years, and 80% were over 60 years of age. There was a high incidence of coexisting medical diseases, but the principal disability was lumbar stenosis with neurological involvement. Initially there was a high incidence of success, but recurrence of neurological involvement and persistence of low-back pain led to an increasing number of failures. By 5 years this number had reached 27% of the available population pool, suggesting that the failure rate could reach 50% within the projected life expectancies of most patients. Of the 26 failures, 16 were secondary to renewed neurological involvement, which occurred at new levels of stenosis in eight and recurrence of stenosis at operative levels in eight. Reoperation was successful in 12 of these 16 patients, but two required a third operation.
The incidence of spondylolisthesis at 5 years was higher in the surgical failures (12 of 26 patients) than in the surgical successes (16 of 64). Spondylolisthetic stenosis tended to recur within a few years following decompression. Because of age and associated illnesses, fusion may be difficult to achieve in this group.
Post operative infection
A small minority of lumbar surgical patients will develop a postoperative infection. In most cases, this is a serious complication and does not bode well for eventual improvement or future employability. Reports from the surgical literature indicate an infection rate anywhere from 0% to almost 12%. The incidence of infection tends to increase as the complexity of the procedure and operating time increase. Usage of metal implants (instrumentation) tends to increase the risk of infection. Factors associated with an increased risk of infection include diabetes mellitus, obesity, malnutrition, smoking, previous infection, rheumatoid arthritis, and immunodeficiency.
Previous wound infection should be considered a contraindication to any further spinal surgery, since the likelihood of improving such patients with more surgery is small.
Antimicrobial prophylaxis (giving antibiotics during or after surgery before an infection begins) reduces the rate of surgical site infection in lumbar spine surgery, but a great deal of variation exists regarding its use. In a Japanese study utilizing the Centers for Disease Control recommendations for antibiotic prophylaxis, an overall infection rate of 0.7% was noted; the single-dose antibiotic group had an infection rate of 0.4%, and the multiple-dose group a rate of 0.8%. The authors had previously used prophylactic antibiotics for 5 to 7 postoperative days. Based on the Centers for Disease Control and Prevention guideline, their antibiotic prophylaxis was changed to the day of surgery only. It was concluded that there was no statistical difference in the rate of infection between the two antibiotic protocols. Based on the CDC guideline, a single dose of prophylactic antibiotic was shown to be efficacious for the prevention of infection in lumbar spine surgeries.
Epidural post-operative fibrosis
Epidural scarring following a laminectomy for disc excision is a common feature when re-operating for recurrent sciatica or radiculopathy. When the scarring is associated with a disc herniation and/or recurrent spinal stenosis, it is relatively common, occurring in more than 60% of cases. For a time, it was theorized that placing a fat graft over the dura could prevent post-operative scarring. However, initial enthusiasm has waned in recent years. In an extensive laminectomy involving 2 or more vertebrae, post-operative scarring is the norm. It is most often seen around the L5 and S1 nerve roots.
Adhesive arachnoiditis
Fibrous scarring can also be a complication within the subarachnoid space. It is notoriously difficult to detect and evaluate. Prior to the development of magnetic resonance imaging, the only way to ascertain the presence of arachnoiditis was by opening the dura. In the era of CT scanning with Pantopaque and, later, Metrizamide myelography, the presence of arachnoiditis could only be inferred from radiographic findings. Often, myelography prior to the introduction of Metrizamide was itself the cause of arachnoiditis. It can also be caused by the long-term pressure brought about by either a severe disc herniation or spinal stenosis. The presence of both epidural scarring and arachnoiditis in the same patient is probably quite common.
Arachnoiditis is a broad term denoting inflammation of the meninges and subarachnoid space. A variety of causes exist, including infectious, inflammatory, and neoplastic processes.
Infectious causes include bacterial, viral, fungal, and parasitic agents. Noninfectious inflammatory processes include surgery, intrathecal hemorrhage, and the administration of intrathecal (inside the dural canal) agents such as myelographic contrast media, anesthetics (e.g. chloroprocaine), and steroids (e.g. Depo-Medrol, Kenalog). Lately, iatrogenic arachnoiditis has been attributed to misplaced epidural steroid injection therapy when accidentally administered intrathecally. The preservatives and suspension agents found in all steroid injectates, which aren't indicated for epidural administration by the U.S. Food & Drug Administration due to reports of severe adverse events including arachnoiditis, paralysis and death, have now been directly linked to the onset of the disease following the initial stage of chemical meningitis.
Neoplasia includes the hematogenous spread of systemic tumors, such as breast and lung carcinoma, melanoma, and non-Hodgkin lymphoma. Neoplasia also includes direct seeding of the cerebrospinal fluid (CSF) from primary central nervous system (CNS) tumors such as glioblastoma multiforme, medulloblastoma, ependymoma, and choroid plexus carcinoma. Strictly speaking, the most common cause of arachnoiditis in failed back syndrome is not infectious or from cancer. It is due to non-specific scarring secondary to the surgery or the underlying pathology.
Nerve injury
Laceration of a nerve root, or damage from cautery or traction, can lead to chronic pain; however, this can be difficult to determine. Chronic compression of the nerve root by a persistent agent such as disc, bone (osteophyte) or scarring can also permanently damage the nerve root. Epidural scarring caused by the initial pathology or occurring after the surgery can also contribute to nerve damage. In one study of failed back patients, the presence of pathology was noted to be at the same site as the level of surgery performed in 57% of cases. The remaining cases developed pathology at a different level, or on the opposite side, but at the same level as the surgery was performed. In theory, all failed back patients have some sort of nerve injury or damage which leads to a persistence of symptoms after a reasonable healing time.
Diagnosis
Avoiding post-laminectomy/laminotomy syndrome
Smaller procedures that do not remove bone (such as Endoscopic Transforaminal Lumbar Discectomy and Reconfiguration) do not cause post laminectomy/laminotomy syndrome.
Management
Failed back syndrome (FBS) is a well-recognized complication of surgery of the lumbar spine. It can result in chronic pain and disability, often with disastrous emotional and financial consequences to the patient. Many patients have traditionally been classified as "spinal cripples" and are consigned to a life of long-term narcotic treatment with little chance of recovery. Despite extensive work in recent years, FBS remains a challenging and costly disorder.
Opioids
A study of chronic pain patients from the University of Wisconsin found that methadone is most widely known for its use in the treatment of opioid dependence, but that it also provides effective analgesia. Patients who experience inadequate pain relief or intolerable side effects with other opioids, or who suffer from neuropathic pain, may benefit from a transition to methadone as their analgesic agent. Adverse effects, particularly respiratory depression and death, make a fundamental knowledge of methadone's pharmacological properties essential to the provider considering methadone as analgesic therapy for a patient with chronic pain.
Patient selection
Patients who have sciatic pain (pain in the back, radiating down the buttock to the leg) and clear clinical findings of an identifiable radicular nerve loss caused by a herniated disc will have a better postoperative course than those who simply have low back pain. If a specific disc herniation causing pressure on a nerve root cannot be identified, the results of surgery are likely to be disappointing. Patients involved in workers' compensation, tort litigation or other compensation systems tend to fare more poorly after surgery. Surgery for spinal stenosis usually has a good outcome if the surgery is done in an extensive manner and within the first year or so of the appearance of symptoms.

Oaklander and North define failed back syndrome as chronic pain in a patient after one or more surgical procedures on the spine. They delineated these characteristics of the relation between the patient and the surgeon:
The patient makes increasing demands on the surgeon for pain relief. The surgeon may feel a strong responsibility to provide a remedy when the surgery has not achieved the desired goals.
The patient grows increasingly angry at the failure and may become litigious.
There is an escalation of narcotic pain medication which can be habituating or addictive.
In the face of expensive conservative treatments which are likely to fail, the surgeon is persuaded to attempt further surgery, even though this is likely to fail as well.
The probability of returning to gainful employment decreases with increasing length of disability.
The financial incentives to remain disabled may be perceived as outweighing the incentive to recover.

In the absence of a financial source for disability or workers' compensation, other psychological features may limit the ability of the patient to recover from surgery. Some patients are simply unfortunate, and fall into the category of "chronic pain" despite their desire to recover and the best efforts of the physicians involved in their care. Even less invasive forms of surgery are not uniformly successful; approximately 30,000–40,000 laminectomy patients obtain either no relief of symptomatology or a recurrence of symptoms. Another less invasive form of spinal surgery, percutaneous disc surgery, has reported revision rates as high as 65%. It is no surprise, therefore, that FBSS is a significant medical concern which merits further research and attention by the medical and surgical communities.
Total disc replacement
Lumbar total disc replacement was originally designed to be an alternative to lumbar arthrodesis (fusion). The procedure was met with great excitement and heightened expectations both in the United States and Europe. In late 2004, the first lumbar total disc replacement received approval from the U.S. Food and Drug Administration (FDA). More experience existed in Europe. Since then, the initial excitement has given way to skepticism and concern. Various failure rates and strategies for revision of total disc replacement have been reported.

The role of artificial or total disc replacement in the treatment of spinal disorders remains ill-defined and unclear. Evaluation of any new technique is difficult or impossible because physician experience may be minimal or lacking. Patient expectations may be distorted. It has been difficult to establish clear-cut indications for artificial disc replacement. It may not be a replacement procedure or alternative to fusion, since recent studies have shown that 100% of fusion patients had one or more contraindications to disc replacement. The role of disc replacement must come from new indications not defined in today's literature or a relaxation of current contraindications.

A study by Regan found the result of replacement was the same at L4-5 and L5-S1 with the CHARITE disc. However, the ProDisc II had more favorable results at L4-5 compared with L5-S1. A younger age was predictive of a better outcome in several studies. In others it has been found to be a negative predictor or of no predictive value. Older patients may have more complications.

Prior spinal surgery has mixed effects on disc replacement. It has been reported to be negative in several studies and to have no effect in others. Many studies are simply inconclusive. Existing evidence does not allow drawing definite conclusions about the status of disc replacement at present.
Electrical stimulation
Many failed back patients are significantly impaired by chronic pain in the back and legs. Many of these will be treated with some form of electrical stimulation. This can be either a transcutaneous electrical nerve stimulation device placed on the skin over the back or a nerve stimulator implanted into the back with electrical probes which directly touch the spinal cord. Also, some chronic pain patients utilize fentanyl or narcotic patches. These patients are generally severely impaired, and it is unrealistic to conclude that application of neurostimulation will reduce that impairment. For example, it is doubtful that neurostimulation will improve the patient enough to return to competitive employment. Neurostimulation is palliative. TENS units work by blocking neurotransmission, as described by the gate control theory of pain of Melzack and Wall. Success rates for implanted neurostimulation have been reported to be 25% to 55%. Success is defined as a relative decrease in pain.
Chiropractic
Limited case series have shown improvement for patients with failed back surgery who were managed with chiropractic care.
Prognosis
Under rules promulgated by Titles II and XVI of the United States Social Security Act, chronic radiculopathy, arachnoiditis and spinal stenosis are recognized as disabling conditions under Listing 1.04 A (radiculopathy), 1.04 B (arachnoiditis) and 1.04 C (spinal stenosis).
Return to work
In a groundbreaking Canadian study, Waddell et al. reported on the value of repeat surgery and the return to work in workers' compensation cases. They concluded that workers who undergo spinal surgery take longer to return to their jobs. Once two spinal surgeries are performed, few if any ever return to gainful employment of any kind. After two spinal surgeries, most people in the workers' compensation system will not be made better by more surgery. Most will be worse after a third surgery.
Episodes of back pain associated with on-the-job injuries in the workers' compensation setting are usually of short duration. About 10% of such episodes will not be simple, and will degenerate into chronic and disabling back pain conditions, even if surgery is not performed.

It has been hypothesized that job dissatisfaction and individual perception of physical demands are associated with an increased time of recovery or an increased risk of no recovery at all. Individual psychological and social work factors, as well as worker-employer relations, are also likely to be associated with time and rates of recovery.

A Finnish study of return to work in patients with spinal stenosis treated by surgery found that: (1) none of the patients who had retired before the operation returned to work afterward; (2) the variables that predicted postoperative ability to work for women were being fit to work at the time of operation, age < 50 years at the time of operation, and duration of lumbar spinal stenosis symptoms < 2 years; (3) for men, these variables were being fit to work at the time of operation, age < 50 years at the time of operation, no prior surgery, and the extent of the surgical procedure equal to or less than one laminectomy. Women's and men's working capacity does not differ after lumbar spinal stenosis operation. If the aim is to maximize working capacity, then, when a lumbar spinal stenosis operation is indicated, it should be performed without delay. In lumbar spinal stenosis patients who are > 50 years old and on sick leave, it is unrealistic to expect that they will return to work. Therefore, after such an extensive surgical procedure, re-education of patients for lighter jobs could improve the chances of these patients returning to work.

In a related Finnish study, a total of 439 patients operated on for lumbar spinal stenosis during the period 1974–1987 was re-examined and evaluated for working and functional capacity approximately 4 years after the decompressive surgery. The ability to work before or after the operation and a history of no prior back surgery were variables predictive of a good outcome. Before the operation 86 patients were working, 223 patients were on sick leave, and 130 patients were retired. After the operation 52 of the employed patients and 70 of the unemployed patients returned to work. None of the retired patients returned to work. Ability to work preoperatively, age under 50 years at the time of operation and the absence of prior back surgery predicted a postoperative ability to work.

A report from Belgium noted that patients reportedly return to work an average of 12 to 16 weeks after surgery for lumbar disc herniation. However, there are studies that lend credence to the value of earlier encouragement of return to work and performance of normal activities after a limited discectomy. At follow-up assessment, it was found that no patient had changed employment because of back or leg pain.
The sooner the recommendation is made to return to work and perform normal activities, the more likely the patient is to comply. Patients with ongoing disabling back conditions have a low priority for return to work. The probability of return to work decreases as time off work increases. This is especially true in Belgium, where 20% of individuals did not resume work activities after surgery for a disc herniation of the lumbar spine.
In Belgium, the medical advisers of sickness funds play an important legal role in the assessment of working capacity and medical rehabilitation measures for employees whose fitness for work is jeopardized or diminished for health reasons. The measures are laid down in the sickness and invalidity legislation, and they are in accordance with the principle of preventing long-term disability. It is apparent from the authors' experience that these measures are not applied consistently in medical practice. Most of the medical advisers focus purely on evaluation of corporal damage, leaving little or no time for rehabilitation efforts. In many other countries, the evaluation of work capacity is done by social security doctors with a comparable task.

In a comprehensive set of studies carried out by the University of Washington School of Medicine, it was determined that the outcome of lumbar fusion performed on injured workers was worse than reported in most published case series. The researchers found 68% of lumbar fusion patients still unable to return to work two years after surgery, in stark contrast to reports of 68% post-operative satisfaction in many series. In a follow-up study it was found that the use of intervertebral fusion devices rose rapidly after their introduction in 1996. This increase in metal usage was associated with a greater risk of complications without improving disability or re-operation rates.
Research
The identification of tumor necrosis factor-alpha (TNF) as a central cause of inflammatory spinal pain suggests the possibility of an entirely new approach to selected patients with FBSS. Specific and potent inhibitors of TNF became available in the U.S. in 1998, and were demonstrated to be potentially effective for treating sciatica in experimental models beginning in 2001. Targeted anatomic administration of one of these anti-TNF agents, etanercept, a patented treatment method, has been suggested in published pilot studies to be effective for treating selected patients with chronic disc-related pain and FBSS. The scientific basis for pain relief in these patients is supported by many current review articles. In the future, new imaging methods may allow non-invasive identification of sites of neuronal inflammation, thereby enabling more accurate localization of the "pain generators" responsible for symptom production. These treatments are still experimental.
If chronic pain in FBSS has a chemical component producing inflammatory pain, then prior to additional surgery it may make sense to use an anti-inflammatory approach. Often this is first attempted with non-steroidal anti-inflammatory drugs (NSAIDs), but their long-term use in patients with persistent back pain is complicated by possible cardiovascular and gastrointestinal toxicity, and NSAIDs have limited value in intervening in TNF-mediated processes. An alternative often employed is the injection of cortisone into the spine adjacent to the suspected pain generator, a technique known as "epidural steroid injection". Although this technique has been used for FBSS for more than a decade, the efficacy of epidural steroid injections is now generally thought to be limited to short-term pain relief in selected patients only. In addition, epidural steroid injections, in certain settings, may result in serious complications. New methods that directly target TNF are now emerging, and they represent a highly promising approach for patients with chronic severe spinal pain, such as those with FBSS. Ancillary approaches, such as rehabilitation, physical therapy, anti-depressants, and, in particular, graduated exercise programs, may all be useful adjuncts to anti-inflammatory approaches. In addition, more invasive modalities, such as spinal cord stimulation, may offer relief for certain patients with FBSS, but these modalities, although often referred to as "minimally invasive", require additional surgery and have complications of their own.
Worldwide perspective
A report from Spain noted that the investigation and development of new techniques for instrumented surgery of the spine is not free from conflicts of interest. The influence of financial forces in the development of new technologies and their immediate application to spine surgery shows the relationship between published results and industry support. Authors who have developed and defended fusion techniques have also published new articles praising new spinal technologies. The author calls spinal surgery the "American Stock and Exchange" and "the bubble of spine surgery". The scientific literature doesn't show clear evidence in the cost-benefit studies of most instrumented surgical interventions of the spine compared with conservative treatments. It has not yet been demonstrated that fusion surgery and disc replacement are better options than conservative treatment. It's necessary to point out that at present "there are relationships between the industry and back pain, and there is also an industry of the back pain". Nonetheless, the "market of the spine surgery" is growing because patients are demanding solutions for their back problems. The tide of scientific evidence seems to go against spinal fusions in degenerative disc disease, discogenic pain and nonspecific back pain. After decades of advances in this field, the results of spinal fusions remain mediocre. New epidemiological studies show that "spinal fusion must be accepted as a non-proved or experimental method for the treatment of back pain". The surgical literature on spinal fusion published in the last 20 years establishes that instrumentation seems to slightly increase the fusion rate, but that instrumentation doesn't improve the clinical results in general. We are still in need of randomized studies to compare surgical results with the natural history of the disease, the placebo effect, or conservative treatment. The European Guidelines for lumbar chronic pain management show "strong evidence" indicating that complex and demanding spine surgery in which different instrumentation is used is not more effective than a simpler, safer and cheaper posterolateral fusion without instrumentation. Recently, the literature published in this field has been sending a message to use "minimally invasive techniques" and to abandon transpedicular fusions; surgery in general, and the use of metal fixation, should be discarded in most cases.

In Sweden, the national registry of lumbar spine surgery reported in the year 2000 that 15% of patients with spinal stenosis surgery underwent a concomitant fusion. Despite the traditionally conservative approach to spinal surgery in Sweden, there have been calls from that country for a more aggressive approach to lumbar procedures in recent years.
Cherkin et al. evaluated worldwide surgical attitudes. There were twice as many surgeons per capita in the United States as in the United Kingdom. Numbers were similar to Sweden </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Necatoriasis and explain in detail? [/INST] Necatoriasis is the condition of infection by Necator hookworms, such as Necator americanus. This hookworm infection is a type of helminthiasis (parasitic worm infection), which is a type of neglected tropical disease.
Signs and symptoms
When adult worms attach to the villi of the small intestine, they suck on the host's blood, which may cause abdominal pain, diarrhea, cramps, and weight loss that can lead to anorexia. Heavy infections can lead to the development of iron deficiency and hypochromic microcytic anemia. This form of anemia in children can give rise to physical and mental retardation. Infection caused by cutaneous larva migrans, a skin disease in humans, is characterized by skin ruptures and severe itching.
Cause
Necatoriasis is caused by N. americanus. The N. americanus life cycle can be divided into two stages: larval and adult. The third-stage larvae are guided to human skin by following thermal gradients. Typically, the larvae enter through the hands and feet following contact with contaminated soil. A papular, pruritic (itchy) rash will develop around the site of entry into the human host; this is also known as "ground itch". Generally, migration through the lungs is asymptomatic, but a mild cough and pharyngeal irritation may occur during larval migration in the airways. Once larvae break through the alveoli and are swallowed, they enter the gastrointestinal tract and attach to the intestinal mucosa, where they mature into adult worms. The hookworms attach to the mucosal lining using their cutting plates, which allows them to penetrate blood vessels and feed on the host's blood supply. Each worm consumes 30 μl of blood per day. The major issue results from this intestinal blood loss, which can lead to iron-deficiency anemia in moderate to heavy infections. Other common symptoms include epigastric pain and tenderness, nausea, exertional dyspnea, pain in lower extremities and in joints, sternal pain, headache, fatigue, and impotence. Death is rare in humans.
Diagnosis
The standard method for diagnosing necatoriasis is identification of N. americanus eggs in a fecal sample using a microscope. Eggs can be difficult to visualize in a lightly infected sample, so a concentration method such as flotation or sedimentation is generally used. However, the eggs of A. duodenale and N. americanus cannot be distinguished; thus, the larvae must be examined to identify these hookworms. Larvae cannot be found in stool specimens unless the specimen was left at ambient temperature for a day or more.

The most common technique used to diagnose a hookworm infection is to take a stool sample, fix it in 10% formalin, concentrate it using the formalin-ethyl acetate sedimentation technique, and then create a wet mount of the sediment for viewing under a microscope.
Prevention
Education, improved sanitation, and controlled disposal of human feces are critical for prevention. In addition, wearing shoes in endemic areas helps reduce the prevalence of infection.
Treatment
An infection of N. americanus parasites can be treated with benzimidazoles: albendazole or mebendazole. A blood transfusion may be necessary in severe cases of anemia. Light infections are usually left untreated in areas where reinfection is common. Iron supplements and a diet high in protein will speed the recovery process. In a case study involving 56-60 men with Trichuris trichiura and/or N. americanus infections, both albendazole and mebendazole were 90% effective in curing T. trichiura. However, albendazole had a 95% cure rate for N. americanus, while mebendazole had only a 21% cure rate. This suggests albendazole is most effective for treating both T. trichiura and N. americanus.

Cryotherapy by application of liquid nitrogen to the skin has been used to kill cutaneous larva migrans, but the procedure has a low cure rate and a high incidence of pain and severe skin damage, so it is now passed over in favor of suitable pharmaceuticals </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Tick-borne encephalitis and explain in detail? [/INST] Tick-borne encephalitis (TBE) is a viral infectious disease involving the central nervous system. The disease most often manifests as meningitis, encephalitis or meningoencephalitis; myelitis and spinal paralysis also occur. In about one third of cases, sequelae, predominantly cognitive dysfunction, persist for a year or more.

The number of reported cases has been increasing in most countries. TBE poses a concerning health challenge to Europe, as the number of reported human cases of TBE in all endemic regions of Europe has increased by almost 400% within the last three decades.

The tick-borne encephalitis virus is known to infect a range of hosts including ruminants, birds, rodents, carnivores, horses, and humans. The disease can also be spread from animals to humans, with ruminants and dogs providing the principal source of infection for humans.
Signs and symptoms
The disease is most often biphasic. After an incubation period of approximately one week (range: 4–28 days) from exposure (tick bite), non-specific symptoms occur. These symptoms are fever, malaise, headache, nausea, vomiting and myalgias that persist for about 5 days. Then, after approximately one week without symptoms, some of those infected develop neurological symptoms, i.e. meningitis, encephalitis or meningoencephalitis. Myelitis also occurs with or without encephalitis.

Sequelae persist for a year or more in approximately one third of people who develop neurological disease. The most common long-term symptoms are headache, concentration difficulties, memory impairment and other symptoms of cognitive dysfunction.

Mortality depends on the subtype of the virus. For the European subtype, mortality rates are 0.5% to 2% for people who develop neurological disease.

In dogs, the disease also manifests as a neurological disorder, with signs varying from tremors to seizures and death.

In ruminants, neurological disease is also present, and animals may refuse to eat, appear lethargic, and also develop respiratory signs.
Cause
TBE is caused by tick-borne encephalitis virus, a member of the genus Flavivirus in the family Flaviviridae. It was first isolated in 1937. Three virus sub-types also exist: European or Western tick-borne encephalitis virus (transmitted by Ixodes ricinus), Siberian tick-borne encephalitis virus (transmitted by I. persulcatus), and Far-Eastern tick-borne encephalitis virus, formerly known as Russian spring summer encephalitis virus (transmitted by I. persulcatus).

The former Soviet Union conducted research on tick-borne diseases, including the TBE viruses.
Transmission
It is transmitted by the bite of several species of infected woodland ticks, including Ixodes scapularis, I. ricinus and I. persulcatus, or (rarely) through the non-pasteurized milk of infected cows.

Infection acquired through goat milk consumed as raw milk or raw cheese (Frischkäse) was documented in 2016 and 2017 in the German state of Baden-Württemberg. None of those infected developed neurological disease.
Diagnosis
Detection of specific IgM and IgG antibodies in patients' sera, combined with typical clinical signs, is the principal method for diagnosis. In more complicated situations, e.g. after vaccination, testing for the presence of antibodies in cerebrospinal fluid may be necessary. It has been stated that lumbar puncture should always be performed when diagnosing TBE and that pleocytosis in cerebrospinal fluid should be added to the diagnostic criteria.

The PCR (polymerase chain reaction) method is rarely used, since TBE virus RNA is most often not present in patient sera or cerebrospinal fluid at the time of neurological symptoms.
Prevention
Prevention includes non-specific measures (tick-bite prevention, tick checks) and specific prophylaxis in the form of vaccination. Tick-borne encephalitis vaccines are very effective and available in many disease-endemic areas and in travel clinics. Trade names are Encepur N and FSME-Immun CC.
Treatment
There is no specific antiviral treatment for TBE. Symptomatic brain damage requires hospitalization and supportive care based on syndrome severity. Anti-inflammatory drugs, such as corticosteroids, may be considered under specific circumstances for symptomatic relief. Tracheal intubation and respiratory support may be necessary.
Epidemiology
As of 2011, the disease was most common in Central and Eastern Europe, and Northern Asia. About ten to twelve thousand cases are documented a year, but the rates vary widely from one region to another. Most of the variation has been the result of variation in host population, particularly that of deer. In Austria, an extensive vaccination program since the 1970s reduced the incidence in 2013 by roughly 85%.

In Germany, during the 2010s, there have been a minimum of 95 (2012) and a maximum of 584 cases (2018) of TBE (or FSME as it is known in German) </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Dyscalculia and explain in detail? [/INST] Dyscalculia is a disability resulting in difficulty learning or comprehending arithmetic, such as difficulty in understanding numbers, learning how to manipulate numbers, performing mathematical calculations, and learning facts in mathematics. It is sometimes colloquially referred to as "math dyslexia", though this analogy can be misleading as they are distinct syndromes.

Dyscalculia is associated with dysfunction in the region around the intraparietal sulcus and potentially also the frontal lobe. Dyscalculia does not reflect a general deficit in cognitive abilities or difficulties with time, measurement, and spatial reasoning. Estimates of the prevalence of dyscalculia range between 3 and 6% of the population. In 2015 it was established that 11% of children with dyscalculia also have ADHD. Dyscalculia has also been associated with Turner syndrome and people who have spina bifida.

Mathematical disabilities can occur as the result of some types of brain injury, in which case the term acalculia is used instead of dyscalculia, which is of innate, genetic or developmental origin.
Signs and symptoms
The earliest appearance of dyscalculia is typically a deficit in subitizing, the ability to know, from a brief glance and without counting, how many objects there are in a small group. Children as young as five can subitize six objects, especially when looking at a die. However, children with dyscalculia can subitize fewer objects and, even when correct, take longer to identify the number than their age-matched peers. Dyscalculia often looks different at different ages. It tends to become more apparent as children get older; however, symptoms can appear as early as preschool. Common symptoms of dyscalculia are difficulty with mental math, trouble analyzing time and reading an analog clock, struggles with motor sequencing that involves numbers, and often counting on fingers when adding numbers.
Common symptoms
Dyscalculia is characterized by difficulties with common arithmetic tasks. These difficulties may include:
Difficulty reading analog clocks
Difficulty stating which of two numbers is larger
Sequencing issues
Inability to comprehend financial planning or budgeting, sometimes even at a basic level; for example, estimating the cost of the items in a shopping basket or balancing a checkbook
Visualizing numbers as meaningless or nonsensical symbols, rather than perceiving them as characters indicating a numerical value (hence the misnomer "math dyslexia")
Difficulty with multiplication, subtraction, addition, and division tables, mental arithmetic, etc.
Inconsistent results in addition, subtraction, multiplication and division
When writing, reading and recalling numbers, mistakes may occur in areas such as: number additions, substitutions, transpositions, omissions, and reversals
Poor memory (retention and retrieval) of math concepts; may be able to perform math operations one day, but draw a blank the next; may be able to do book work but then fails tests
Ability to grasp math on a conceptual level, but an inability to put those concepts into practice
Difficulty recalling the names of numbers, or thinking that certain different numbers "feel" the same (e.g. frequently interchanging the same two numbers for each other when reading or recalling them)
Problems with differentiating between left and right
A "warped" sense of spatial awareness, or an understanding of shapes, distance, or volume that seems more like guesswork than actual comprehension
Difficulty with time, directions, recalling schedules, sequences of events, keeping track of time, frequently late or early
Difficulty reading maps
Difficulty working backwards in time (e.g. What time to leave if needing to be somewhere at X time)
Difficulty reading musical notation
Difficulty with choreographed dance steps
Having particular difficulty mentally estimating the measurement of an object or distance (e.g., whether something is 3 or 6 meters (10 or 20 feet) away)
Inability to grasp and remember mathematical concepts, rules, formulae, and sequences
Inability to concentrate on mentally intensive tasks
Mistaken recollection of names, poor name/face retrieval; may substitute names beginning with the same letter.
Some people with dyscalculia also have aphantasia, which affects how they can "see" in their mind's eye. However, people without dyscalculia also report having this, but it is more prevalent in those with dyslexia and other learning disabilities.
Persistence in children
Although many researchers believe dyscalculia to be a persistent disorder, evidence on the persistence of dyscalculia remains mixed. For instance, in a study done by Mazzocco and Myers (2003), researchers evaluated children on a battery of measures and selected their most consistent measure as their best diagnostic criterion: a stringent 10th-percentile cut-off on the TEMA-2. Even with their best criterion, they found that dyscalculia diagnoses did not persist longitudinally; only 65% of students who were ever diagnosed over the course of four years were diagnosed for at least two years. The percentage of children who were diagnosed in two consecutive years was further reduced. It is unclear whether this was the result of misdiagnosed children improving in mathematics and spatial awareness as they progressed as normal, or that the subjects who showed improvement were accurately diagnosed but exhibited signs of a non-persistent learning disability.
Persistence in adults
There are very few studies of adults with dyscalculia who have had a history of it growing up, but such studies have shown that it can persist into adulthood, affecting major parts of an adult's life. Most adults with dyscalculia have a hard time processing math at a 4th-grade level. At the 1st-4th grade level, many adults will know what to do for a math problem but will often get it wrong because of "careless errors", although they are not careless when it comes to the problem. These adults cannot process their errors on the math problems or may not even recognize that they have made these errors. Visual-spatial input, auditory input, and touch input will be affected due to these processing errors. Dyscalculics may have a difficult time adding numbers in a column format because their mind can mix up the numbers, and it is possible that they may get the same (wrong) answer twice due to their mind processing the problem incorrectly. Dyscalculics can have problems determining differences in different coins and their size, or giving the correct amount of change, and if numbers are grouped together, it is possible that they cannot determine which has less or more. If a dyscalculic is asked to choose the greater of two numbers, with the lesser number in a larger font than the greater number, they may take the question literally and pick the number with the bigger font. Adults with dyscalculia have a tough time with directions while driving and with controlling their finances, which causes difficulties on a day-to-day basis.
College students or other adult learners
College students particularly may have a difficult time due to the fast pace and change in difficulty of the work they are given. As a result, students may develop considerable anxiety and frustration. After dealing with their anxiety for a long time, students can become averse to math and try to avoid it as much as possible, which may result in lower grades in math courses. (That said, students with dyscalculia can also do exceptionally well in writing, reading, and speaking.)
Causes
Both domain-general and domain-specific causes have been put forth. With respect to pure developmental dyscalculia, domain-general causes are unlikely, as they should not impair one's ability in the numerical domain without also affecting other domains such as reading.

Two competing domain-specific hypotheses about the causes of developmental dyscalculia have been proposed: the magnitude representation (or number module deficit) hypothesis and the access deficit hypothesis.
Magnitude representation deficit
Dehaenes "number sense" theory suggests that approximate numerosities are automatically ordered in an ascending manner on a mental number line. The mechanism to represent and process non-symbolic magnitude (e.g., number of dots) is often known as the "approximate number system" (ANS), and a core deficit in the precision of the ANS, known as the "magnitude representation hypothesis" or "number module deficit hypothesis", has been proposed as an underlying cause of developmental dyscalculia.In particular, the structural features of the ANS is theoretically supported by a phenomenon called the "numerical distance effect", which has been robustly observed in numerical comparison tasks. Typically developing individuals are less accurate and slower in comparing pairs of numbers closer together (e.g., 7 and 8) than further apart (e.g., 2 and 9). A related "numerical ratio effect" (in which the ratio between two numbers varies but the distance is kept constant, e.g., 2 vs. 5 and 4 vs. 7) based on the Webers law has also been used to further support the structure of the ANS. The numerical ratio effect is observed when individuals are less accurate and slower in comparing pairs of numbers that have a larger ratio (e.g., 8 and 9, ratio = 8/9) than a smaller ratio (2 and 3; ratio = 2/3). A larger numerical distance or ratio effect with comparison of sets of objects (i.e., non-symbolic) is thought to reflect a less precise ANS, and the ANS acuity has been found to correlate with math achievement in typically developing children and also in adults.More importantly, several behavioral studies have found that children with developmental dyscalculia show an attenuated distance/ratio effect than typically developing children. Moreover, neuroimaging studies have also provided additional insights even when behavioral difference in distance/ratio effect might not be clearly evident. For example, Gavin R. Price and colleagues found that children with developmental dyscalculia showed no differential distance effect on reaction time relative to typically developing children, but they did show a greater effect of distance on response accuracy. They also found that the right intraparietal sulcus in children with developmental dyscalculia was not modulated to the same extent in response to non-symbolic numerical processing as in typically developing children. With the robust implication of the intraparietal sulcus in magnitude representation, it is possible that children with developmental dyscalculia have a weak magnitude representation in the parietal region. Yet, it does not rule out an impaired ability to access and manipulate numerical quantities from their symbolic representations (e.g., Arabic digits).
Moreover, findings from a cross-sectional study suggest that children with developmental dyscalculia might have numerical magnitude representation that is delayed in development by as much as five years. However, the lack of longitudinal studies leaves open the question of whether the deficient numerical magnitude representation reflects delayed development or a lasting impairment.
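The distance and ratio effects described above follow simple arithmetic, which a short sketch can make concrete. The following Python snippet is purely illustrative and not drawn from any cited study: it uses the smaller-to-larger ratio of a number pair as a rough proxy for comparison difficulty, so pairs whose ratio is closer to 1 are predicted to be harder to discriminate, consistent with Weber's law. The 0.7 cut-off is an arbitrary display threshold, not an empirical constant.

```python
# Illustrative sketch (hypothetical): under Weber's-law-like behavior,
# number pairs whose smaller/larger ratio is closer to 1 are harder
# to discriminate in comparison tasks.

def comparison_difficulty(a: int, b: int) -> float:
    """Return a 0-1 difficulty proxy: the min/max ratio of the pair."""
    small, large = sorted((a, b))
    return small / large

# Pairs from the text: (7, 8) and (8, 9) should come out harder
# than (2, 9) and (2, 3), respectively.
for a, b in [(2, 9), (7, 8), (2, 3), (8, 9)]:
    d = comparison_difficulty(a, b)
    label = "harder" if d > 0.7 else "easier"  # arbitrary threshold
    print(f"{a} vs {b}: ratio = {d:.2f} -> {label}")
```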
Access deficit hypothesis
Rousselle & Noël propose that dyscalculia is caused by the inability to map preexisting representations of numerical magnitude onto symbolic Arabic digits. Evidence for this hypothesis is based on research studies that have found that individuals with dyscalculia are proficient on tasks that measure knowledge of non-symbolic numerical magnitude (i.e., non-symbolic comparison tasks) but show an impaired ability to process symbolic representations of number (i.e., symbolic comparison tasks). Neuroimaging studies also report increased activation in the right intraparietal sulcus during tasks that measure symbolic but not non-symbolic processing of numerical magnitude. However, support for the access deficit hypothesis is not consistent across research studies.
Diagnosis
At its most basic level, dyscalculia is a learning disability affecting the normal development of arithmetic skills.

A consensus has not yet been reached on appropriate diagnostic criteria for dyscalculia. Mathematics is a specific domain that is complex (i.e. includes many different processes, such as arithmetic, algebra, word problems, geometry, etc.) and cumulative (i.e. the processes build on each other such that mastery of an advanced skill requires mastery of many basic skills). Thus dyscalculia can be diagnosed using different criteria, and frequently is; this variety in diagnostic criteria leads to variability in identified samples, and thus variability in research findings regarding dyscalculia.
Other than using achievement tests as diagnostic criteria, researchers often rely on domain-specific tests (i.e. tests of working memory, executive function, inhibition, intelligence, etc.) and teacher evaluations to create a more comprehensive diagnosis. Alternatively, fMRI research has shown that the brains of neurotypical children can be reliably distinguished from the brains of dyscalculic children based on activation in the prefrontal cortex. However, due to the cost and time limitations associated with brain and neural research, these methods will likely not be incorporated into diagnostic criteria despite their effectiveness.
Types
Research on subtypes of dyscalculia has begun without consensus; preliminary research has focused on comorbid learning disorders as subtyping candidates. The most common comorbidity in individuals with dyscalculia is dyslexia. Most studies done with comorbid samples versus dyscalculic-only samples have shown different mechanisms at work and additive effects of comorbidity, indicating that such subtyping may not be helpful in diagnosing dyscalculia. But there is variability in results at present.

Due to high comorbidity with other disabilities such as dyslexia and ADHD, some researchers have suggested the possibility of subtypes of mathematical disabilities with different underlying profiles and causes. Whether a particular subtype is specifically termed "dyscalculia" as opposed to a more general mathematical learning disability is somewhat under debate in the scientific literature.
Semantic memory: This subtype often coexists with reading disabilities such as dyslexia and is characterized by poor representation and retrieval from long-term memory. These processes share a common neural pathway in the left angular gyrus, which has been shown to be selective in arithmetic fact retrieval strategies and symbolic magnitude judgments. This region also shows low functional connectivity with language-related areas during phonological processing in adults with dyslexia. Thus, disruption to the left angular gyrus can cause both reading impairments and difficulties in calculation. This has been observed in individuals with Gerstmann syndrome, of which dyscalculia is one of a constellation of symptoms.
Procedural concepts: Research by Geary has shown that in addition to increased problems with fact retrieval, children with math disabilities may rely on immature computational strategies. Specifically, children with mathematical disabilities showed poor command of counting strategies unrelated to their ability to retrieve numeric facts. This research notes that it is difficult to discern whether poor conceptual knowledge is indicative of a qualitative deficit in number processing or simply a delay in typical mathematical development.
Working memory: Studies have found that children with dyscalculia showed impaired performance on working memory tasks compared to neurotypical children. Furthermore, research has shown that children with dyscalculia have weaker activation of the intraparietal sulcus during visuospatial working memory tasks. Brain activity in this region during such tasks has been linked to overall arithmetic performance, indicating that numerical and working memory functions may converge in the intraparietal sulcus. However, working memory problems are confounded with domain-general learning difficulties, thus these deficits may not be specific to dyscalculia but rather may reflect a greater learning deficit. Dysfunction in prefrontal regions may also lead to deficits in working memory and other executive function, accounting for comorbidity with ADHD.

Studies have also shown indications of causes due to congenital or hereditary disorders, but evidence of this is not yet concrete.
Treatment
To date, very few interventions have been developed specifically for individuals with dyscalculia. Concrete manipulation activities have been used for decades to train basic number concepts for remediation purposes. This method facilitates the intrinsic relationship between a goal, the learner's action, and the informational feedback on the action. A one-to-one tutoring paradigm designed by Lynn Fuchs and colleagues, which teaches concepts in arithmetic, number concepts, counting, and number families using games, flash cards, and manipulatives, has proven successful in children with generalized math learning difficulties, but the intervention has yet to be tested specifically on children with dyscalculia. These methods require specially trained teachers working directly with small groups or individual students. As such, instruction time in the classroom is necessarily limited. For this reason, several research groups have developed computer adaptive training programs designed to target deficits unique to dyscalculic individuals.

Software intended to remediate dyscalculia has been developed. While computer adaptive training programs are modeled after one-to-one type interventions, they provide several advantages. Most notably, individuals are able to practice more with a digital intervention than is typically possible with a class or teacher. As with one-to-one interventions, several digital interventions have also proven successful in children with generalized math learning difficulties. Räsänen and colleagues have found that games such as The Number Race and Graphogame-math can improve performance on number comparison tasks in children with generalized math learning difficulties.

Several digital interventions have been developed for dyscalculics specifically. Each attempts to target basic processes that are associated with maths difficulties. Rescue Calcularis was one early computerized intervention that sought to improve the integrity of and access to the mental number line. Other digital interventions for dyscalculia adapt games, flash cards, and manipulatives to function through technology.

While each intervention claims to improve basic numerosity skills, the authors of these interventions do admit that repetition and practice effects may be a factor involved in reported performance gains. An additional criticism is that these digital interventions lack the option to manipulate numerical quantities. While the previous two games provide the correct answer, the individual using the intervention cannot actively determine, through manipulation, what the correct answer should be. Butterworth and colleagues argued that games like The Number Bonds, which allows an individual to compare different-sized rods, should be the direction that digital interventions move toward. Such games use manipulation activities to provide intrinsic motivation toward content guided by dyscalculia research. One of these serious games is Meister Cody – Talasia, an online training that includes the CODY Assessment, a diagnostic test for detecting dyscalculia. Based on these findings, Dybuster Calcularis was extended with adaptation algorithms and game forms allowing manipulation by the learners. It was found to improve addition, subtraction and number line tasks, and was made available as Dybuster Calcularis.

A study applied transcranial direct current stimulation (tDCS) to the parietal lobe during numerical learning and demonstrated selective improvement of numerical abilities that was still present six months later in typically developing individuals. Improvements were achieved by applying anodal current to the right parietal lobe and cathodal current to the left parietal lobe and contrasting this with the reverse setup. When the same research group used tDCS in a training study with two dyscalculic individuals, the reverse setup (left anodal, right cathodal) demonstrated improvement of numerical abilities.
Epidemiology
Dyscalculia is thought to be present in 3–6% of the general population, but estimates by country and sample vary somewhat. Many studies have found prevalence rates by gender to be equivalent. Those that find a gender difference in prevalence rates often find dyscalculia higher in females, but a few studies have found prevalence rates higher in males.
History
The term dyscalculia was coined in the 1940s, but it was not completely recognized until 1974, through the work of Czechoslovakian researcher Ladislav Kosc. Kosc defined dyscalculia as "a structural disorder of mathematical abilities." His research showed that the learning disability was caused by impairments to certain parts of the brain that control mathematical calculations and not because symptomatic individuals were mentally handicapped. Researchers now sometimes use the terms "math dyslexia" or "math learning disability" when they mention the condition. Cognitive disabilities specific to mathematics were originally identified in case studies with patients who experienced specific arithmetic disabilities as a result of damage to specific regions of the brain. More commonly, dyscalculia occurs developmentally as a genetically linked learning disability which affects a person's ability to understand, remember, or manipulate numbers or number facts (e.g., the multiplication tables). The term is often used to refer specifically to the inability to perform arithmetic operations, but it is also defined by some educational professionals and cognitive psychologists, such as Stanislas Dehaene and Brian Butterworth, as a more fundamental inability to conceptualize numbers as abstract concepts of comparative quantities (a deficit in "number sense"), which these researchers consider to be a foundational skill upon which other mathematics abilities build. Symptoms of dyscalculia include delayed simple counting and an inability to memorize simple arithmetic facts such as adding and subtracting. Few symptoms are firmly established because little research has been done on the topic.
Etymology
The term dyscalculia dates back to at least 1949.

Dyscalculia comes from Greek and Latin and means "counting badly". The prefix "dys-" comes from Greek and means "badly". The root "calculia" comes from the Latin "calculare", which means "to count"; it is also a cognate of "calculation" and "calculus".
See also
References
Further reading
Abeel, Samantha (2003). My thirteenth winter: a memoir. New York: Orchard Books. ISBN 978-0-439-33904-9. OCLC 51536704.
Ardila A, Rosselli M (December 2002). "Acalculia and dyscalculia" (PDF). Neuropsychol Rev. 12 (4): 179–231. doi:10.1023/a:1021343508573. PMID 12539968. S2CID 2617160.
Attwood, Tony (2002). Dyscalculia in Schools: What it is and What You Can Do. First & Best in Education Ltd. ISBN 978-1-86083-614-5. OCLC 54991398.
Butterworth, Brian; Yeo, Dorian (2004). Dyscalculia Guidance: Helping Pupils with Specific Learning Difficulties in Maths. London: NferNelson. ISBN 978-0-7087-1152-1. OCLC 56974589.
Campbell, Jamie I. D. (2004). Handbook of Mathematical Cognition. Psychology Press (UK). ISBN 978-1-84169-411-5. OCLC 644354765.
Brough, Mel; Henderson, Anne; Came, Fil (2003). Working with dyscalculia: recognising dyscalculia: overcoming barriers to learning in maths. Santa Barbara, Calif: Learning Works. ISBN 978-0-9531055-2-6. OCLC 56467270.
Chinn, Stephen J. (2004). The Trouble with Maths: A Practical Guide to Helping Learners with Numeracy Difficulties </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Foster care and explain in detail? [/INST] Foster care is a system in which a minor has been placed into a ward, group home (residential child care community, treatment center, etc.), or private home of a state-certified caregiver, referred to as a "foster parent", or with a family member approved by the state. The placement of the child is normally arranged through the government or a social service agency. The institution, group home, or foster parent is compensated for expenses unless the child is placed with a family member. In some states, relative or "kinship" caregivers of children who are wards of the state are provided with a financial stipend.
The state, via the family court and child protective services agency, stands in loco parentis to the minor, making all legal decisions, while the foster parent is responsible for the day-to-day care of the minor.
Scholars and activists are concerned about the efficacy of the foster care services provided by NGOs. Specifically, this pertains to poor retention rates of social workers, which are attributed to overwork in an emotionally draining field that offers minimal monetary compensation. The lack of professionals pursuing a degree in social work, coupled with poor retention rates in the field, has led to a shortage of social workers and created large caseloads for those who choose to work and stay in the field. Poor caseworker retention also affects the overall ability to care for clients: low staffing leads to data limitations that infringe on caseworkers' ability to adequately serve clients and their families.

Foster care is correlated with a range of negative outcomes compared to the general population. Children in foster care have a high rate of ill health, particularly psychiatric conditions such as anxiety, depression, and eating disorders. One third of foster children in a US study reported abuse from a foster parent or other adult in the foster home. Nearly half of foster children in the US become homeless when they reach the age of 18, and the poverty rate is three times higher among foster care alumni than in the general population.
By country
Australia
In Australia, foster care was known as "boarding-out". Foster care had its early stages in South Australia in 1867 and stretched into the second half of the 19th century. It is said that the system was mostly run by women until the early 20th century, when control was centered in many state children's departments. "Although boarding-out was also implemented by non-government[al] child rescue organizations, many large institutions remained. These institutions assumed an increasing importance from the late 1920s when the system went into decline." The system was re-energized in the postwar era and in the 1970s, and it is still the main structure for "out-of-home care". The system took care of both local and foreign children. "The first adoption legislation was passed in Western Australia in 1896, but the remaining states did not act until the 1920s, introducing the beginnings of the closed adoption that reached its peak in the period 1940–1975. New baby adoption dropped dramatically from the mid-1970s, with the greater tolerance of and support for single mothers".
Cambodia
Foster care in Cambodia is relatively new as an official practice within the government. Despite a late start, the practice is currently making great strides within the country. Left with a large number of official and unofficial orphanages from the 1990s, the Cambodian government conducted several research projects in 2006 and 2008, pointing to the overuse of orphanages as a solution for caring for vulnerable children within the country. Most notably, the studies found that the percentage of children within orphanages who had parents approached 80%. At the same time, local NGOs like "Children In Families" began offering limited foster care services within the country. In the subsequent years, the Cambodian government began implementing policies that required the closure of some orphanages and the implementation of minimum standards for residential care institutions. These actions led to an increase in the number of NGOs providing foster care placements and helped to set the course for care reform around the country. As of 2015, the Cambodian government is working with UNICEF, USAID, several governments, and many local NGOs in continuing to build the capacity for child protection and foster care within the Kingdom.
Canada
Foster children in Canada are known as permanent wards (crown wards in Ontario). A ward is someone, in this case a child, placed under the protection of a legal guardian and is the legal responsibility of the government. Census data from 2011 counted children in foster care for the first time, recording 47,885 children in care. The majority of foster children (29,590, or about 62 per cent) were aged 14 and under. The wards remain under the care of the government until they "age out of care", at which point all ties to the government are severed and it no longer has any legal responsibility toward the youth. This age varies depending on the province.
Israel
In December 2013, the Israeli Knesset approved a bill co-drafted by the Israel National Council for the Child to regulate the rights and obligations of participants in the foster care system in Israel.
Japan
The idea of foster care, in the sense of taking in abandoned children, came about in Japan roughly between 1392 and the 1490s. The early Japanese system was similar to the later American Orphan Trains in that, like Charles Loring Brace, the Japanese thought children would do better on farms rather than living in the "dusty city". Families would often send their children to a farm family outside the village and keep only their oldest son. The farm families served as the foster parents and were financially rewarded for taking in the younger siblings. "It was considered an honor to be chosen as foster parents, and selection greatly depended on the family's reputation and status within the village". Around 1895 the foster care program became more like the system used in the United States, as the Tokyo Metropolitan Police sent children to a hospital where they would be "settled". Problems such as child abuse emerged in this system, so the government started phasing it out and "began increasing institutional facilities". In 1948 the Child Welfare Law was passed, increasing official oversight and creating better conditions for the children to grow up in.
United Kingdom
In the United Kingdom, foster care and adoption have always been an option, "in the sense of taking other people's children into their homes and looking after them on a permanent or temporary basis". However, the practice had no legal foundation until the 20th century. The UK had "wardship", under which the family taking in the child was granted custody by the Chancery Court. Wardship was not used very often because it did not give the guardian "parental rights". The 19th century saw a "series of baby farming scandals". At the end of the 19th century the practice came to be called "boarding-out", as in Australia, and children were also placed in orphanages and workhouses. "The First World War saw an increase in organized adoption through adoption societies and child rescue organizations, and pressure grew for adoption to be given legal status." The first laws based on adoption and foster care were passed in 1926. "The peak number of adoptions was in 1968; since then there has been an enormous decline in adoption in the United Kingdom. The main reasons for children being adopted in the United Kingdom had been unmarried mothers giving up their children for adoption and stepparents adopting their new partners' children".
United States
In the United States, foster care started as a result of the efforts of Charles Loring Brace. "In the mid 19th Century, some 30,000 homeless or neglected children lived in the New York City streets and slums." Brace took these children off the streets and placed them with families in most states in the country. He believed the children would do best with a Christian farm family, and he did this to save them from "a lifetime of suffering". He sent these children to families by train, which gave the movement its name: the Orphan Train Movement. "This lasted from 1853 to the early 1890s and transported more than 120,000 children to new lives." When Brace died in 1890, his sons took over his work at the Children's Aid Society until they retired. The Children's Aid Society created "a foster care approach that became the basis for the federal Adoption and Safe Families Act of 1997", called Concurrent Planning. This greatly impacted the foster care system.
From August 1999 to August 2019, 9,073,607 American children were removed from their families and placed in foster homes, according to the federal government's Adoption and Foster Care Analysis and Reporting System. As last reported in August 2019, 437,238 children nationally had been removed from their families and placed in foster homes, according to the same system.

- 24% of foster children are between the ages of 0 and 2
- 18% of foster children are between the ages of 3 and 5
- 28% of foster children are between the ages of 6 and 12
- 40% of foster children are between the ages of 13 and 21
- Average number of birthdays a child spends in foster care: 2
- 22% of children had three or more placements during a 20-month period in foster care
- 91% of foster children under the age of 2 are adopted
France
In France, foster families are called familles d'accueil (literally "welcome families"). Foster homes must obtain official approval from the government in order to welcome a minor or an elderly person. In order to receive this approval they must complete a training course, and their home is inspected to be sure it is safe and healthy. In 2017, 344,000 minors and 15,000 elderly persons were welcomed in foster homes.
Placement
Family-based foster care is generally preferred to other forms of out-of-home care. Foster care is intended to be a short-term solution until a permanent placement can be made. In most states, the primary objective is to reconcile children with the biological parents. However, if the parents are unable or unwilling to care for the child, or if the child is an orphan, then the first choice of adoptive parents is a relative such as an aunt, uncle or grandparent, known as kinship care. Most kinship care is done informally, without the involvement of a court or public organization. However, in the United States, formal kinship care is increasingly common. In 2012, a quarter of all children in formal foster care were placed with relatives instead of being placed into the system.

If no related family member is willing or able to adopt, the next preference is for the child to be adopted by the foster parents or by someone else involved in the child's life (such as a teacher or coach). This is to maintain continuity in the child's life. If neither of the above options is available, the child may be adopted by someone who is a stranger to the child.
If none of these options are viable, the plan for the minor may be to enter OPPLA (Other Planned Permanent Living Arrangement). This option allows the child to stay in the custody of the state; the child can remain placed in a foster home, with a relative, or in a long-term care facility, such as a residential child care community or, for children with developmental disabilities, physical disabilities or mental disabilities, a treatment center.
671,000 children were served by the foster care system in the United States in 2015. "After declining more than 20 percent between FY 2006 and FY 2012 to a low of 397,000, the number of children in foster care on the last day of the fiscal year increased to 428,000 in FY 2015, with a slightly higher percent change from 2014 to 2015 (3.3%) than observed from 2013 to 2014 (3.2%)." Since FY 2012, the number of children in foster care at the end of each FY has steadily increased.

The median amount of time a child spent in foster care in the U.S. in 2015 was 13.5 months. That year, 74% of children spent less than two years in foster care, while 13% were in care for three or more years. Of the estimated 427,910 children in foster care on September 30, 2015: 43 percent were White, 24 percent were African-American, 21 percent were Hispanic (of any race), 10 percent were other races or multiracial, and 2 percent were unknown or unable to be determined.

Children may enter foster care voluntarily or involuntarily. Voluntary placement may occur when a biological parent or lawful guardian is unable to care for a child. Involuntary placement occurs when a child is removed from their biological parent or lawful guardian due to the risk or actual occurrence of physical or psychological harm, or if the child has been orphaned. In the US, most children enter foster care due to neglect. If a biological parent or legal guardian is unwilling to care for a child, the child is deemed to be dependent and is placed under the care of the child protection agency. The policies regarding foster care as well as the criteria to be met in order to become a foster parent vary according to legal jurisdiction.
Especially egregious failures of child protective services often serve as a catalyst for increased removal of children from the homes of biological parents. An example is the brutal torture and murder of 17-month-old Peter Connelly, a British toddler who died in the London Borough of Haringey, North London, after suffering more than 50 severe injuries over an eight-month period, including eight broken ribs and a broken back. Throughout the period in which he was being tortured, he was repeatedly seen by Haringey Children's services and NHS health professionals. Haringey Children's services had already failed ten years earlier in the case of Victoria Climbié. Since his death in 2007, cases have reached a record rate in England, surpassing 10,000 in the reporting year ending in March 2012.
Abuse and negligence
From 1993 through 2002, there were 107 recorded deaths; there are approximately 400,000 children in out-of-home care in the United States. Almost 10% of children in foster care have stayed in foster care for five or more years. Nearly half of all children in foster care have chronic medical problems. 8% of all children in foster care have serious emotional problems, and 11% of children exiting foster care in 2011 aged out of the system. Children in foster care experience high rates of child abuse, emotional deprivation, and physical neglect. In one study in the United Kingdom, "foster children were 7–8 times, and children in residential care 6 times more likely to be assessed by a pediatrician for abuse than a child in the general population". A study of foster children in Oregon and Washington State found that nearly one third reported being abused by a foster parent or another adult in a foster home. The "Parent Trauma Response Questionnaire" states that parental overprotection can be just as harmful psychologically as neglect.
Development
As of 2019, the majority of children in the foster care system were under 8 years of age. These early years are quite important for the physical and mental development of children. More specifically, these early years are most important for brain development. Stressful and traumatic experiences have been found to have long-term negative consequences for brain development in children, whereas talking, singing, and playing can help encourage brain growth. Since the majority of children are removed from their homes due to neglect, many of these children did not experience the stable and stimulating environments needed to promote this necessary growth. In a research study conducted at the University of Minnesota, researchers found that children placed in non-parental homes, such as foster homes, showed significant behavior problems and higher levels of internalizing problems in comparison to children in traditional families and even children who were mistreated by caregivers. According to an article by Elizabeth Curry titled "The five things you should know about how orphanage life affects children", a child who has lived in an orphanage or a home for multiple children will have learned survival skills but lacks family skills, having never experienced permanency.
Medical and psychiatric disorders
A higher prevalence of physical, psychological, cognitive and epigenetic disorders for children in foster care has been established in studies in various countries. The Casey Family Programs Northwest Foster Care Alumni Study was a fairly extensive study of various aspects of children who had been in foster care.
Individuals who were in foster care experience higher rates of physical and psychiatric morbidity than the general population, and often struggle to trust others, which can lead to placements breaking down. In the Casey study of foster children in Oregon and Washington state, they were found to have double the incidence of depression (20% as compared to 10%) and a higher rate of posttraumatic stress disorder (PTSD) than combat veterans, with 25% of those studied having PTSD. Children in foster care have a higher probability of having attention deficit hyperactivity disorder (ADHD), deficits in executive functioning, and anxiety, as well as other developmental problems. These children experience higher degrees of incarceration, poverty, homelessness, and suicide. Studies in the U.S. have suggested that some foster care placements may be more detrimental to children than remaining in a troubled home, but a more recent study suggested that these findings may have been affected by selection bias, and that foster care has little effect on behavioral problems.
Neurodevelopment
Foster children have elevated levels of cortisol, a stress hormone, in comparison to children raised by their biological parents. Elevated cortisol levels can compromise the immune system. (Harden BJ, 2004). Most of the processes involved in typical neurodevelopment are predicated upon the establishment of close nurturing relationships and environmental stimulation. Negative environmental influences during this critical period of brain development can have lifelong consequences.
Post traumatic stress disorder
Children in foster care have a higher incidence of posttraumatic stress disorder (PTSD). In one study, 60% of children in foster care who had experienced sexual abuse had PTSD, and 42% of those who had been physically abused met the PTSD criteria. PTSD was also found in 18% of the children who were not abused. These children may have developed PTSD due to witnessing violence in the home. (Marsenich, 2002).
To determine whether a child has PTSD, clinicians can use the PTSD module of the anxiety disorders interview, which is considered a reliable instrument for establishing whether a child has post-traumatic stress disorder due to physical, sexual, or mental abuse. In a study conducted in Oregon and Washington state, the rate of PTSD in adults who had been in foster care for one year between the ages of 14–18 was found to be higher than that of combat veterans, with 25 percent of those in the study meeting the diagnostic criteria, as compared to 12–13 percent of Iraq war veterans, 15 percent of Vietnam war veterans, and a rate of 4% in the general population. The recovery rate for foster home alumni was 28.2%, as opposed to 47% in the general population.
"More than half the study participants reported clinical levels of mental illness, compared to less than a quarter of the general population".
Eating disorders
Foster children are at increased risk for a variety of eating disorders in comparison to the general population. In a study done in the United Kingdom, 35% of foster children experienced an increase in Body Mass Index (BMI) once in care. Food Maintenance Syndrome is characterized by a set of aberrant eating behaviors of children in foster care. It is "a pattern of excessive eating and food acquisition and maintenance behaviors without concurrent obesity"; it resembles "the behavioral correlates of Hyperphagic Short Stature". It is hypothesized that this syndrome is triggered by the stress and maltreatment foster children are subjected to; it was prevalent among 25 percent of a study group in New Zealand. Bulimia nervosa is seven times more prevalent among former foster children than in the general population.
Poverty and homelessness
Nearly half of foster children in the U.S. become homeless when they turn 18. One of every 10 foster children stays in foster care longer than seven years, and each year about 15,000 reach the age of majority and leave foster care without a permanent family; many join the ranks of the homeless or commit crimes and are imprisoned. Three out of 10 of the homeless in the United States are former foster children. According to the results of the Casey Family Study of Foster Care Alumni, up to 80 percent are doing poorly, with a quarter to a third of former foster children at or below the poverty line, three times the national poverty rate. Very frequently, people who are homeless had multiple placements as children: some were in foster care, but others experienced "unofficial" placements in the homes of family or friends.
Individuals with a history of foster care tend to become homeless at an earlier age than those who were not in foster care. The length of time a person remains homeless is longer in individuals who were in foster care.
Suicide rate
Children in foster care are at a greater risk of suicide, and the increased risk persists after leaving foster care. In a small study of twenty-two Texan youths who aged out of the system, 23 percent had a history of suicide attempts. A Swedish study utilizing the data of almost one million people, including 22,305 former foster children who had been in care prior to their teens, concluded:
Former child welfare clients were in year of birth and sex standardised risk ratios (RRs) four to five times more likely than peers in the general population to have been hospitalised for suicide attempts....Individuals who had been in long-term foster care tended to have the most dismal outcome...former child welfare/protection clients should be considered a high-risk group for suicide attempts and severe psychiatric morbidity.
Death rate
Children in foster care have an overall higher mortality rate than children in the general population. A study conducted in Finland among current and former foster children up to age 24 found a higher mortality rate due to substance abuse, accidents, suicide and illness. The deaths due to illness were attributed to an increased incidence of acute and chronic medical conditions and developmental delays among children in foster care.
Georgia Senator Nancy Schaefer published a report "The Corrupt Business of Child Protective Services" stating: "The National Center on Child Abuse and Neglect in 1998 reported that six times as many children died in foster care than in the general public and that once removed to official "safety", these children are far more likely to suffer abuse, including sexual molestation than in the general population".
Academic prospects
Educational outcomes of ex-foster children in the Northwest Alumni Study:
56% completed high school compared to 82% of the general population, although an additional 29% of former foster children received a G.E.D. compared to an additional 5% of the general population.
42.7% completed some education beyond high school.
20.6% completed any degree or certificate beyond high school.
16.1% completed a vocational degree; 21.9% for those over 25.
1.8% completed a bachelor's degree; 2.7% for those over 25. The completion rate for the general population in the same age group is 24%, a sizable difference.
The study reviewed case records for 659 foster care alumni in the Northwest USA, and interviewed 479 of them between September 2000 and January 2002.
Higher education
Approximately 10% of foster youth make it to college, and of those 10%, only about 3% actually graduate and obtain a 4-year degree. Although the number of foster youth who start at a 4-year university after high school has increased over the years, the number of youth who graduate from college remains stable. In a study of 712 youth in California, the results revealed that foster care youth are five times less likely to attend college than youth who do not go through foster care. There are different resources that offer both financial and emotional support for foster youth to continue their education. At the same time, there are many barriers that make getting to a college or university difficult.
Borton describes some of the barriers youth face in her article, Barriers to Post-Secondary Enrollment for Former Foster Youth. A few of those barriers include financial hurdles, navigating the application process with little to no support, and lack of housing. Many studies have shown that a few factors have seemingly played a role in the success of foster youth making it to and graduating from a college or university. While having financial resources for foster youth is a huge help, there are other components to consider, beginning with support for these youth at the high school level. In order for foster youth to obtain a college degree, they must first enroll at a university.
Out of the different factors that play a role in increasing college enrollment, such as youth participating in extended foster care and reading ability, youth who received assistance or had supportive relationships with adults were more likely to enroll at a university than youth who did not have supportive relationships. At colleges across the nation, there are programs specifically put in place to help youth who have aged out of the foster care system and continued into higher education. These programs often help youth financially by giving them supplemental funds and providing support through peer mentor programs or academic counseling services. While funding is an important key to helping youth get through college, it hasn't been found to be the only crucial component in aiding a youth's success.
A study done by Jay and colleagues provides insight into what youth view as important in helping them thrive on a college campus. The study, which had a sample of 51 foster youth, used Conceptual Mapping to break down the different components of support that may be important for youth to receive on a college campus. It is important to take into account the different factors that can be helpful for youth at a university and to look beyond providing financial support.
Psychotropic medication use
Studies have revealed that youth in foster care covered by Medicaid insurance receive psychotropic medication at a rate 3 times higher than that of Medicaid-insured youth who qualify by low family income. In a review (September 2003 to August 2004) of the medical records of 32,135 Texas foster care youth aged 0–19, 12,189 were prescribed psychotropic medication, resulting in an annual prevalence of 37.9% of these children being prescribed medication. 41.3% received 3 different classes of these drugs during July 2004, and 15.9% received 4 different classes. The most frequently used medications were antidepressants (56.8%), attention-deficit/hyperactivity disorder drugs (55.9%), and antipsychotic agents (53.2%). The study also showed that youth in foster care are frequently treated with concomitant psychotropic medication, for which sufficient evidence regarding safety and effectiveness is not available. The use of expensive, brand-name, patent-protected medication was prevalent. In the case of SSRIs, the use of the most expensive medications was noted to be 74%; in the general market, only 28% of prescriptions are for brand-name SSRIs versus generics. The average out-of-pocket expense per prescription was $34.75 for generics and $90.17 for branded products, a $55.42 difference.
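As a quick arithmetic check, the prevalence and cost figures quoted above are internally consistent (all numbers taken directly from the reported study):

\[
\frac{12{,}189}{32{,}135} \approx 0.3793 \approx 37.9\%,
\qquad
\$90.17 - \$34.75 = \$55.42
\]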
Therapeutic intervention
Children in the child welfare system have often experienced significant and repeated traumas, and a background in foster homes, especially in instances of sexual abuse, can be the precipitating factor in a wide variety of psychological and cognitive deficits; it may also serve to obfuscate the true cause of underlying issues. The foster care experience may have nothing to do with the symptoms, or, on the other hand, a disorder may be exacerbated by having a history of foster care and attendant abuses. The human brain, however, has been shown to have a fair degree of neuroplasticity, and adult neurogenesis has been shown to be an ongoing process. Multidimensional Treatment Foster Care (MTFC), also referred to as Treatment Foster Care Oregon (TFCO) and Treatment Foster Care (TFC), is a community-based intervention created in 1983 by Dr. Patricia Chamberlain and her colleagues, initially designed to offer a replacement for group facilities. MTFC has differing approaches for different age groups. Preschoolers receive "a behavior-management approach and intensively trains, supervises, and supports foster caregivers to provide positive adult support and consistent limit setting" coupled with "coordinated interventions with the child's biological parents." MTFC for adolescents consists of individual placement with an intensively trained foster family providing "coordinated interventions in the home, with peers, [and] in educational settings." MTFC has been shown to provide better results than group facilities and proves to be more cost-effective. Reports show that multidimensional treatment is effective in reducing depression, arrest rates, deviant peer affiliations, placement disruption, and pregnancy rates, and has had positive replication trials. It is one method that attempts to incorporate trauma- and violence-informed care into its design. Researchers have faced difficulty in accurately assessing what makes MTFC and other similar programs that involve multiple levels of intervention successful. It remains a "black box" scenario in which it is unclear which aspect of the treatment plan is actually producing positive effects. Multiple peer-reviewed research articles on foster care programs point out a lack of research effectively evaluating the outcomes of specific foster care programs, calling for more complete assessments to be conducted in order to properly compare outcomes between treatment plans and evaluate which practices in MTFC are most effective. Ethical concerns have also been raised by Therese Åström and associated researchers in a systematic review conducted on behalf of the Swedish Agency for Health Technology Assessment and Assessment of Social Services in 2018, noting that while MTFC is evaluated as effective, it tends to be implemented in a way that diminishes the child's agency.
Cross-cultural adoption policies
George Shanti, Nico Van Oudenhoven, and Ekha Wazir, co-authors of Foster Care Beyond the Crossroads: Lessons from an International Comparative Analysis, say that there are four types of government foster care systems. The first is that of developing countries, which do not have policies implemented to take care of the basic needs of these children; the children mostly receive assistance from relatives. The second system is that of former socialist governments, whose historical context has not allowed their foster care systems to evolve. NGOs have urged them to evolve; however, the traditional system of institutionalizing these children is still in place. Thirdly, liberal democracies lack support from their political systems to take care of these children, even though they have the resources. Finally, social democracies are the most advanced governments in regard to their foster care systems, with massive infrastructure, funding, and support systems in place to help foster children.
Adoption
Foster care adoption is a type of domestic adoption where the child is initially placed into a foster care system and is subsequently placed for adoption.
Children may be placed into foster care for a variety of reasons, including removal from the home by a governmental agency because of maltreatment. In some jurisdictions, adoptive parents are licensed as, and technically considered, foster parents while the adoption is being finalized. According to the U.S. Department of Health and Human Services Children's Bureau, there were approximately 408,425 children in foster care in 2010. Of those children, twenty-five percent had a goal of adoption. In 2015, 243,060 children exited foster care, and twenty-two percent were adopted. Nationwide, more than one hundred thousand children in the U.S. foster care system are waiting for permanent families.
Outcomes
Youth who are aging out of foster care often face difficulties in transitioning into adulthood, especially in terms of finding stable housing, employment, finances, and educational opportunities. The suspected reason for these difficulties is a lack of stability experienced while in the foster care system, and the reported abuse and/or neglect in their childhood, which may affect their ability to cope with significant life changes. In the United States, there are independent living programs designed with the intent to serve the needs of transitioning foster youth. However, youth aging out of foster care have indicated that these programs fail to fully address the needs of young adults without familial assistance. In a study conducted by Gypen et al. (2017), involving a cross-database analysis of research articles relevant to the outcomes of former foster youth, they found that outcomes for youth who have aged out of the foster care system, in terms of education, mental health, employment, income, stable housing, criminal involvement and substance abuse, are substantially poorer than those of their peers </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Intracerebral hemorrhage and explain in detail? [/INST] Intracerebral hemorrhage (ICH), also known as cerebral bleed, intraparenchymal bleed, and hemorrhagic stroke, or haemorrhagic stroke, is a sudden bleeding into the tissues of the brain, into its ventricles, or into both. It is one kind of bleeding within the skull and one kind of stroke. Symptoms can include headache, one-sided weakness, vomiting, seizures, decreased level of consciousness, and neck stiffness. Often, symptoms get worse over time. Fever is also common.Causes include brain trauma, aneurysms, arteriovenous malformations, and brain tumors. The biggest risk factors for spontaneous bleeding are high blood pressure and amyloidosis. Other risk factors include alcoholism, low cholesterol, blood thinners, and cocaine use. Diagnosis is typically by CT scan. Other conditions that may present similarly include ischemic stroke.Treatment should typically be carried out in an intensive care unit. Guidelines recommend decreasing the blood pressure to a systolic of 140 mmHg. Blood thinners should be reversed if possible and blood sugar kept in the normal range. Surgery to place a ventricular drain may be used to treat hydrocephalus, but corticosteroids should not be used. Surgery to remove the blood is useful in certain cases.Cerebral bleeding affects about 2.5 per 10,000 people each year. It occurs more often in males and older people. About 44% of those affected die within a month. A good outcome occurs in about 20% of those affected. Intracerebral hemorrhage, a type of hemorrhagic stroke, was first distinguished from ischemic strokes due to insufficient blood flow, so called "leaks and plugs", in 1823.
Signs and symptoms
People with intracerebral bleeding have symptoms that correspond to the functions controlled by the area of the brain that is damaged by the bleed. These localizing signs and symptoms can include hemiplegia (or weakness localized to one side of the body) and paresthesia (loss of sensation) including hemisensory loss (if localized to one side of the body). These symptoms are usually rapid in onset, sometimes occurring in minutes, but not as rapid as the symptom onset in ischemic stroke. Other symptoms include those that indicate a rise in intracranial pressure caused by a large mass (due to hematoma expansion) putting pressure on the brain. These symptoms include headaches, nausea, vomiting, a depressed level of consciousness, stupor and death. Continued elevation in the intracranial pressure and the accompanying mass effect may eventually cause brain herniation (when different parts of the brain are displaced or shifted to new areas in relation to the skull and surrounding dura mater supporting structures). Brain herniation is associated with hyperventilation, extensor rigidity, pupillary asymmetry, pyramidal signs, coma and death. Hemorrhage into the basal ganglia or thalamus causes contralateral hemiplegia due to damage to the internal capsule. Other possible symptoms include gaze palsies or hemisensory loss. Intracerebral hemorrhage into the cerebellum may cause ataxia, vertigo, incoordination of limbs and vomiting. Some cases of cerebellar hemorrhage lead to blockage of the fourth ventricle with subsequent impairment of drainage of cerebrospinal fluid from the brain. The ensuing hydrocephalus, or fluid buildup in the ventricles of the brain leads to a decreased level of consciousness and coma. Brainstem hemorrhage most commonly occurs in the pons and is associated with cranial nerve palsies, pinpoint (but reactive) pupils, gaze palsies, facial weakness, and coma (if there is damage to the reticular activating system).
Causes
Intracerebral bleeds are the second most common cause of stroke, accounting for 10% of hospital admissions for stroke. High blood pressure raises the risks of spontaneous intracerebral hemorrhage by two to six times. More common in adults than in children, intraparenchymal bleeds are usually due to penetrating head trauma, but can also be due to depressed skull fractures. Acceleration-deceleration trauma, rupture of an aneurysm or arteriovenous malformation (AVM), and bleeding within a tumor are additional causes. Amyloid angiopathy is not an uncommon cause of intracerebral hemorrhage in patients over the age of 55. A very small proportion is due to cerebral venous sinus thrombosis. Risk factors for ICH include:
Hypertension (high blood pressure)
Diabetes mellitus
Menopause
Excessive alcohol consumption
Severe migraine
Hypertension is the strongest risk factor associated with intracerebral hemorrhage, and long-term control of elevated blood pressure has been shown to reduce the incidence of hemorrhage. Cerebral amyloid angiopathy, a disease characterized by deposition of amyloid beta peptides in the walls of the small blood vessels of the brain, leading to weakened blood vessel walls and an increased risk of bleeding, is also an important risk factor for the development of intracerebral hemorrhage. Other risk factors include advancing age (usually with a concomitant increase of cerebral amyloid angiopathy risk in the elderly), use of anticoagulants or antiplatelet medications, the presence of cerebral microbleeds, chronic kidney disease, and low low-density lipoprotein (LDL) levels (usually below 70). The direct oral anticoagulants (DOACs), such as the factor Xa inhibitors or direct thrombin inhibitors, are thought to have a lower risk of intracerebral hemorrhage as compared to the vitamin K antagonists such as warfarin. Cigarette smoking may be a risk factor, but the association is weak. Traumatic intracerebral hematomas are divided into acute and delayed. Acute intracerebral hematomas occur at the time of the injury, while delayed intracerebral hematomas have been reported from as early as 6 hours post injury to as long as several weeks.
Diagnosis
Both computed tomography angiography (CTA) and magnetic resonance angiography (MRA) have been proven effective in diagnosing intracranial vascular malformations after ICH. Frequently, a CT angiogram will be performed in order to exclude a secondary cause of hemorrhage or to detect a "spot sign".
Intraparenchymal hemorrhage can be recognized on CT scans because blood appears brighter than other tissue and is separated from the inner table of the skull by brain tissue. The tissue surrounding a bleed is often less dense than the rest of the brain because of edema, and therefore shows up darker on the CT scan. The edema surrounding the hemorrhage increases rapidly in size over the first 48 hours and reaches its maximum extent around day 14. The larger the hematoma, the larger its surrounding edema.
Location
When due to high blood pressure, intracerebral hemorrhages typically occur in the putamen (50%) or thalamus (15%), cerebrum (10–20%), cerebellum (10–13%), pons (7–15%), or elsewhere in the brainstem (1–6%).
Treatment
Treatment depends substantially on the type of ICH. Rapid CT scan and other diagnostic measures are used to determine proper treatment, which may include both medication and surgery.
Tracheal intubation is indicated in people with decreased level of consciousness or other risk of airway obstruction.
IV fluids are given to maintain fluid balance, using isotonic rather than hypotonic fluids. Rapid lowering of the blood pressure in those with hypertensive emergency can give higher rates of functional recovery at 90 days post intracerebral haemorrhage, compared to those who underwent other treatments such as mannitol administration, reversal of anticoagulation (for those previously on anticoagulant treatment for other conditions), surgery to evacuate the haematoma, and standard rehabilitation care in hospital, while showing a similar death rate of 12%.
Medication
One review found that antihypertensive therapy to bring down the blood pressure in the acute phase appears to improve outcomes. Other reviews found an unclear difference between intensive and less intensive blood pressure control. The American Heart Association and American Stroke Association guidelines in 2015 recommended decreasing the blood pressure to an SBP of 140 mmHg; however, as of 2015 the evidence for this was only tentative.
Giving Factor VIIa within 4 hours limits the bleeding and formation of a hematoma. However, it also increases the risk of thromboembolism, so overall it does not result in better outcomes in those without hemophilia.
Frozen plasma, vitamin K, protamine, or platelet transfusions may be given in case of a coagulopathy. Platelets, however, appear to worsen outcomes in those with spontaneous intracerebral bleeding who are on antiplatelet medication.
The specific reversal agents idarucizumab and andexanet alfa may be used to stop continued intracerebral hemorrhage in people taking directly oral acting anticoagulants (such as factor Xa inhibitors or direct thrombin inhibitors). However, if these specialized medications are not available, prothrombin complex concentrate may also be used.
Fosphenytoin or other anticonvulsant is given in case of seizures or lobar hemorrhage.
H2 antagonists or proton pump inhibitors are commonly given to try to prevent stress ulcers, a condition linked with ICH.
Corticosteroids were thought to reduce swelling. However, in large controlled studies, corticosteroids have been found to increase mortality rates and are no longer recommended.
Surgery
Surgery is required if the hematoma is greater than 3 cm (1 in), if there is a structural vascular lesion, or if there is lobar hemorrhage in a young patient.
A catheter may be passed into the brain vasculature to close off or dilate blood vessels, avoiding invasive surgical procedures.
Aspiration by stereotactic surgery or endoscopic drainage may be used in basal ganglia hemorrhages, although successful reports are limited.
A craniectomy holds promise of reduced mortality, but the effects on long-term neurological outcome remain controversial.
Prognosis
The risk of death from an intraparenchymal bleed in traumatic brain injury is especially high when the injury occurs in the brain stem. Intraparenchymal bleeds within the medulla oblongata are almost always fatal, because they cause damage to cranial nerve X, the vagus nerve, which plays an important role in blood circulation and breathing. This kind of hemorrhage can also occur in the cortex or subcortical areas, usually in the frontal or temporal lobes when due to head injury, and sometimes in the cerebellum. Larger volumes of hematoma at hospital admission, as well as greater expansion of the hematoma on subsequent evaluation (usually occurring within 6 hours of symptom onset), are associated with a worse prognosis. Perihematomal edema, or secondary edema surrounding the hematoma, is associated with secondary brain injury, worsening neurological function, and poor outcomes. Intraventricular hemorrhage, or bleeding into the ventricles of the brain, which may occur in 30–50% of patients, is also associated with long-term disability and a poor prognosis. Brain herniation is associated with poor prognoses. For spontaneous intracerebral hemorrhage seen on CT scan, the death rate (mortality) is 34–50% by 30 days after the injury, and half of the deaths occur in the first 2 days. Even though the majority of deaths occur in the first few days after ICH, survivors have a long-term excess mortality rate of 27% compared to the general population. Of those who survive an intracerebral hemorrhage, 12–39% are independent with regard to self-care; others are disabled to varying degrees and require supportive care.
Epidemiology
The incidence of intracerebral hemorrhage is estimated at 24.6 cases per 100,000 person years with the incidence rate being similar in men and women. The incidence is much higher in the elderly, especially those who are 85 or older, who are 9.6 times more likely to have an intracerebral hemorrhage as compared to those of middle age. It accounts for 20% of all cases of cerebrovascular disease in the United States, behind cerebral thrombosis (40%) and cerebral embolism (30%).
History
Intracerebral hemorrhage was first distinguished from strokes due to insufficient blood flow, so called "leaks and plugs", in 1823 </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Legionnaires disease and explain in detail? [/INST] Legionnaires disease is a form of atypical pneumonia caused by any species of Legionella bacteria, quite often Legionella pneumophila. Signs and symptoms include cough, shortness of breath, high fever, muscle pains, and headaches. Nausea, vomiting, and diarrhea may also occur. This often begins 2–10 days after exposure.A legionellosis is any disease caused by Legionella, including Legionnaires disease (a pneumonia), Pontiac fever (a nonpneumonia illness), and Pittsburgh pneumonia, but Legionnaires disease is the most common, so mentions of legionellosis often refer to Legionnaires disease.
The bacterium is found naturally in fresh water. It can contaminate hot water tanks, hot tubs, and cooling towers of large air conditioners. It is usually spread by breathing in mist that contains the bacteria. It can also occur when contaminated water is aspirated. It typically does not spread directly between people, and most people who are exposed do not become infected. Risk factors for infection include older age, a history of smoking, chronic lung disease, and poor immune function. Those with severe pneumonia and those with pneumonia and a recent travel history should be tested for the disease. Diagnosis is by a urinary antigen test and sputum culture. No vaccine is available. Prevention depends on good maintenance of water systems. Treatment of Legionnaires disease is with antibiotics. Recommended agents include fluoroquinolones, azithromycin, or doxycycline. Hospitalization is often required. The fatality rate is around 10% for healthy persons and 25% for those with underlying conditions. The number of cases that occur globally is not known. Legionnaires disease is the cause of an estimated 2–9% of pneumonia cases that are acquired outside of a hospital. An estimated 8,000 to 18,000 cases a year in the United States require hospitalization. Outbreaks of disease account for a minority of cases. While it can occur any time of the year, it is more common in the summer and fall. The disease is named after the outbreak where it was first identified, at a 1976 American Legion convention in Philadelphia.
Signs and symptoms
The length of time between exposure to the bacteria and the appearance of symptoms (incubation period) is generally 2–10 days, but can more rarely extend to as long as 20 days. For the general population, among those exposed, between 0.1 and 5.0% develop the disease, while among those in hospital, between 0.4 and 14% develop the disease. Those with Legionnaires disease usually have fever, chills, and a cough, which may be dry or may produce sputum. Almost all experience fever, while around half have cough with sputum, and one-third cough up blood or bloody sputum. Some also have muscle aches, headache, tiredness, loss of appetite, loss of coordination (ataxia), chest pain, or diarrhea and vomiting. Up to half of those with Legionnaires disease have gastrointestinal symptoms, and almost half have neurological symptoms, including confusion and impaired cognition. "Relative bradycardia", a low to normal heart rate despite the presence of a fever, may also be present. Laboratory tests may show that kidney functions, liver functions, and electrolyte levels are abnormal, which may include low sodium in the blood. Chest X-rays often show pneumonia with consolidation in the bottom portion of both lungs. Distinguishing Legionnaires disease from other types of pneumonia by symptoms or radiologic findings alone is difficult; other tests are required for definitive diagnosis. People with Pontiac fever, a much milder illness caused by the same bacterium, experience fever and muscle aches without pneumonia. They generally recover in 2–5 days without treatment. For Pontiac fever, the time between exposure and symptoms is generally a few hours to two days.
Cause
Over 90% of cases of Legionnaires disease are caused by Legionella pneumophila. Other types include L. longbeachae, L. feeleii, L. micdadei, and L. anisa.
Transmission
Legionnaires disease is usually spread by the breathing in of aerosolized water or soil contaminated with the Legionella bacteria. Experts have stated that Legionnaires disease is not transmitted from person to person. In 2014, one case of possible spread from someone sick to the caregiver occurred. Rarely, it has been transmitted by direct contact between contaminated water and surgical wounds. The bacteria grow best at warm temperatures and thrive at water temperatures between 25 and 45 °C (77 and 113 °F), with an optimum temperature of 35 °C (95 °F). Temperatures above 60 °C (140 °F) kill the bacteria. Sources where temperatures allow the bacteria to thrive include hot water tanks, cooling towers, and evaporative condensers of large air conditioning systems, such as those commonly found in hotels and large office buildings. Though the first known outbreak was in Philadelphia, cases of legionellosis have occurred throughout the world.
Reservoirs
L. pneumophila thrives in aquatic systems, where it is established within amoebae in a symbiotic relationship. Legionella bacteria survive in water as intracellular parasites of water-dwelling protozoa, such as amoebae. Amoebae are often part of biofilms, and once Legionella and infected amoebae are protected within a biofilm, they are particularly difficult to destroy. In the built environment, central air conditioning systems in office buildings, hotels, and hospitals are sources of contaminated water. Other places the bacteria can dwell include cooling towers used in industrial cooling systems, evaporative coolers, nebulizers, humidifiers, whirlpool spas, hot water systems, showers, windshield washers, fountains, room-air humidifiers, ice-making machines, and misting systems typically found in grocery-store produce sections. The bacteria may also be transmitted from contaminated aerosols generated in hot tubs if the disinfection and maintenance programs are not followed rigorously. Freshwater ponds, creeks, and ornamental fountains are potential sources of Legionella. The disease is particularly associated with hotels, fountains, cruise ships, and hospitals with complex potable water systems and cooling systems. Respiratory-care devices such as humidifiers and nebulizers used with contaminated tap water may contain Legionella species, so using sterile water is very important. Other sources include exposure to potting mix and compost.
Mechanism
Legionella spp. enter the lungs either by aspiration of contaminated water or inhalation of aerosolized contaminated water or soil. In the lung, the bacteria are consumed by macrophages, a type of white blood cell, inside of which the Legionella bacteria multiply, causing the death of the macrophage. Once the macrophage dies, the bacteria are released from the dead cell to infect other macrophages. Virulent strains of Legionella kill macrophages by blocking the fusion of phagosomes with lysosomes inside the host cell; normally, the bacteria are contained inside the phagosome, which merges with a lysosome, allowing enzymes and other chemicals to break down the invading bacteria.
Diagnosis
People of any age may develop Legionnaires disease, but the illness most often affects middle-aged and older people, particularly those who smoke cigarettes or have chronic lung disease. Immunocompromised people are also at higher risk. Pontiac fever most commonly occurs in those who are otherwise healthy. The most useful diagnostic tests detect the bacteria in coughed-up mucus, find Legionella antigens in urine samples, or allow comparison of Legionella antibody levels in two blood samples taken 3–6 weeks apart. A urine antigen test is simple, quick, and very reliable, but it only detects L. pneumophila serogroup 1, which accounts for 70% of disease caused by L. pneumophila; this means use of the urine antigen test alone may miss as many as 30% of cases. This test was developed by Richard Kohler in 1982. When dealing with L. pneumophila serogroup 1, the urine antigen test is useful for early detection of Legionnaires disease and initiation of treatment, and has been helpful in early detection of outbreaks. However, it does not identify the specific subtypes, so it cannot be used to match the person with the environmental source of infection. The Legionella bacteria can be cultured from sputum or other respiratory samples. Legionella spp. stain poorly with Gram stain, stain positive with silver, and are cultured on charcoal yeast extract with iron and cysteine (CYE agar).
A significant under-reporting problem occurs with legionellosis. Even in countries with effective health services and readily available diagnostic testing, about 90% of cases of Legionnaires disease are missed. This is partly because the disease is a relatively rare form of pneumonia, which many clinicians may not have encountered before and thus may misdiagnose. A further issue is that people with legionellosis can present with a wide range of symptoms, some of which (such as diarrhea) may distract clinicians from making a correct diagnosis.
Prevention
Although the risk of Legionnaires disease being spread by large-scale water systems cannot be eliminated, it can be greatly reduced by writing and enforcing a highly detailed, systematic water safety plan appropriate for the specific facility involved (office building, hospital, hotel, spa, cruise ship, etc.). Some of the elements that such a plan may include are:
Keep water temperature either above or below the 20–50 °C (68–122 °F) range in which the Legionella bacterium thrives (see the illustrative sketch below).
Prevent stagnation, for example, by removing from a network of pipes any sections that have no outlet (dead ends). Where stagnation is unavoidable, as when a wing of a hotel is closed for the off-season, systems must be thoroughly disinfected just prior to resuming normal operation.
Prevent the buildup of biofilm, for example, by not using (or by replacing) construction materials that encourage its development, and by reducing the quantity of nutrients for bacterial growth that enter the system.
Periodically disinfect the system, by high heat or a chemical biocide, and use chlorination where appropriate. Treatment of water with copper-silver ionization or ultraviolet light may also be effective.
System design (or renovation) can reduce the production of aerosols and reduce human exposure to them, by directing them well away from building air intakes.
An effective water safety plan also covers such matters as training, record-keeping, communication among staff, contingency plans, and management responsibilities. The format and content of the plan may be prescribed by public health laws or regulations. To inform the water safety plan, the undertaking of a site-specific legionella risk assessment is often recommended in the first instance. The legionella risk assessment identifies the hazards and the level of risk they pose, and provides recommendations for control measures to put in place within the overarching water safety plan.
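As a toy illustration of the temperature guidance in the first list item above, the following sketch flags logged water temperatures that fall inside the growth band. The sensor names and readings are hypothetical; only the 20–50 °C growth range and the 60 °C kill threshold come from the text.

```python
# Illustrative sketch only: flag logged water temperatures that fall inside
# the 20-50 °C range in which Legionella thrives (figures from the text).
# Sensor names and readings are hypothetical.
GROWTH_RANGE_C = (20.0, 50.0)   # Legionella proliferates in this band
KILL_TEMP_C = 60.0              # temperatures above this kill the bacteria

def assess(temp_c: float) -> str:
    low, high = GROWTH_RANGE_C
    if temp_c >= KILL_TEMP_C:
        return "OK: above kill temperature"
    if low <= temp_c <= high:
        return "ALERT: within Legionella growth range"
    return "OK: outside growth range"

readings = {"hot_water_tank": 58.5, "cooling_tower_basin": 33.0, "cold_main": 12.4}
for sensor, temp in readings.items():
    print(f"{sensor}: {temp:.1f} °C -> {assess(temp)}")
```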
Treatment
Effective antibiotics include most macrolides, tetracyclines, ketolides, and quinolones. Legionella spp. multiply within the cell, so any effective treatment must have excellent intracellular penetration. Current treatments of choice are the respiratory tract quinolones (levofloxacin, moxifloxacin, gemifloxacin) or newer macrolides (azithromycin, clarithromycin, roxithromycin). The antibiotics used most frequently have been levofloxacin, doxycycline, and azithromycin.
Macrolides (azithromycin) are used in all age groups, while tetracyclines (doxycycline) are prescribed for children above the age of 12 and quinolones (levofloxacin) above the age of 18. Rifampicin can be used in combination with a quinolone or macrolide, although whether rifampicin is an effective antibiotic for treatment is uncertain. The Infectious Diseases Society of America does not recommend the use of rifampicin with added regimens. Tetracyclines and erythromycin led to improved outcomes compared to other antibiotics in the original American Legion outbreak. These antibiotics are effective because they have excellent intracellular penetration in Legionella-infected cells. The recommended treatment is 5–10 days of levofloxacin or 3–5 days of azithromycin, but in people who are immunocompromised, have severe disease, or have other pre-existing health conditions, longer antibiotic use may be necessary. During outbreaks, prophylactic antibiotics have been used to prevent Legionnaires disease in high-risk individuals who have possibly been exposed. The mortality at the original American Legion convention in 1976 was high (29 deaths among 182 infected individuals) because the antibiotics used (including penicillins, cephalosporins, and aminoglycosides) had poor intracellular penetration. Mortality has plunged to less than 5% if therapy is started quickly. Delay in giving the appropriate antibiotic leads to higher mortality.
Prognosis
The fatality rate of Legionnaires disease has ranged from 5–30% during various outbreaks and approaches 50% for nosocomial infections, especially when treatment with antibiotics is delayed. Hospital-acquired Legionella pneumonia has a fatality rate of 28%, and the principal source of infection in such cases is the drinking-water distribution system.
Epidemiology
Legionnaires disease acquired its name in July 1976, when an outbreak of pneumonia occurred among people attending a convention of the American Legion at the Bellevue-Stratford Hotel in Philadelphia. Of the 182 reported cases, mostly men, 29 died. On 18 January 1977, the causative agent was identified as a previously unknown strain of bacteria, subsequently named Legionella, and the species that caused the outbreak was named Legionella pneumophila. Following this discovery, unexplained outbreaks of severe respiratory disease from the 1950s were retrospectively attributed to Legionella. Legionnaires disease also became a prominent historical example of an emerging infectious disease. Outbreaks of Legionnaires disease receive significant media attention, but this disease usually occurs in single, isolated cases not associated with any recognized outbreak. When outbreaks do occur, they are usually in the summer and early autumn, though cases may occur at any time of year. Most infections occur in those who are middle-aged or older. National surveillance systems and research studies were established early, and in recent years, improved ascertainment and changes in clinical methods of diagnosis have contributed to an upsurge in reported cases in many countries. Environmental studies continue to identify novel sources of infection, leading to regular revisions of guidelines and regulations. About 8,000 to 18,000 cases of Legionnaires disease occur each year in the United States, according to the Bureau of Communicable Disease Control. Between 1995 and 2005, over 32,000 cases of Legionnaires disease and more than 600 outbreaks were reported to the European Working Group for Legionella Infections. The data on Legionella are limited in developing countries, and Legionella-related illnesses likely are underdiagnosed worldwide. Improvements in diagnosis and surveillance in developing countries would be expected to reveal far higher levels of morbidity and mortality than are currently recognised. Similarly, improved diagnosis of human illness related to Legionella species and serogroups other than Legionella pneumophila would improve knowledge about their incidence and spread. A 2011 study successfully used modeling to predict the likely number of cases during Legionnaires outbreaks based on symptom onset dates from past outbreaks. In this way, the eventual likely size of an outbreak can be predicted, enabling efficient and effective use of public-health resources in managing an outbreak. During the COVID-19 pandemic, some researchers and organisations raised concerns about the impact of the COVID-19 lockdowns on Legionnaires disease outbreaks. Additionally, at least two people in England died from a co-infection of Legionella and SARS-CoV-2.
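The 2011 modeling study is mentioned above only in passing. As a purely illustrative sketch of the general idea (fitting a curve to daily symptom-onset counts and extrapolating the eventual outbreak size), the following snippet uses hypothetical case counts and a generic Gaussian epidemic-curve model; it is not the method of the cited study.

```python
# Illustrative sketch only: extrapolate an outbreak's final size from daily
# symptom-onset counts. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical daily counts of new cases by symptom-onset day, observed so far.
onset_day = np.arange(1, 11)                       # days since first onset
daily_cases = np.array([1, 2, 4, 7, 11, 14, 13, 10, 7, 4])

# Model the outbreak curve as a scaled Gaussian: total * pdf(day; mu, sigma),
# so the fitted "total" parameter estimates the final number of cases.
def curve(day, total, mu, sigma):
    return total * norm.pdf(day, mu, sigma)

(total, mu, sigma), _ = curve_fit(curve, onset_day, daily_cases,
                                  p0=[100, 6, 3])

print(f"Projected final outbreak size: ~{total:.0f} cases "
      f"(peak near day {mu:.1f})")
```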
Outbreaks
An outbreak is defined as two or more cases where the onset of illness is closely linked in time (weeks rather than months) and space, where a suspicion or evidence exists of a common source of infection, with or without microbiological support (i.e. common spatial location of cases from travel history).
In April 1985, 175 people in Stafford, England, were admitted to the District or Kingsmead Stafford Hospitals with chest infection or pneumonia. A total of 28 people died. Medical diagnosis showed that Legionnaires disease was responsible and the immediate epidemiological investigation traced the source of the infection to the air-conditioning cooling tower on the roof of Stafford District Hospital.
In March 1999, a large outbreak in the Netherlands occurred during the Westfriese Flora flower exhibition in Bovenkarspel; 318 people became ill and at least 32 people died. This was the second-deadliest outbreak since the 1976 outbreak and possibly the deadliest, as several people were buried before Legionnaires disease had been diagnosed.
The world's largest outbreak of Legionnaires disease happened in July 2001, with people appearing at the hospital on 7 July, in Murcia, Spain. More than 800 suspected cases were recorded by the time the last case was treated on 22 July; 636–696 of these cases were estimated and 449 confirmed (so, at least 16,000 people were exposed to the bacterium) and six died, a case-fatality rate around 1%.
In September 2005, 127 residents of a nursing home in Canada became ill with L. pneumophila. Within a week, 21 of the residents had died. Culture results at first were negative, which is not unusual, as L. pneumophila is a "fastidious" bacterium, meaning it requires specific nutrients, living conditions, or both to grow. The source of the outbreak was traced to the air-conditioning cooling towers on the nursing home's roof.
In an outbreak in lower Quebec City, Canada, 180 people were affected with 13 resulting deaths due to contaminated water in a cooling tower.
In November 2014, 302 people were hospitalized following an outbreak of legionellosis in Portugal, and seven related deaths were reported. All cases emerged in three civil parishes from the municipality of Vila Franca de Xira in the northern outskirts of Lisbon, and were treated in hospitals of the greater Lisbon area. The source is suspected to be located in the cooling towers of the fertilizer plant Fertibéria.
Twelve people were diagnosed with the disease in an outbreak in the Bronx, New York, in December 2014; the source was traced to contaminated cooling towers at a housing development. In July and August 2015, another, unrelated outbreak in the Bronx killed 12 people and made about 120 people sick; the cases arose from a cooling tower on top of a hotel. At the end of September, another person died of the disease and 13 were sickened in yet another unrelated outbreak in the Bronx. The cooling towers from which the people were infected in the latter outbreak had been cleaned during the summer outbreak, raising concerns about how well the bacteria could be controlled.
On 28 August 2015, an outbreak of Legionnaires disease was detected at San Quentin State Prison in Northern California; 81 people were sickened and the cause was sludge that had built up in cooling towers.
Between June 2015 and January 2016, 87 cases of Legionnaires disease were reported by the Michigan Department of Health and Human Services for the city of Flint, Michigan, and surrounding areas.
The outbreak may have been linked to the Flint water crisis, in which the city's water source was changed to a cheaper and inadequately treated source. Ten of those cases were fatal.
In November 2017, an outbreak was detected at Hospital de São Francisco Xavier, Lisbon, Portugal, with up to 53 people being diagnosed with the disease and five of them dying from it.
In Quincy, Illinois, at the Illinois Veterans Home, a 2015 outbreak of the disease killed 12 people and sickened more than 50 others </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Kernicterus and explain in detail? [/INST] Kernicterus is a bilirubin-induced brain dysfunction. The term was coined in 1904 by Christian Georg Schmorl. Bilirubin is a naturally occurring substance in the body of humans and many other animals, but it is neurotoxic when its concentration in the blood is too high, a condition known as hyperbilirubinemia. Hyperbilirubinemia may cause bilirubin to accumulate in the grey matter of the central nervous system, potentially causing irreversible neurological damage. Depending on the level of exposure, the effects range from clinically unnoticeable to severe brain damage and even death.
When hyperbilirubinemia increases past a mild level, it leads to jaundice, raising the risk of progressing to kernicterus. When this happens in adults, it is usually because of liver problems. Newborns are especially vulnerable to hyperbilirubinemia-induced neurological damage, because in the earliest days of life, the still-developing liver is heavily exercised by the breakdown of fetal hemoglobin as it is replaced with adult hemoglobin and the blood–brain barrier is not as developed. Mildly elevated serum bilirubin levels are common in newborns, and neonatal jaundice is not unusual, but bilirubin levels must be carefully monitored in case they start to climb, in which case more aggressive therapy is needed, usually via light therapy but sometimes even via exchange transfusion.
Classification
Acute bilirubin encephalopathy (ABE)
ABE is an acute state of elevated bilirubin in the central nervous system. Clinically, it encompasses a wide range of symptoms. These include lethargy, decreased feeding, hypotonia or hypertonia, a high-pitched cry, spasmodic torticollis, opisthotonus, setting sun sign, fever, seizures, and even death. If the bilirubin is not rapidly reduced, ABE quickly progresses to chronic bilirubin encephalopathy.
Chronic bilirubin encephalopathy (CBE)
CBE is a chronic state of severe bilirubin-induced neurological lesions. Reduction of bilirubin in this state will not reverse the sequelae. Clinically, manifestations of CBE include:
movement disorders – dyskinetic CP, often with spasticity. 60% have severe motor disability (unable to walk).
auditory dysfunction – auditory neuropathy (ANSD)
visual/oculomotor impairments (nystagmus, strabismus, impaired upward or downward gaze, and/or cortical visual impairment). In rare cases, decreased visual acuity (blindness) can occur.
dental enamel hypoplasia/dysplasia of the deciduous teeth
gastroesophageal reflux
impaired digestive function.
slightly decreased intellectual function, although most individuals (approximately 85%) with kernicterus fall in the normal or dull-normal range.
epilepsy is uncommon. These impairments are associated with lesions in the basal ganglia, auditory nuclei of the brain stem, and oculomotor nuclei of the brain stem. The cortex and white matter are subtly involved. The cerebellum may be involved. Severe cortical involvement is uncommon.
Subtle bilirubin encephalopathy (SBE)
SBE is a chronic state of mild bilirubin-induced neurological dysfunction (BIND). Clinically, this may result in neurological, learning and movement disorders, isolated hearing loss and auditory dysfunction.
In the past it was thought that kernicterus (KI) often caused an intellectual disability. This was assumed due to difficulty with hearing that is typically not detected on a normal audiogram, accompanied by impairments of speech and choreoathetosis. With advances in technology, this has proven not to be the case, as those living with KI have repeatedly demonstrated their intelligence using augmentative communication devices. Although most individuals with kernicteric cerebral palsy have normal intelligence, some children with mild choreoathetosis develop dull-normal intelligence or mild intellectual disability, even without auditory dysfunction.
Causes
In the vast majority of cases, kernicterus is associated with unconjugated hyperbilirubinemia during the neonatal period. The blood–brain barrier is not fully functional in neonates and therefore bilirubin is able to cross into the central nervous system. Moreover, neonates have much higher levels of bilirubin in their blood due to:
Rapid breakdown of fetal red blood cells immediately prior to birth, with subsequent replacement by normal adult human red blood cells. This breakdown of fetal red blood cells releases large amounts of bilirubin.
Severe hemolytic disease of the newborn. Many children who survive exhibit permanent mental impairment or damage to motor areas of the brain because of precipitation of bilirubin in neurons.
Neonates have a limited ability to metabolize and excrete bilirubin. The sole pathway for bilirubin elimination is through the uridine diphosphate glucuronosyltransferase isoform 1A1 (UGT1A1) enzyme, which performs a reaction called "glucuronidation". This reaction adds a large sugar to the bilirubin, which makes the compound more water-soluble, so it can be more readily excreted via the urine and/or the feces. The UGT1A1 enzyme is not active in appreciable amounts until several months after birth. Apparently, this is a developmental compromise since the maternal liver and placenta perform glucuronidation for the fetus.
Administration of aspirin to neonates and infants. Aspirin displaces bilirubin from serum albumin, thus generating an increased level of free bilirubin which can cross the developing blood–brain barrier. This can be life-threatening. Bilirubin is known to accumulate in the gray matter of neurological tissue where it exerts direct neurotoxic effects </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Tungiasis and explain in detail? [/INST] Tungiasis is an inflammatory skin disease caused by infection with the female ectoparasitic Tunga penetrans, a flea also known as the chigoe, chigo, chigoe flea, chigo flea, jigger, nigua, sand flea, or burrowing flea (and not to be confused with the chigger, a different arthropod). The flea and the disease that it causes are found in the tropical parts of Africa, the Caribbean, Central and South America, and India. Tunga penetrans is the smallest known flea, measuring 1 mm across. It is also known in Latin America as the nigua and bicho de pie (Spanish) or bicho de pé (Portuguese), literally "foot bug". Tunga penetrans is a member of the genus Tunga, which comprises 13 species. Tungiasis causes skin inflammation, severe pain, itching, and a lesion at the site of infection that is characterized by a black dot at the center of a swollen red lesion, surrounded by what looks like a white halo. Desquamation of the skin is always seen, especially after the flea expands during hypertrophy.
As of 2009, tungiasis is present worldwide in 88 countries with varying degrees of incidence. This disease is of special public health concern in highly endemic areas such as Nigeria, Trinidad and Tobago, and Brazil, where its prevalence, especially in poor communities, has been known to approach 50%. The chigoe flea is properly classified as a member of the order Siphonaptera as it is a flea. Although commonly referred to as chiggers, true chiggers are mites, which are minute arachnids. Mites penetrate the skin and feed on skin cells that are broken down by an enzyme they secrete from their mouthparts, but they do not lay eggs in the host as T. penetrans does. Moreover, in mites, the adult and the larval forms both feed on other animals. This is not the case with T. penetrans, as only the adults feed on mammals and it is only the female that stays attached to the host.
Tunga penetrans is also known by the following names: chigoe flea, sand flea, nigua, chigger flea, jigger flea, bicho de pé, pico, sikka, kuti, and piqui, among many others.
Signs and symptoms
The symptoms of this disease include:
Severe pruritus
Pain
Inflammation and swelling
Lesions and ulcerations, with black dots in the center
If a Tunga infection is left untreated, secondary infections, such as bacteremia, tetanus, necrosis, and gangrene, may be expected.
In all cases, tungiasis by itself causes only morbidity, though secondary infection may lead to mortality. The life cycle section presents the Fortaleza stages from the flea's developmental perspective. The discussion here is specific to symptoms of human infection. The clinical presentation in humans follows the Fortaleza Classification, as the stage of infection determines the symptoms present. The following discussion gives an overview of the symptoms beginning in stage 2, because patients are not likely to present themselves at the early stages of infection, mostly because the flea's burrowing is usually not felt. This may be due to a keratolytic enzyme secreted during stage 1. The patient with a single flea may present as early as stage 2 when, though the erythema is barely perceptible, a boring pain and the curious sensation of pleasant itching occur. This inflammatory reaction is the initial immunological response to the infestation. Heavily infested patients may not notice a stage 2 infection due to the other fleas causing irritation as well. Feces may be seen, but this is more common in the 3rd stage.
Around the third day after penetration, erythema and skin tenderness are felt, accompanied by pruritus (severe itching) and a black furuncular nodule surrounded by a white halo of stretched skin caused by the expansion of the flea. Fecal coils may protrude from the center of the nodule where the flea's anus is facing upward. They should be washed off quickly, as the feces may remain in the skin unless removed. During this 3a substage, pain can be severe, especially at night or, if the nodule is on the foot, while walking. Eggs will also begin to be released and a watery secretion can be observed. The radical metamorphosis during the 3rd to 6th day after penetration, or neosomy, precedes the formation of a small caldera-like rim rampart as a result of the increased thickness of the flea's chitin exoskeleton. During the caldera formation, the nodule shrinks a bit and it looks as if it is beginning to dry out; this takes 2 weeks and comprises substage 3b. At the third week after penetration, in substage 4a, the release of eggs will have stopped and the lesion will become smaller and more wrinkled. As the flea nears death, fecal and water secretion will stop altogether. Pain, tenderness, and skin inflammation will still be present. Around the 25th day after penetration, the lesion looks like a black crust; the flea's carcass is removed by host repair mechanisms and the skin begins to heal. With the flea gone, inflammation may still persist for a while. Although patients would not present within the 5th stage of tungiasis, as the flea would be dead and no longer in the body, this stage is characterized by the reorganization of the skin (1–4 weeks) and a circular residue of 5–10 mm in diameter around the site of penetration. An intraepithelial abscess, which developed due to the presence of the flea, will drain and later heal. Although these disease residues may persist for a few months, tungiasis is no longer present. In severe cases, ulcers are common, as well as complete tissue and nail deformation. A patient may be unable to walk due to severe pain if too many of the lesions are present in the feet. Suppuration (pus formation), tissue death, auto-amputation of digits (via ainhum), and chronic lymphedema may also be seen. If the patient is not vaccinated, tetanus is often a complication due to secondary infection. Necrosis and gangrene are other common complications of severe infestation and superinfection. Staphylococcus aureus and Wolbachia endobacteria can be transmitted by the chigoe flea, as well as nearly 150 other pathogens. For these reasons, the chigoe flea should be removed as soon as possible.
Incubation
Because of the relatively rapid onset of tungiasis, the incubation period tends to be short. Although some reddening around the site of penetration occurs, the first symptoms are perceived in stage 2 as itching and severe pain, usually a day after penetration.
Cause
Tungiasis is strictly caused by chigoe fleas (the term transmission does not apply, because Tunga penetrans is itself responsible for the disease). The preponderance of tungiasis lesions on the toes may be because the chigoe flea is a poor jumper, attaining a height of only 20 cm. But the reality is more complex; for example, the jumping ability cannot explain why hands are the second-most affected body part. Lesions on the hands are better explained by playing in the sand and noting that hands are often used to remove sand from other parts of the body. The occurrence of tungiasis lesions on the toes, between them, and on the soles can be easily explained because most of the victims are poor, walk barefoot, and live in places where the sand (home to chigoe fleas) constitutes the floor. The rate of incidence is therefore greatly increased in poor communities and populations because of the lack of adequate housing. This occurs in significantly higher proportions during the peak of the dry season in local communities.
Reservoirs and transmission
T. penetrans has been documented to use various warm-blooded animals as reservoir hosts, including humans, pigs, dogs, cats, rats, sheep, cattle, donkeys, monkeys, birds, and elephants. These hosts directly propagate the disease by being the origin of the next generation of fleas. Once the female flea expels 100–200 eggs, the cycle of transmission begins again.
Lifecycle
T. penetrans eggs, on average, are 604 μm long. The larva will hatch from the egg within one to six days, assuming the environmental conditions (e.g., moisture, humidity, etc.) are favorable. After hatching, the flea will progress through two instar phases; this is unique in that most fleas go through three instead. Over the course of that development, the flea will first decrease in size from its just-hatched size of 1,500 μm to 1,150 μm (first instar) before growing to 2,900 μm (second instar). About 6 to 8 days after hatching, the larva pupates and builds a cocoon around itself. Because it lives mostly on and below the surface of the sand, sand is used to stabilize the cocoon and help to promote its development. An environmental disturbance such as rain or a lack of sand has been shown to decrease incidence, most likely by decreasing the environmental factors (i.e., sand) on which the flea depends for overall growth. Barring any disturbances to the cocoon, an adult flea will emerge from the puparium after 9–15 days. In the adult phase, the flea will occasionally feed on unsuspecting animals. Only once the female burrows into the skin can reproduction occur, as the male and female show no interest in each other in the wild. The male flea dies after copulation. The female flea continues in vivo ectodevelopment, described in stages by the Fortaleza classification of tungiasis.
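Taking the stage durations just quoted at face value, the egg-to-adult development window can be bracketed by simple interval addition. A minimal Python sketch, illustrative only, using the endpoint values from the text above:

```python
# Development intervals for T. penetrans quoted in the text (days).
stages = {
    "egg to hatching": (1, 6),        # larva hatches within 1-6 days
    "hatching to pupation": (6, 8),   # pupates about 6-8 days after hatching
    "pupation to adult": (9, 15),     # adult emerges after 9-15 days
}

# Simple interval addition; the source gives ranges, not a distribution.
low = sum(lo for lo, _ in stages.values())
high = sum(hi for _, hi in stages.values())

for name, (lo, hi) in stages.items():
    print(f"{name}: {lo}-{hi} days")
print(f"total egg-to-adult: roughly {low}-{high} days")  # 16-29 days
```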
Fortaleza classification
In a seminal paper on the biology and pathology of Tunga penetrans, Eisele et al. (2003) provided and detailed the five stages of tungiasis, thereby detailing the in vivo development of the female chigoe flea for the first time. In dividing the natural history of the disease, the Fortaleza Classification formally describes the last part of the female flea's life cycle, in which it burrows into its host's skin, expels eggs, and dies. Due to the nature of the discussion, overlap with other sections, particularly the one on symptoms, is unavoidable. Stage 1 is characterized by the penetration of the skin by the female chigoe flea. Running along the body, the female uses its posterior legs to push its body upward by an angle of 45–90 degrees. Penetration then starts, beginning with the proboscis going through the epidermis. By stage 2 (day 1–2), penetration is complete and the flea has burrowed most of its body into the skin. Only the anus, the copulatory organs, and four rear air holes in fleas called stigmata remain on the outside of the epidermis. The anus will excrete feces that are thought to attract male fleas for mating, described in a later section. The hypertrophic zone between tergites 2 and 3 in the abdominal region begins to expand a day or two after penetration and takes on the appearance of a life belt. During this time, the flea begins to feed on the host's blood. Stage 3 is divided into two substages, 3a and 3b. Stage 3a occurs 2–3 days after penetration is complete. In substage 3a, the flea's midsection swells, balloon-like, to the size of a pea. This expansion of the flea causes its integument to be stretched thin, a process called physogastrism. The swelling ends in the condition of physogastry, and results in the appearance of a white halo around a black dot at the center of the lesion. That dot is the rear end of the flea. In substage 3b, the chitin exoskeleton of tergites 2 and 3 increases in thickness, which gives the structure the look of a miniature caldera. Egg release is common in substage 3b, as are fecal coils. The eggs tend to stick to the skin. At about the 3rd week after penetration, stage 4 begins, which is also divided into two substages. In 4a, the flea loses its signs of vitality and appears near death. As a result, the lesion shrinks in size, turns brown, and appears wrinkled. The death of the flea marks the beginning of substage 4b (around day 25 post-penetration), as the body begins to eliminate the parasite through skin repair mechanisms (e.g., shedding and subsequent skin repair). At this phase, the lesion is seen as brown or black. By the 5th stage of tungiasis, the carcass of the T. penetrans flea has been expelled and circular skin residues of the infection remain. There are only lingering symptoms at this time, described in the next section.
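Since the Fortaleza stages are anchored to approximate days post-penetration, the timeline can be condensed into a lookup. The cut-offs in this Python sketch are rough readings of the ranges described above and are meant as a mnemonic, not a clinical tool; real presentations vary:

```python
def fortaleza_stage(days_post_penetration: float) -> str:
    """Approximate Fortaleza stage for a given number of days since
    penetration. Boundaries are rough readings of the text above
    (stage 2 at days 1-2, substage 3a at days 2-3, substage 3b until
    about the 3rd week, flea death around day 25); they are
    illustrative, not diagnostic.
    """
    if days_post_penetration < 1:
        return "stage 1 (penetration in progress)"
    if days_post_penetration <= 2:
        return "stage 2 (penetration complete, feeding begins)"
    if days_post_penetration <= 3:
        return "substage 3a (hypertrophy, pea-sized swelling)"
    if days_post_penetration <= 21:
        return "substage 3b (caldera-like rim, eggs and fecal coils)"
    if days_post_penetration <= 25:
        return "substage 4a (flea near death, lesion shrinking)"
    if days_post_penetration <= 30:
        return "substage 4b (dead flea being eliminated)"
    return "stage 5 (carcass expelled, skin reorganizing)"

for day in (0.5, 2, 3, 10, 24, 28, 40):
    print(day, "->", fortaleza_stage(day))
```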
Morphology
In a study of 1000 freshly ejected T. penetrans eggs, it was found that females are generally smaller than males for all criteria. In some cases, though, females had a bigger epipharynx and maxillar palpus. Due to its burrowing activity, the chigoe flea has a well-developed lacinia and epipharynx that are used to penetrate the skin. Overall, the flea's head is relatively flattened, which again aids in burrowing through the epidermal and dermal layers. Investigators have also found that adult T. penetrans have different morphologies with respect to the shape of their head. Some have a rounded head, others have head shapes that resemble ski ramps more than anything else; still others demonstrate head shapes that are very linear with a slight bulge at the nose. These morphologies were seen to be host-specific, as only fleas of some head-types were found in specific hosts. This, along with genetic differences among the T. penetrans fleas that infect different host animals, may suggest that several closely related species have been grouped taxonomically under one binomial name. Though the chigoe flea resembles most others in morphology, it has a hypertrophic region between tergites 2 and 3. As stated in Eisele et al. (2003), tergites 2 and 3, as well as the abdominal sternites, stretch considerably and are bent apart. Chitinous clasps that are built for the abdominal enlargement surround these regions and hold onto the hypertrophic zone, giving them the appearance of a three-leafed clover. (See image 7 of the life cycle diagram.) Surprisingly, the rest of the flea, including the head and the thorax, does not change in shape. With the rapid expansion of the flea, its morphology is now vastly different. It has gone from the smallest flea in the world to a bulging mass that measures 5–10 mm in diameter. This results in a volume that is 2,000 to 3,000 times what it used to be.
Reproduction
Females have a depression or groove at their abdominal end, whereas the males have their protrusive copulatory organs in that same region. These morphological differences reflect the way the male and female copulate. In the first step toward copulation, the female penetrates an organism in an ungravid state. It is only there that the male will find her and copulate. Copulation of adults has not been observed in the wild. With the female reproductive organs pointing outward, the male will place his reproductive organs "in direction to the upright abdominal end of the female" to copulate. Having copulated for only a few seconds to 2 minutes, the male will then begin to search for another female. After copulation is complete, the male will die, although sometimes he will take a blood meal before doing so. Eggs will be expelled whether or not they have been fertilized. The chigoe flea eggs' average length is 604 μm, and the just-hatched larvae, in their first instar, have an average length of 1,500 μm. At the second and last instar (T. penetrans is unique among fleas in that it has only two, instead of three, instars), the larvae decrease in size to 1,150 μm after growing to at least 2,900 μm. The development from instar 1 to instar 2 lasts less than one day. On the whole, Tunga penetrans does not do very well in terms of its Darwinian fitness. In a laboratory setting in which different mediums were provided for larval growth, the rate of survival from egg to adult in the best medium was 1.05%. Only 15% of the eggs were found to develop into larvae, and of those, only 14% formed a cocoon. Moreover, only half of the pupae reached the adult phase, resulting in gender disequilibrium. Although these results reflect a laboratory setting, the general lack of success of T. penetrans's reproductive (opportunistic) r-strategy is surprising given the number of fleas that a single person can attract. The low survival rate suggests that a concentrated public health effort directed at any point in the flea's life cycle is likely to deal a crippling blow to the overall population of the flea in the area.
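The laboratory figures just quoted are internally consistent: the 1.05% egg-to-adult survival is the product of the three stage-wise survival rates. A quick arithmetic check in Python, using only the numbers reported above:

```python
# Stage-wise survival rates for the best laboratory medium.
egg_to_larva = 0.15     # 15% of eggs developed into larvae
larva_to_cocoon = 0.14  # 14% of those larvae formed a cocoon
cocoon_to_adult = 0.50  # half of the pupae reached adulthood

overall = egg_to_larva * larva_to_cocoon * cocoon_to_adult
print(f"egg-to-adult survival: {overall:.4f} ({overall:.2%})")
# -> 0.0105 (1.05%), matching the rate reported in the text
```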
Diagnosis
There are no diagnostic tests for tungiasis. This is most likely because the parasite is ectoparasitic with visible symptoms. Identification of the parasite through removal, and a patient’s traveling history, should suffice for diagnosis, though the latter is clearly more useful than the former. Localization of the lesion may be a useful diagnostic method for the clinician. A biopsy may be done, though again, it is not required for diagnosis.
Prevention
Due to the high number of hosts, eradication of tungiasis is not feasible, at least not easily so. Public health and prevention strategies should therefore be carried out with elimination as the target. Better household hygiene, including having a cemented rather than a sand floor, and washing it often, would lower the rates of tungiasis significantly. Though vaccines would be useful, due to the ectoparasitic nature of the chigoe flea, they are neither a feasible nor an effective tool against tungiasis. Nevertheless, due to the high incidence of secondary infection, those at risk of tungiasis should get vaccinated against tetanus. A better approach is to use repellents that specifically target the chigoe flea. One very successful repellent is called Zanzarin, a derivative of coconut oil, jojoba oil, and aloe vera. In a recent study involving two cohorts, the infestation rates dropped by 92% on average for the first one and 90% for the other. Likewise, the intensity of infestation in the cohorts dropped by 86% and 87%, respectively. The non-toxic nature of Zanzarin, combined with its "remarkable regression of the clinical pathology", makes this a tenable public health tool against tungiasis. The use of pesticides, such as DDT, has also led to elimination of Tunga penetrans, but this control/prevention strategy should be utilized very carefully, if at all, because of the possible side effects such pesticides can have on the greater biosphere. In the 1950s, there was a worldwide effort to eradicate malaria. As part of that effort, Mexico launched the Campaña Nacional para la Erradicación de Paludismo, or the National Campaign for the Eradication of Malaria. By spraying DDT in homes, Anopheles, a genus of mosquitoes known to carry the deadly Plasmodium falciparum, was mostly eliminated. As a consequence of this national campaign, other arthropods were either eliminated or significantly reduced in number, including the reduviid bug responsible for Chagas disease (American trypanosomiasis) and T. penetrans. Controlled, in-home spraying of DDT is effective, as it gives the home immunity against arthropods without contaminating the local water supplies or doing as much ecological damage as was once the case when DDT was first introduced. While other species gradually gained resistance to DDT and other insecticides that were used, T. penetrans did not; as a result, the incidence of tungiasis in Mexico is very low when compared to the rest of Latin America, especially Brazil, where rates in poor areas have been known to be as high as or higher than 50%. There was a 40-year period with no tungiasis cases in Mexico. It was not until August 1989 that three Mexican patients presented with the disease. Though there were other cases of tungiasis reported thereafter, all were acquired in Africa.
Treatment
As the disease is self-limiting, at least when exposure to the parasite is limited, management is mostly confined to treatment. Due to the secondary infection that can cause serious medical issues, the recommended course of action upon diagnosis is a surgical extraction of the fleas followed by the application of a topical antibiotic. Care should be taken to avoid tearing the flea during the extraction procedures, as severe inflammation will result. The same will occur if part of the flea is left behind. Sterile equipment should always be used, as contaminated instruments could act as mechanical vectors for pathogens to enter the body. There is no drug that has proven to be effective against embedded fleas. Oral niridazole was once considered a therapeutic drug, but well-designed studies are lacking and, given the severe adverse effects, this is one drug that is likely to cause more harm than good. However, it has some anecdotal evidence of lysing the fleas altogether. Oral ivermectin is considered by some in endemic areas to be a panacea against the fleas, but studies using high doses have failed to validate this hypothesis. Other drugs such as topical ivermectin and metrifonate have been somewhat successful, but not enough to be significant. [2,5] For superinfections, trimethoprim, sulfamethoxazole, metronidazole, and amoxicillin (with or without clavulanate) have been used successfully, though these treat only secondary infections. Successful topical treatments also include cryotherapy and electrodesiccation of the lesion. If formaldehyde, chloroform, or DDT are used topically, care should be taken when dealing with the resulting morbidity. The T. penetrans flea can also be suffocated using occlusive petrolatum, while Vaseline will kill the organism as well, most likely due to suffocation, as the stigmata would be covered. The gum of the mammee apple (Mammea americana), a fruit that also goes by the name Saint Domingo apricot, has also been used to kill the chigoe flea, though this has not been reported in the main T. penetrans literature. Even without treatment, the burrowed fleas will die within five weeks and are naturally sloughed off as the skin sheds.
Epidemiology
For the most part, the chigoe flea lives 2–5 cm below the sand, an observation which helps explain its overall distribution. The temperature is generally too hot for the larvae to develop on the surface of the sand, and the deeper sand does not have enough oxygen. This preferred ecological niche offers a way to decrease transmission among humans by investing in concrete grounds as opposed to the sand that is usually used in shacks and some favelas. Indeed, Nany et al. (2007) report that "In shacks with concreted ground being cleaned every day with water, Tunga [penetrans] larvae were hardly found." In a longitudinal study conducted from March 2001 to January 2002, incidence of tungiasis was found to vary significantly with the local seasons of an endemic community in Brazil. In particular, the study found that "occurrence of tungiasis varies throughout the year and seems to follow local precipitation patterns. Maximum and minimum prevalence rates differed by more than a factor of three." The authors suggest that the correlation is due to the high humidity in the soil impairing larval development during the rainy season, as well as the more obvious reason that rain may simply wash away all stages of T. penetrans due to its small size of 1 mm. Acting as both biological vectors and definitive hosts, humans have spread Tunga penetrans from its isolated existence in the West Indies to all of Latin America and most of Africa via sea travel. Since the chigoe flea technically has no reservoir species and the female will cause tungiasis in any mammalian organism it can penetrate, the flea has a relatively large number of hosts and victims </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Arm and explain in detail? [/INST] In human anatomy, the arm refers to the upper limb in common usage, although academically the term specifically means the upper arm between the glenohumeral joint (shoulder joint) and the elbow joint. The distal part of the upper limb between the elbow and the radiocarpal joint (wrist joint) is known as the forearm or "lower" arm, and the extremity beyond the wrist is the hand.
By anatomical definitions, the bones, ligaments and skeletal muscles of the shoulder girdle, as well as the axilla between them, are considered parts of the upper limb, and thus also components of the arm. The Latin term brachium, which serves as a root word for naming many anatomical structures, may refer to either the upper limb as a whole or to the upper arm on its own.
Anatomy
Bones
The humerus is one of the three long bones of the arm. It joins with the scapula at the shoulder joint and with the other long bones of the arm, the ulna and radius at the elbow joint. The elbow is a complex hinge joint between the end of the humerus and the ends of the radius and ulna.
Muscles
The arm is divided by a fascial layer (known as lateral and medial intermuscular septa) separating the muscles into two osteofascial compartments: the anterior and the posterior compartments of the arm. The fascia merges with the periosteum (outer bone layer) of the humerus. The anterior compartment contains three muscles: biceps brachii, brachialis and coracobrachialis muscles. They are all innervated by the musculocutaneous nerve. The posterior compartment contains only the triceps brachii muscle, supplied by the radial nerve.
Nerve supply
The musculocutaneous nerve, from C5, C6, C7, is the main supplier of muscles of the anterior compartment. It originates from the lateral cord of the brachial plexus of nerves. It pierces the coracobrachialis muscle and gives off branches to the muscle, as well as to the brachialis and biceps brachii. It terminates as the lateral cutaneous nerve of the forearm.
The radial nerve, which is from the fifth cervical spinal nerve to the first thoracic spinal nerve, originates as the continuation of the posterior cord of the brachial plexus. This nerve enters the lower triangular space (an imaginary space bounded by, amongst others, the shaft of the humerus and the triceps brachii) of the arm and lies deep to the triceps brachii. Here it travels with the deep artery of the arm, which sits in the radial groove of the humerus. This fact is very important clinically as a fracture of the shaft of the bone here can cause lesions or even transections in the nerve.
Other nerves passing through give no supply to the arm. These include:
The median nerve, nerve origin C5-T1, which is a branch of the lateral and medial cords of the brachial plexus. This nerve continues in the arm, travelling in a plane between the biceps and triceps muscles. At the cubital fossa, this nerve is deep to the pronator teres muscle and is the most medial structure in the fossa. The nerve passes into the forearm.
The ulnar nerve, origin C8-T1, is a continuation of the medial cord of the brachial plexus. This nerve passes in the same plane as the median nerve, between the biceps and triceps muscles. At the elbow, this nerve travels posterior to the medial epicondyle of the humerus. This means that condylar fractures can cause a lesion of this nerve.
Blood supply
The main artery in the arm is the brachial artery. This artery is a continuation of the axillary artery. The point at which the axillary becomes the brachial is distal to the lower border of teres major. The brachial artery gives off an important branch, the deep artery of the arm. This branching occurs just below the lower border of teres major.
The brachial artery continues to the cubital fossa in the anterior compartment of the arm. It travels in a plane between the biceps and triceps muscles, the same as the median nerve and basilic vein. It is accompanied by venae comitantes (accompanying veins). It gives branches to the muscles of the anterior compartment. The artery is in between the median nerve and the tendon of the biceps muscle in the cubital fossa. It then continues into the forearm.
The deep artery of the arm travels through the lower triangular space with the radial nerve. From here onwards it has an intimate relationship with the radial nerve. They are both found deep to the triceps muscle and are located on the spiral groove of the humerus. Therefore, fracture of the bone may not only lead to lesion of the radial nerve, but also haematoma of the internal structures of the arm. The artery then continues on to anastomose with the recurrent radial branch of the brachial artery, providing a diffuse blood supply for the elbow joint.
Veins
The veins of the arm carry blood from the extremities of the limb, as well as drain the arm itself </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Borderline intellectual functioning and explain in detail? [/INST] Borderline intellectual functioning, also called borderline mental retardation (in the ICD-8), is a categorization of intelligence wherein a person has below average cognitive ability (generally an IQ of 70–85), but the deficit is not as severe as intellectual disability (below 70). It is sometimes called below average IQ (BAIQ). This is technically a cognitive impairment; however, this group may not be sufficiently mentally disabled to be eligible for specialized services.
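The IQ bands named above can be read as simple numeric ranges. Below is a minimal Python sketch of that banding; the 70 and 85 cut-offs come from the definition above, while the labels and the handling of boundary scores are illustrative assumptions (real assessment also weighs adaptive functioning and the test's margin of error):

```python
def classify_iq(iq: int) -> str:
    """Illustrative banding using the cut-offs quoted in the text.

    A real diagnosis is never made from a single score: adaptive
    functioning and measurement error (roughly +/- 5 points) are
    also considered.
    """
    if iq < 70:
        return "intellectual disability range (below 70)"
    if iq <= 85:
        return "borderline intellectual functioning range (70-85)"
    return "average or above (above 85)"

for score in (65, 72, 85, 100):
    print(score, "->", classify_iq(score))
```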
Codes
The DSM-IV-TR code of borderline intellectual functioning is V62.89. DSM-5 diagnosis codes are V62.89 and R41.83.
Learning skills
During school years, individuals with borderline intellectual functioning are often "slow learners" </s> |
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Delusion and explain in detail? [/INST] A delusion is a false fixed belief that is not amenable to change in light of conflicting evidence. As a pathology, it is distinct from a belief based on false or incomplete information, confabulation, dogma, illusion, hallucination, or some other misleading effects of perception, as individuals with those beliefs are able to change or readjust their beliefs upon reviewing the evidence. However:
"The distinction between a delusion and a strongly held idea is sometimes difficult to make and depends in part on the degree of conviction with which the belief is held despite clear or reasonable contradictory evidence regarding its veracity."Delusions have been found to occur in the context of many pathological states (both general physical and mental) and are of particular diagnostic importance in psychotic disorders including schizophrenia, paraphrenia, manic episodes of bipolar disorder, and psychotic depression.
Types
Delusions are categorized into four different groups:
Bizarre delusion: Delusions are deemed bizarre if they are clearly implausible and not understandable to same-culture peers and do not derive from ordinary life experiences. An example named by the DSM-5 is a belief that someone replaced all of one's internal organs with someone else's without leaving a scar, depending on the organ in question.
Non-bizarre delusion: A delusion that, though false, is at least technically possible, e.g., the affected person mistakenly believes that they are under constant police surveillance.
Mood-congruent delusion: Any delusion with content consistent with either a depressive or manic state, e.g., a depressed person believes that news anchors on television highly disapprove of them, or a person in a manic state might believe they are a powerful deity.
Mood-neutral delusion: A delusion that does not relate to the patient's emotional state; for example, a belief that an extra limb is growing out of the back of one's head is neutral to either depression or mania.
Themes
In addition to these categories, delusions often manifest according to a consistent theme. Although delusions can have any theme, certain themes are more common. Some of the more common delusion themes are:
Delusion of control: False belief that another person, group of people, or external force controls one's general thoughts, feelings, impulses, or behaviors.
Cotard delusion: False belief that one does not exist or that one has died. Some cases also include the belief that one is immortal or that one has lost their internal organs, blood, or other body parts.
Delusional jealousy: False belief that a spouse or lover is having an affair, with no proof to back up the claim.
Delusion of guilt or sin (or delusion of self-accusation): Ungrounded feeling of remorse or guilt of delusional intensity.
Delusion of mind being read: False belief that other people can know one's thoughts.
Delusion of thought insertion: Belief that another thinks through the mind of the person.
Delusion of reference: False belief that insignificant remarks, events, or objects in one's environment have personal meaning or significance. "Usually the meaning assigned to these events is negative, but the messages can also have a grandiose quality."
Erotomania: False belief that another person is in love with them.
Religious delusion: Belief that the affected person is a god or chosen to act as a god.
Somatic delusion: Delusion whose content pertains to bodily functioning, bodily sensations or physical appearance. Usually the false belief is that the body is somehow diseased, abnormal or changed. A specific example of this delusion is delusional parasitosis: Delusion in which one feels infested with insects, bacteria, mites, spiders, lice, fleas, worms, or other organisms.
Delusion of poverty: Person strongly believes they are financially incapacitated. Although this type of delusion is less common now, it was particularly widespread in the days preceding state support.
Grandiose delusions
Grandiose delusions or delusions of grandeur are principally a subtype of delusional disorder but could possibly feature as a symptom of schizophrenia and manic episodes of bipolar disorder. Grandiose delusions are characterized by fantastical beliefs that one is famous, omnipotent or otherwise very powerful. The delusions are generally fantastic, often with a supernatural, science-fictional, or religious bent. In colloquial usage, one who overestimates one's own abilities, talents, stature or situation is sometimes said to have "delusions of grandeur". This is generally due to excessive pride, rather than any actual delusions. Grandiose delusions or delusions of grandeur can also be associated with megalomania.
Persecutory delusions
Persecutory delusions are the most common type of delusions and involve the theme of being followed, harassed, cheated, poisoned or drugged, conspired against, spied on, attacked, or otherwise obstructed in the pursuit of goals.
Persecutory delusions are a condition in which the affected person wrongly believes that they are being persecuted. Specifically, they have been defined as containing two central elements: The individual thinks that:
harm is occurring, or is going to occur
the persecutors have the intention to cause harm
According to the DSM-IV-TR, persecutory delusions are the most common form of delusions in schizophrenia, where the person believes they are "being tormented, followed, sabotaged, tricked, spied on, or ridiculed". In the DSM-IV-TR, persecutory delusions are the main feature of the persecutory type of delusional disorder. When the focus is to remedy some injustice by legal action, they are sometimes called "querulous paranoia".
Causes
Explaining the causes of delusions continues to be challenging, and several theories have been developed. One is the genetic or biological theory, which states that close relatives of people with delusional disorder are at increased risk of delusional traits. Another theory is dysfunctional cognitive processing, which states that delusions may arise from distorted ways people have of explaining life to themselves. A third theory is called motivated or defensive delusions. This one states that some of those persons who are predisposed might experience the onset of delusional disorder in those moments when coping with life and maintaining high self-esteem becomes a significant challenge. In this case, the person views others as the cause of their personal difficulties in order to preserve a positive self-view. This condition is more common among people who have poor hearing or sight. Also, ongoing stressors have been associated with a higher possibility of developing delusions. Examples of such stressors are immigration, low socioeconomic status, and even possibly the accumulation of smaller daily hassles.
Specific delusions
The top two factors mainly concerned in the germination of delusions are disorder of brain functioning and background influences of temperament and personality. Higher levels of dopamine qualify as a symptom of disorders of brain function. That they are needed to sustain certain delusions was examined by a preliminary study on delusional disorder (a psychotic syndrome) instigated to clarify whether schizophrenia had a dopamine psychosis. There were positive results: delusions of jealousy and persecution had different levels of the dopamine metabolites homovanillic acid (HVA) and homovanillyl alcohol (which may have been genetic). These can be regarded only as tentative results; the study called for future research with a larger population.
It is simplistic to say that a certain measure of dopamine will bring about a specific delusion. Studies show age and gender to be influential, and it is most likely that HVA levels change during the life course of some syndromes. On the influence of personality, it has been said: "Jaspers considered there is a subtle change in personality due to the illness itself; and this creates the condition for the development of the delusional atmosphere in which the delusional intuition arises." Cultural factors have "a decisive influence in shaping delusions". For example, delusions of guilt and punishment are frequent in a Western, Christian country like Austria, but not in Pakistan, where it is more likely persecution. Similarly, in a series of case studies, delusions of guilt and punishment were found in Austrian patients with Parkinson's disease being treated with l-dopa, a dopamine precursor.
Pathophysiology
The two-factor model of delusions posits that dysfunction in both belief formation systems and belief evaluation systems is necessary for delusions. Dysfunction in evaluation systems localized to the right lateral prefrontal cortex, regardless of delusion content, is supported by neuroimaging studies and is congruent with its role in conflict monitoring in healthy persons. Abnormal activation and reduced volume are seen in people with delusions, as well as in disorders associated with delusions such as frontotemporal dementia, psychosis, and Lewy body dementia. Furthermore, lesions to this region are associated with "jumping to conclusions", damage to this region is associated with post-stroke delusions, and hypometabolism in this region is associated with caudate strokes presenting with delusions. The aberrant salience model suggests that delusions are a result of people assigning excessive importance to irrelevant stimuli. In support of this hypothesis, regions normally associated with the salience network demonstrate reduced grey matter in people with delusions, and the neurotransmitter dopamine, which is widely implicated in salience processing, is also widely implicated in psychotic disorders. Specific regions have been associated with specific types of delusions. The volume of the hippocampus and parahippocampus is related to paranoid delusions in Alzheimer's disease, and has been reported to be abnormal post mortem in one person with delusions. Capgras delusions have been associated with occipito-temporal damage and may be related to failure to elicit normal emotions or memories in response to faces.
Diagnosis
The modern definition and Jaspers original criteria have been criticised, as counter-examples can be shown for every defining feature.
Studies on psychiatric patients show that delusions vary in intensity and conviction over time, which suggests that certainty and incorrigibility are not necessary components of a delusional belief. Delusions do not necessarily have to be false or incorrect inferences about external reality. Some religious or spiritual beliefs by their nature may not be falsifiable, and hence cannot be described as false or incorrect, no matter whether the person holding these beliefs was diagnosed as delusional or not.
In other situations the delusion may turn out to be true belief. For example, in delusional jealousy, where a person believes that their partner is being unfaithful (and may even follow them into the bathroom believing them to be seeing their lover even during the briefest of partings), it may actually be true that the partner is having sexual relations with another person. In this case, the delusion does not cease to be a delusion because the content later turns out to be verified as true or the partner actually chose to engage in the behavior of which they were being accused.
In other cases, the belief may be mistakenly assumed to be false by a doctor or psychiatrist assessing it, just because it seems to be unlikely, bizarre or held with excessive conviction. Psychiatrists rarely have the time or resources to check the validity of a person's claims, leading to some true beliefs being erroneously classified as delusional. This is known as the Martha Mitchell effect, after the wife of the attorney general who alleged that illegal activity was taking place in the White House. At the time, her claims were thought to be signs of mental illness, and only after the Watergate scandal broke was she proved right (and hence sane).
Similar factors have led to criticisms of Jaspers' definition of true delusions as being ultimately un-understandable. Critics (such as R. D. Laing) have argued that this leads to the diagnosis of delusions being based on the subjective understanding of a particular psychiatrist, who may not have access to all the information that might make a belief otherwise interpretable. R. D. Laing's hypothesis has been applied to some forms of projective therapy to "fix" a delusional system so that it cannot be altered by the patient. Psychiatric researchers at Yale University, Ohio State University and the Community Mental Health Center of Middle Georgia have used novels and motion picture films as the focus. Texts, plots and cinematography are discussed and the delusions approached tangentially. This use of fiction to decrease the malleability of a delusion was employed in a joint project by science-fiction author Philip Jose Farmer and Yale psychiatrist A. James Giannini. They wrote the novel Red Orc's Rage, which, recursively, deals with delusional adolescents who are treated with a form of projective therapy. In this novel's fictional setting, other novels written by Farmer are discussed and the characters are symbolically integrated into the delusions of fictional patients. This particular novel was then applied to real-life clinical settings. Another difficulty with the diagnosis of delusions is that almost all of these features can be found in "normal" beliefs. Many religious beliefs hold exactly the same features, yet are not universally considered delusional. For instance, if a person holds a true belief, then they will of course persist with it. This can cause the disorder to be misdiagnosed by psychiatrists. These factors have led the psychiatrist Anthony David to note that "there is no acceptable (rather than accepted) definition of a delusion." In practice, psychiatrists tend to diagnose a belief as delusional if it is either patently bizarre, causing significant distress, or excessively preoccupying the patient, especially if the person is subsequently unswayed in belief by counter-evidence or reasonable arguments.
Joseph Pierre, M.D., states that one factor that helps differentiate delusions from other kinds of beliefs is that anomalous subjective experiences are often used to justify delusional beliefs. While idiosyncratic and self-referential content often makes delusions impossible to share with others, Dr. Pierre suggests that it may be more helpful to emphasize the level of conviction, preoccupation, and extension of a belief rather than the content of the belief when considering whether a belief is delusional. It is important to distinguish true delusions from other symptoms such as anxiety, fear, or paranoia. To diagnose delusions, a mental state examination may be used. This test includes appearance, mood, affect, behavior, rate and continuity of speech, evidence of hallucinations or abnormal beliefs, thought content, orientation to time, place and person, attention and concentration, insight and judgment, as well as short-term memory. Johnson-Laird suggests that delusions may be viewed as the natural consequence of failure to distinguish conceptual relevance. That is, irrelevant information would be framed as disconnected experiences and then taken to be relevant in a manner that suggests false causal connections, while relevant information would be ignored as counterexamples.
Definition
Although non-specific concepts of madness have been around for several thousand years, the psychiatrist and philosopher Karl Jaspers was the first to define the four main criteria for a belief to be considered delusional in his 1913 book General Psychopathology. These criteria are:
certainty (held with absolute conviction)
incorrigibility (not changeable by compelling counterargument or proof to the contrary)
impossibility or falsity of content (implausible, bizarre, or patently untrue)
not amenable to understanding (i.e., the belief cannot be explained psychologically)
Furthermore, when beliefs involve value judgments, only those which cannot be proven true are considered delusions. For example: a man claiming that he flew into the Sun and flew back home. This would be considered a delusion, unless he were speaking figuratively, or if the belief had a cultural or religious source. Only the first three criteria remain cornerstones of the current definition of a delusion in the DSM-5.
Robert Trivers writes that delusion is a discrepancy in relation to objective reality, but with a firm conviction in reality of delusional ideas, which is manifested in the "affective basis of delusion."
Treatment
Delusions and other positive symptoms of psychosis are often treated with antipsychotic medication, which exert a medium effect size according to meta-analytic evidence. Cognitive behavioral therapy (CBT) improves delusions relative to control conditions according to a meta-analysis. A meta-analysis of 43 studies reported that metacognitive training (MCT) reduces delusions at a medium to large effect size relative to control conditions.
Criticism
Some psychiatrists criticize the practice of defining one and the same belief as normal in one culture and pathological in another culture as cultural essentialism </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Hypersomnia and explain in detail? [/INST] Hypersomnia is a neurological disorder of excessive time spent sleeping or excessive sleepiness. It can have many possible causes (such as seasonal affective disorder) and can cause distress and problems with functioning. In the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), hypersomnolence, of which there are several subtypes, appears under sleep-wake disorders. Hypersomnia is a pathological state characterized by a lack of alertness during the waking episodes of the day. It is not to be confused with fatigue, which is a normal physiological state. Daytime sleepiness appears most commonly during situations where little interaction is needed. Since the patients' attention levels (wakefulness) are impaired, their quality of life may be impacted as well. This is especially true for people whose jobs request high levels of attention, such as in the healthcare field.
Symptoms
The main symptom of hypersomnia is excessive daytime sleepiness (EDS), or prolonged nighttime sleep, which has occurred for at least 3 months prior to diagnosis. Sleep drunkenness is also a symptom found in hypersomniac patients. It is a difficulty transitioning from sleep to wake. Individuals experiencing sleep drunkenness report waking with confusion, disorientation, slowness and repeated returns to sleep. It also appears in non-hypersomniac persons, for example after a night of insufficient sleep. Fatigue and consumption of alcohol or hypnotics can cause sleep drunkenness as well. It is also associated with irritability: people who get angry shortly before sleeping tend to experience sleep drunkenness. According to the American Academy of Sleep Medicine, hypersomniac patients often take long naps during the day that are mostly unrefreshing. Researchers found that naps are usually more frequent and longer in patients than in controls. Furthermore, 75% of the patients report that short naps are not refreshing, compared to controls.
Diagnosis
"The severity of daytime sleepiness needs to be quantified by subjective scales (at least the Epworth Sleepiness Scale) and objective tests such as the multiple sleep latency test (MSLT)." The Stanford sleepiness scale (SSS) is another frequently-used subjective measurement of sleepiness. After it is determined that excessive daytime sleepiness is present, a complete medical examination and full evaluation of potential disorders in the differential diagnosis (which can be tedious, expensive and time-consuming) should be undertaken.
Differential diagnosis
Hypersomnia can be primary (of central/brain origin), or it can be secondary to any of numerous medical conditions. More than one type of hypersomnia can coexist in a single patient. Even in the presence of a known cause of hypersomnia, the contribution of this cause to the complaint of excessive daytime sleepiness needs to be assessed. When specific treatments of the known condition do not fully suppress excessive daytime sleepiness, additional causes of hypersomnia should be sought. For example, if a patient with sleep apnea is treated with CPAP (continuous positive airway pressure), which resolves their apneas but not their excessive daytime sleepiness, it is necessary to seek other causes for the excessive daytime sleepiness. Obstructive sleep apnea "occurs frequently in narcolepsy and may delay the diagnosis of narcolepsy by several years and interfere with its proper management."
Primary hypersomnias
The true primary hypersomnias include:
Narcolepsy (with and without cataplexy)
Idiopathic hypersomnia
Recurrent hypersomnias (like Kleine-Levin syndrome)
Primary hypersomnia mimics
There are also several genetic disorders that may be associated with primary/central hypersomnia. These include the following: Prader-Willi syndrome; Norrie disease; Niemann–Pick disease, type C; and myotonic dystrophy. However, hypersomnia in these syndromes may also be associated with other secondary causes, so it is important to complete a full evaluation. Myotonic dystrophy is often associated with SOREMPs (sleep onset REM periods, such as occur in narcolepsy). There are many neurological disorders that may mimic the primary hypersomnias, narcolepsy and idiopathic hypersomnia: brain tumors; stroke-provoking lesions; and dysfunction in the thalamus, hypothalamus, or brainstem. Also, neurodegenerative conditions such as Alzheimer's disease, Parkinson's disease, or multiple system atrophy are frequently associated with primary hypersomnia. However, in these cases, one must still rule out other secondary causes. Early hydrocephalus can also cause severe excessive daytime sleepiness. Additionally, head trauma can be associated with a primary/central hypersomnia, and symptoms similar to those of idiopathic hypersomnia can be seen within 6–18 months following the trauma. However, the associated symptoms of headaches, memory loss, and lack of concentration may be more frequent in head trauma than in idiopathic hypersomnia. "The possibility of secondary narcolepsy following head injury in previously asymptomatic individuals has also been reported."
Secondary hypersomnias
Secondary hypersomnias are extremely numerous.
Hypersomnia can be secondary to disorders such as clinical depression, multiple sclerosis, encephalitis, epilepsy, or obesity. Hypersomnia can also be a symptom of other sleep disorders, like sleep apnea. It may occur as an adverse effect of taking certain medications, of withdrawal from some medications, or of substance use. A genetic predisposition may also be a factor. In some cases it results from a physical problem, such as a tumor, head trauma, or dysfunction of the autonomic or central nervous system.

Sleep apnea is the second most frequent cause of secondary hypersomnia, affecting up to 4% of middle-aged adults, mostly men. Upper airway resistance syndrome (UARS) is a clinical variant of sleep apnea that can also cause hypersomnia. Just as other sleep disorders (like narcolepsy) can coexist with sleep apnea, the same is true for UARS. There are many cases of UARS in which excessive daytime sleepiness persists after CPAP treatment, indicating an additional cause, or causes, of the hypersomnia and requiring further evaluation.

Sleep movement disorders, such as restless legs syndrome (RLS) and periodic limb movement disorder (PLMD or PLMS), can also cause secondary hypersomnia. Although RLS does commonly cause excessive daytime sleepiness, PLMS does not. There is no evidence that PLMS plays "a role in the etiology of daytime sleepiness. In fact, two studies showed no correlation between PLMS and objective measures of excessive daytime sleepiness. In addition, EDS in these patients is best treated with psychostimulants—and not with dopaminergic agents known to suppress PLMS."

Neuromuscular diseases and spinal cord diseases often lead to sleep disturbances due to respiratory dysfunction causing sleep apnea, and they may also cause insomnia related to pain. "Other sleep alterations, such as periodic limb movement disorders in patients with spinal cord disease, have also been uncovered with the widespread use of polysomnography."

Primary hypersomnia in diabetes, hepatic encephalopathy, and acromegaly is rarely reported, but these medical conditions may also be associated with hypersomnia secondary to sleep apnea and periodic limb movement disorder (PLMD).

Chronic fatigue syndrome and fibromyalgia can also be associated with hypersomnia. Chronic fatigue syndrome is "characterized by persistent or relapsing fatigue that does not resolve with sleep or rest. Polysomnography shows reduced sleep efficiency and may include alpha intrusion into sleep EEG. It is likely that a number of cases labeled as chronic fatigue syndrome are unrecognized cases of upper airway resistance syndrome" or other sleep disorders, such as narcolepsy, sleep apnea, PLMD, etc.

As with chronic fatigue syndrome, fibromyalgia may be associated with anomalous alpha wave activity (typically associated with arousal states) during NREM sleep. Also, researchers have shown that disrupting stage IV sleep consistently in young, healthy subjects causes a significant increase in muscle tenderness—similar to that experienced in "neurasthenic musculoskeletal pain syndrome". This pain resolved when the subjects were able to resume their normal sleep patterns.

Chronic kidney disease is commonly associated with sleep symptoms and excessive daytime sleepiness. 80% of those on dialysis have sleep disturbances. Sleep apnea can occur 10 times as often in uremic patients as in the general population and can affect up to 30–80% of patients on dialysis, though nighttime dialysis can improve this.
About 50% of dialysis patients have hypersomnia, as severe kidney disease can cause uremic encephalopathy, increased sleep-inducing cytokines, and impaired sleep efficiency. About 70% of dialysis patients are affected by insomnia, and RLS and PLMD affect 30%, though these may improve after dialysis or kidney transplant.

Most forms of cancer and their therapies can cause fatigue and disturbed sleep, affecting 25–99% of patients and often lasting for years after treatment completion. "Insomnia is common and a predictor of fatigue in cancer patients, and polysomnography demonstrates reduced sleep efficiency, prolonged initial sleep latency, and increased wake time during the night." Paraneoplastic syndromes can also cause insomnia, hypersomnia, and parasomnias.

Autoimmune diseases, especially lupus and rheumatoid arthritis, are often associated with hypersomnia. Morvan's syndrome is an example of a rarer autoimmune illness that can also lead to hypersomnia. Celiac disease is another autoimmune disease associated with poor sleep quality (which may lead to hypersomnia), "not only at diagnosis but also during treatment with a gluten-free diet." There are also some case reports of central hypersomnia in celiac disease. And RLS "has been shown to be frequent in celiac disease," presumably due to its associated iron deficiency.

Hypothyroidism and iron deficiency (with or without iron-deficiency anemia) can also cause secondary hypersomnia. Various tests for these disorders are done so they can be treated. Hypersomnia can also develop within months after infections and post-infectious conditions such as Whipple's disease, mononucleosis, HIV, and Guillain–Barré syndrome.

Behaviorally induced insufficient sleep syndrome must be considered in the differential diagnosis of secondary hypersomnia. This disorder occurs in individuals who fail to get sufficient sleep for at least three months. In this case, the patient has chronic sleep deprivation, although they may not necessarily be aware of it. This situation is becoming more prevalent in western society due to the modern demands and expectations placed upon the individual.

Many medications can lead to secondary hypersomnia. Therefore, a patient's complete medication list should be carefully reviewed for sleepiness or fatigue as side effects. In these cases, careful withdrawal from the possibly offending medication(s) is needed; then, medication substitution can be undertaken.

Mood disorders, like depression, anxiety disorder, and bipolar disorder, can also be associated with hypersomnia. The complaint of excessive daytime sleepiness in these conditions is often associated with poor sleep at night. "In that sense, insomnia and EDS are frequently associated, especially in cases of depression." Hypersomnia in mood disorders seems to be primarily related to "lack of interest and decreased energy inherent in the depressed condition rather than an increase in sleep or REM sleep propensity". In all cases with these mood disorders, the MSLT is normal (not too short and no SOREMPs).
Posttraumatic hypersomnias
In some cases, hypersomnia can be caused by a brain injury. Researchers found that the level of sleepiness is correlated with the severity of the injury. Even though patients often reported improvement, sleepiness remained present a year after the injury in about a quarter of patients with traumatic brain injury.
Recurrent hypersomnias
Recurrent hypersomnias are defined by several episodes of hypersomnia lasting from a few days to weeks. These episodes can occur weeks or months apart from each other. There are two subtypes of recurrent hypersomnia: Kleine-Levin syndrome and menstrual-related hypersomnia.

Kleine-Levin syndrome is characterized by the association of episodes of hypersomnia with behavioral, cognitive, and mood abnormalities. The behavioral disturbances can include hyperphagia, irritability, or sexual disinhibition. The cognitive disorders consist of confusion, hallucinations, or delusions, and mood symptoms are characterized by anxiety or depression.

Menstrual-related hypersomnia is characterized by episodes of excessive sleepiness associated with the menstrual cycle. Researchers found that the degree of premenstrual symptoms correlated with daytime sleepiness. Unlike Kleine-Levin syndrome, hyperphagia and hypersexuality are not reported in people with menstrual-related hypersomnia, but hypophagia could be present. Ordinarily, these episodes appear two weeks before menstruation. A few studies have suggested that hormones such as prolactin and progesterone could be responsible for menstrual-related hypersomnia; accordingly, different contraceptive pills could improve the symptoms. Sleep architecture also changes, with a decrease in slow-wave sleep and an increase in slow theta-wave activity.
Assessment tools
Polysomnography
Polysomnography is an objective sleep assessment method. It uses multiple electrodes to measure physiological variables related to sleep, typically including electroencephalography, electromyography, and electrocardiography, along with monitoring of respiratory function. Polysomnography is helpful for identifying the very short sleep onset latency period, the very efficient sleep (more than 90%), the increased slow wave sleep, and sometimes an elevated number of sleep spindles seen in idiopathic hypersomnia patients.
Multiple sleep latency test (MSLT)
The multiple sleep latency test (MSLT) is an objective tool which indicates the degree of sleepiness by measuring the sleep latency (i.e., the speed of falling asleep). It also gives information regarding the presence of abnormal REM sleep onset episodes. During the test, patients have a series of opportunities to sleep at 2-hour intervals across the day in a darkened room and with no external alerting influences.

The MSLT is often administered the day after recording the polysomnography, and the mean sleep latency score is often found to be around (or less than) 8 minutes in idiopathic hypersomnia patients. Some patients might even have a sleep onset latency of 5 minutes or less. These patients are often even more aware of sleeping during naps than narcolepsy patients.
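In practice, the MSLT summary is simply the mean of the sleep latencies across the day's nap opportunities, with sleep-onset REM periods (SOREMPs) tallied separately. A minimal sketch of that summary, using hypothetical nap data and the roughly 8-minute figure cited above as the reference point:

# Minimal sketch of an MSLT summary. The nap latencies (in minutes) and
# REM-onset flags below are hypothetical; the ~8-minute mean cited in the
# text above serves as the reference point for idiopathic hypersomnia.

naps = [
    {"latency_min": 6.0, "sorem": False},
    {"latency_min": 4.5, "sorem": False},
    {"latency_min": 7.0, "sorem": False},
    {"latency_min": 5.5, "sorem": False},
    {"latency_min": 6.5, "sorem": False},
]

mean_latency = sum(n["latency_min"] for n in naps) / len(naps)
sorem_count = sum(n["sorem"] for n in naps)

print(f"Mean sleep latency: {mean_latency:.1f} min across {len(naps)} naps")
print(f"Sleep-onset REM periods: {sorem_count}")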
Actigraphy
Actigraphy, which operates by analyzing the patient's limb movements, is used to record the sleep and wake cycles. To record them, the patient continuously wears a device on the wrist, which looks like a watch and contains no electrodes.
The advantage actigraphy has over polysomnography is that it can record 24 hours a day for weeks. Furthermore, unlike polysomnography, it is less expensive and non-invasive. An actigraphy recording over several days can show longer sleep periods, which are characteristic of idiopathic hypersomnia. Actigraphy is also helpful in ruling out other sleep disorders, especially circadian rhythm disorders, which likewise lead to an excess of sleepiness during the day.
The maintenance of wakefulness test (MWT)
The maintenance of wakefulness test (MWT) is a test that measures the ability to stay awake. It is used to diagnose disorders of excessive somnolence, such as hypersomnia, narcolepsy or obstructive sleep apnea. During that test, patients sit comfortably and are instructed to try to stay awake.
The Stanford sleepiness scale (SSS)
The Stanford sleepiness scale (SSS) is a self-report scale that measures levels of sleepiness. For each assessment, patients report their current level of sleepiness using a 7-point scale ranging from very alert to excessively sleepy. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Pectus carinatum and explain in detail? [/INST] Pectus carinatum, also called pigeon chest, is a malformation of the chest characterized by a protrusion of the sternum and ribs. It is distinct from the related malformation pectus excavatum.
Signs and symptoms
People with pectus carinatum usually develop normal hearts and lungs, but the malformation may prevent these from functioning optimally. In moderate to severe cases of pectus carinatum, the chest wall is rigidly held in an outward position. Thus, respirations are inefficient and the individual needs to use the accessory muscles for respiration, rather than normal chest muscles, during strenuous exercise. This negatively affects gas exchange and causes a decrease in stamina. Children with pectus malformations often tire sooner than their peers due to shortness of breath and fatigue. Commonly concurrent is mild to moderate asthma.
Some children with pectus carinatum also have scoliosis (i.e., curvature of the spine). Some have mitral valve prolapse, a condition in which the heart mitral valve functions abnormally. Connective tissue disorders involving structural abnormalities of the major blood vessels and heart valves are also seen. Although rarely seen, some children have other connective tissue disorders, including arthritis, visual impairment and healing impairment.
Apart from the possible physiologic consequences, pectus malformations can have a significant psychologic impact. Some people, especially those with milder cases, live happily with pectus carinatum. For others, though, the shape of the chest can damage their self-image and confidence, possibly disrupting social connections and causing them to feel uncomfortable throughout adolescence and adulthood. As the child grows older, bodybuilding techniques may be useful for balancing visual impact.

A less common variant of pectus carinatum is pectus arcuatum (also called type 2 pectus excavatum, chondromanubrial malformation, Currarino–Silverman syndrome, or pouter pigeon malformation), which produces a manubrial and upper sternal protrusion, particularly also at the sternal angle. Pectus arcuatum is often confused with a combination of pectus carinatum and pectus excavatum, but in pectus arcuatum the visual appearance is characterized by a protrusion of the costal cartilages and there is no depression of the sternum.
Causes
Pectus carinatum is an overgrowth of costal cartilage causing the sternum to protrude forward. It primarily occurs among four different patient groups, and males are more frequently affected than females. Most commonly, pectus carinatum develops in 11-to-14-year-old pubertal males undergoing a growth spurt. Some parents report that their child's pectus carinatum seemingly popped up overnight. Second most common is the presence of pectus carinatum at or shortly after birth. The condition may be evident in newborns as a rounded anterior chest wall. As the child reaches age 2 or 3 years, the outward sternal protrusion becomes more pronounced. Pectus carinatum can also be caused by vitamin D deficiency in children (rickets), due to deposition of unmineralized osteoid. Least common is a pectus carinatum malformation following open-heart surgery or in children with poorly controlled bronchial asthma.
Pectus carinatum is generally a solitary, non-syndromic abnormality. However, the condition may be present in association with other syndromes: Turner syndrome, Noonan syndrome, Loeys–Dietz syndrome, Marfan syndrome, Ehlers–Danlos syndrome, Morquio syndrome, trisomy 18, trisomy 21, homocystinuria, osteogenesis imperfecta, multiple lentigines syndrome (LEOPARD syndrome), Sly syndrome (mucopolysaccharidosis type VII), and scoliosis.
In about 25% of cases of pectus carinatum, the patient has a family member with the condition.
Diagnosis
Pectus carinatum can be readily diagnosed with imaging tests such as a CT scan (2D and 3D). Imaging may show that the rib cage itself has a normal structure, with above-average growth of the sternum producing the protrusion. The malformation can be either symmetrical or asymmetrical, and further treatment is planned on that basis.
Treatment
External bracing technique
The use of orthotic bracing, pioneered by Sydney Haje beginning in 1977, is finding increasing acceptance as an alternative to surgery in select cases of pectus carinatum. In children, teenagers, and young adults who have pectus carinatum and are motivated to avoid surgery, the use of a customized chest-wall brace that applies direct pressure on the protruding area of the chest produces excellent outcomes. Willingness to wear the brace as required is essential for the success of this treatment approach. The brace works in much the same way as orthodontics (braces that correct the alignment of teeth). It consists of front and back compression plates that are anchored to aluminum bars, which are bound together by a tightening mechanism that varies from brace to brace. This device is easily hidden under clothing and must be worn for 14 to 24 hours a day. The wearing time varies with each brace manufacturer and the managing physician's protocol, which may be based on the severity of the carinatum malformation (mild, moderate, or severe) and on whether it is symmetric or asymmetric.

Depending on the manufacturer and/or the patient's preference, the brace may be worn on the skin or over a body sock or sleeve called a Bracemate, specifically designed to be worn under braces. A physician, orthotist, or brace manufacturer's representative can show how to check that the brace is in the correct position on the chest.

Bracing is becoming more popular than surgery for pectus carinatum, mostly because it eliminates the risks that accompany surgery. The prescribing of bracing as a treatment for pectus carinatum has trickled down from both paediatric and thoracic surgeons to family physicians and pediatricians, again due to its lower risks and well-documented, very high success rates. The pectus carinatum guideline of 2012 of the American Pediatric Surgical Association has stated: "As reconstructive therapy for the compliant pectus [carinatum] malformation, nonoperative compressive orthotic bracing is usually an appropriate first line of therapy as it does not preclude the operative option. For appropriate candidates, orthotic bracing of chest wall malformations can reasonably be expected to prevent worsening of the malformation and often results in a lasting correction of the malformation. Orthotic bracing is often successful in prepubertal children whose chest wall is compliant. Expert opinion suggests that the noncompliant chest wall malformation or significant asymmetry of the pectus carinatum malformation caused by a concomitant excavatum-type malformation may not respond to orthotic bracing."

Regular supervision during the bracing period is required for optimal results. Adjustments may be needed to the brace as the child grows and the pectus improves.
Surgical
For patients with severe pectus carinatum, surgery may be necessary. However, bracing may still be the first line of treatment. </s>
<s> [INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What is Transsexual and explain in detail? [/INST] Transsexual people experience a gender identity that is inconsistent with their assigned sex, and desire to permanently transition to the sex or gender with which they identify, usually seeking medical assistance (including sex reassignment therapies, such as hormone replacement therapy and sex reassignment surgery) to help them align their body with their identified sex or gender.
The term transsexual is a subset of transgender, but some transsexual people reject the label of transgender. A medical diagnosis of gender dysphoria can be made if a person experiences marked and persistent incongruence between their experienced gender — their personal sense of their own gender — and their assigned sex.
Understanding of transsexuality has changed very quickly in the 21st century. Many 20th century medical beliefs and practices around transsexuality are now considered deeply outdated. It was once classified as a mental disorder and subject to extensive gatekeeping by the medical establishment, and remains so in much of the world.
Terminology
Transsexual has had different meanings throughout time. In modern usage, it refers to "a person who desires to or who has modified their body to transition from one gender or sex to another through the use of medical technologies such as hormones or surgeries." It is considered an antiquated term, and is sometimes pejorative. The general shift is to instead use the term transgender or the abbreviated form trans, but due to its historical usage, transsexual remains in the modern vernacular.: 742–744 In understanding the subject, it is noted that there is a difference between gender and sex. Gender is defined as a "set of social, cultural, and linguistic norms that can be attributed to someone’s identity, expression, or role as masculine, feminine, androgynous, or nonbinary." Sex is defined as being "assigned at birth by medical professionals based on the appearance of genitalia, and related assumptions about chromosomal makeup, gender identity, expressions, and roles emerge over the life span, sometimes changing over time.": 277–278
Origins
Norman Haire reported that in 1921 Dora R of Germany began a surgical transition, under the care of Magnus Hirschfeld, which ended in 1930 with a successful genital reassignment surgery (GRS). In 1930, Hirschfeld supervised the second genital reassignment surgery to be reported in detail in a peer-reviewed journal, that of Lili Elbe of Denmark. In 1923, Hirschfeld introduced the (German) term "Transsexualismus", after which David Oliver Cauldwell introduced "transsexualism" and "transsexual" to English in 1949 and 1950.

Cauldwell appears to be the first to use the term to refer to those who desired a change of physiological sex. In 1969, Harry Benjamin claimed to have been the first to use the term "transsexual" in a public lecture, which he gave in December 1953. Benjamin went on to popularize the term in his 1966 book, The Transsexual Phenomenon, in which he described transsexual people on a scale (later called the "Benjamin scale") of three levels of intensity: "Transsexual (nonsurgical)", "Transsexual (moderate intensity)", and "Transsexual (high intensity)". In his book, Benjamin described "true" transsexualism as the following:
True transsexuals feel that they belong to the other sex, they want to be and function as members of the opposite sex, not only to appear as such. For them, their sex organs, the primary (testes) as well as the secondary (penis and others) are disgusting deformities that must be changed by the surgeons knife.
Relationship to transgender
The term transgender was coined by John Oliven in 1965. By the 1990s, transsexual had come to be considered a subset of the umbrella term transgender. The term transgender is now more common, and many transgender people prefer the designation transgender and reject transsexual. Some people who pursue medical assistance (for example, sex reassignment surgery) to change their sexual characteristics to match their gender identity prefer the designation transsexual and reject transgender. One perspective offered by transsexual people who reject a transgender label for that of transsexed is that, for people who have gone through sexual reassignment surgery, their anatomical sex has been altered, whilst their gender remains constant.

Historically, one reason some people preferred transsexual to transgender is that the medical community in the 1950s through the 1980s encouraged a distinction between the terms that would only allow the former access to medical treatment. Other self-identified transsexual people state that those who do not seek sex reassignment surgery (SRS) are fundamentally different from those who do, and that the two have different concerns, but this view is controversial, and others argue that merely having some medical procedures does not have such far-reaching consequences as to put those who have them and those who have not (e.g., because they cannot afford them) into such distinctive categories. Some have objected to the term transsexual on the basis that it describes a condition related to gender identity rather than sexuality. For example, Christine Jorgensen, the first person widely known to have sex reassignment surgery (in this case, male-to-female), rejected transsexual and instead identified herself in newsprint as trans-gender, on this basis.

The trans community has generally taken issue with the term transsexual because it over-medicalizes the trans experience and focuses too much on diagnosis.: 743 The term transgender emerged in part in an attempt to break the "medical monopoly" on transitioning that transsexual implied.
GLAAD's media reference guide offers the following distinction on the use of transsexual:

An older term that originated in the medical and psychological communities. As the gay and lesbian community rejected homosexual and replaced it with gay and lesbian, the transgender community rejected transsexual and replaced it with transgender. Some people within the trans community may still call themselves transsexual. Do not use transsexual to describe a person unless it is a word they use to describe themself. If the subject of your news article uses the word transsexual to describe themself, use it as an adjective: transsexual woman or transsexual man.
Terminological variance
The word transsexual is most often used as an adjective rather than a noun – a "transsexual person" rather than simply "a transsexual". As of 2018, use of the noun form (e.g. referring to people as transsexuals) is often deprecated by those in the transsexual community. Like other trans people, transsexual people prefer to be referred to by the gender pronouns and terms associated with their gender identity. For example, a trans man is a person who was assigned the female sex at birth on the basis of his genitals, but despite that assignment, identifies as a man and is transitioning or has transitioned to a male gender role; in the case of a transsexual man, he furthermore has or will have a masculine body. Transsexual people are sometimes referred to with directional terms, such as "female-to-male" for a transsexual man, abbreviated to "F2M", "FTM", and "F to M", or "male-to-female" for a transsexual woman, abbreviated "M2F", "MTF" and "M to F".
Individuals who have undergone and completed sex reassignment surgery are sometimes referred to as transsexed individuals; however, the term transsexed is not to be confused with the term transsexual, which can also refer to individuals who have not yet undergone SRS, and whose anatomical sex (still) does not match their psychological sense of personal gender identity.
The terms gender dysphoria and gender identity disorder were not used until the 1970s, when Laub and Fisk published several works on transsexualism using these terms. "Transsexualism" was replaced in the DSM-IV by "gender identity disorder in adolescents and adults".
Male-to-female transsexualism has sometimes been called "Harry Benjamin's syndrome" after the endocrinologist who pioneered the study of dysphoria. As the present-day medical study of gender variance is much broader than Benjamin's early description, there is greater understanding of its aspects, and use of the term Harry Benjamin's syndrome has been criticized for delegitimizing gender-variant people with different experiences.
Sexual orientation
Since the middle of the 20th century, homosexual transsexual and related terms were used to label individuals' sexual orientation based on their birth sex. Many sources criticize this choice of wording as confusing, "heterosexist", "archaic", and demeaning because it labels people by sex assigned at birth instead of their gender identity. Sexologist John Bancroft also recently expressed regret for having used this terminology, which was standard when he used it, to refer to transsexual women. He says that he now tries to choose his words more sensitively. Sexologist Charles Allen Moser is likewise critical of the terminology. Sociomedical scientist Rebecca Jordan-Young challenges researchers like Simon LeVay, J. Michael Bailey, and Martin Lalumiere, who she says "have completely failed to appreciate the implications of alternative ways of framing sexual orientation."

The terms androphilia and gynephilia, which describe a person's sexual orientation without reference to their gender identity, were proposed and popularized by psychologist Ron Langevin in the 1980s. The similar specifiers attracted to men, attracted to women, attracted to both, and attracted to neither were used in the DSM-IV. Many transsexual people choose the language of how they refer to their sexual orientation based on their gender identity, not their birth assigned sex.
Surgical status
Several terms are in common use, especially within the community itself relating to the surgical or operative status of someone who is transsexual, depending on whether they have already had sex reassignment surgery (SRS), have not had SRS but still intend to, or do not intend to have SRS. They are, post-op, pre-op, and non-op, respectively.
Pre-operative
A pre-operative transsexual person, or simply pre-op for short, is someone who intends to have SRS at some point, but has not yet had it.
Post-operative
A post-operative transsexual person, or post-op for short, is someone who has had SRS.
Non-operative
A non-operative transsexual person, or non-op, is someone who has not had SRS, and does not intend to have it in the future. There can be various reasons for this, from personal to financial. Having SRS is not a requirement of being transsexual. Evolutionary biologist and trans woman Julia Serano criticizes the societal preoccupation with SRS as phallocentric, objectifying of transsexuals, and an invasion of privacy.: 229–231
Historical understanding
Transgender people are known to have existed since ancient times. A wide range of societies had traditional third gender roles, or otherwise accepted trans people in some form. However, a precise history is difficult because the modern concept of being transgender, and gender in general, did not develop until the mid-1900s. Historical understandings are thus inherently filtered through modern principles, and were largely viewed through a medical lens until the late 1900s.

The ancient Greek Hippocrates (interpreting the writing of Herodotus) discusses transgender individuals briefly. He describes the "disease of the Scythians" (regarding the Enaree), which he attributes to impotency due to riding on a horse without stirrups. Hippocrates' reference was well discussed in medical writings of the 1500s–1700s. Pierre Petit, writing in 1596, viewed the "Scythian disease" as natural variation, but by the 1700s writers viewed it as a "melancholy" or "hysterical" psychiatric disease. By the early 1800s, being transgender separate from Hippocrates' idea of it was claimed to be widely known, but remained poorly documented. Both MtF and FtM individuals were cited in European insane asylums of the early 1800s. The most complete account of the time came from the life of the Chevalier d'Éon (1728–1810). As cross-dressing became more widespread in the late 1800s, discussion of transgender people increased greatly and writers attempted to explain the origins of being transgender. Much study came out of Germany, and was exported to other Western audiences. Cross-dressing was seen in a pragmatic light until the late 1800s; it had previously served a satirical or disguising purpose. But in the latter half of the 1800s, cross-dressing and being transgender became viewed as an increasing societal danger.

William A. Hammond wrote an 1882 account of transgender Pueblo shamans (mujerados), comparing them to the Scythian disease. Other writers of the late 1700s and 1800s (including Hammond's associates in the American Neurological Association) had noted the widespread nature of transgender cultural practices among native peoples. Explanations varied, but authors generally did not ascribe native transgender practices to psychiatric causes, instead condemning the practices in a religious and moral sense. Native groups provided much study on the subject, and perhaps the majority of all study until after WWII.

Critical studies first began to emerge in the late 1800s in Germany, with the works of Magnus Hirschfeld. Hirschfeld coined the term "transvestite" in 1910 as the scope of transgender study grew. His work would lead to the 1919 founding of the Institut für Sexualwissenschaft in Berlin. Though Hirschfeld's legacy is disputed, he revolutionized the field of study. The Institut was destroyed when the Nazis seized power in 1933, and its research was infamously burned in the May 1933 Nazi book burnings. Transgender issues went largely out of the public eye until after World War II. Even when they re-emerged, they reflected a forensic psychology approach, unlike the more sexological approach that had been employed in the lost German research.
20th century medical understanding
Benjamin suggested that moderate intensity male to female transsexual people may benefit from estrogen medication as a "substitute for or preliminary to operation." Some people have had sex reassignment surgery (SRS) but do not meet the above definition of transsexual. Other people do not desire SRS although they meet the other elements of Benjamins definition of a "true transsexual". Transsexuality was included for the first time in the DSM-III in 1980 and again in the DSM-III-R in 1987, where it was located under Disorders Usually First Evident in Infancy, Childhood or Adolescence.
Beyond Benjamins work, which focused on male-to-female (MTF) transsexual people, there are cases of the female to male transsexual, for whom genital surgery may not be practical. Benjamin gave certifying letters to his MTF transsexual patients that stated "Their anatomical sex, that is to say, the body, is male. Their psychological sex, that is to say, the mind, is female." Starting in 1968 Benjamin abandoned his early terminology and adopted that of "gender identity."
Medical diagnosis
Transsexualism is no longer classified as a mental disorder in the International Statistical Classification of Diseases and Related Health Problems (ICD). The World Professional Association for Transgender Health (WPATH) and many transsexual people had recommended this removal,: 743 arguing that at least some mental health professionals are being insensitive by labelling transsexualism as a "disease" rather than as an inborn trait, as many transsexuals believe it to be. Now, instead, it is classified as a sexual health condition; this classification continues to enable healthcare systems to provide healthcare needs related to gender. The eleventh edition was released in June 2018. The previous version, ICD-10, had incorporated transsexualism, dual role transvestism, and gender identity disorder of childhood into its gender identity disorder category. It defined transsexualism as "[a] desire to live and be accepted as a member of the opposite sex, usually accompanied by a sense of discomfort with, or inappropriateness of, ones anatomic sex, and a wish to have surgery and hormonal treatment to make ones body as congruent as possible with ones preferred sex." ICD-11 renamed Transsexualism as Gender incongruence of adolescence or adulthood (HA60), and Gender identity disorder of childhood was renamed Gender incongruence of childhood (HA61).
HA60 of the ICD-11 reads:

Gender Incongruence of Adolescence and Adulthood is characterised by a marked and persistent incongruence between an individual's experienced gender and the assigned sex, which often leads to a desire to 'transition', in order to live and be accepted as a person of the experienced gender, through hormonal treatment, surgery or other health care services to make the individual's body align, as much as desired and to the extent possible, with the experienced gender. The diagnosis cannot be assigned prior to the onset of puberty. [HA61 applies before puberty] Gender variant behaviour and preferences alone are not a basis for assigning the diagnosis.

Historically, transsexualism has also been included in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM). With the DSM-5, transsexualism was removed as a diagnosis, and a diagnosis of gender dysphoria was created in its place. This change was made to reflect the consensus view by members of the APA that transsexuality is not in and of itself a disorder and that transsexual people should not be stigmatized unnecessarily. By including a diagnosis for gender dysphoria, transsexual people are still able to access medical care through the process of transition.
The current diagnosis for transsexual people who present themselves for medical treatment is gender dysphoria (leaving out those who have sexual identity disorders without gender concerns). According to the Standards of care formulated by WPATH, formerly the Harry Benjamin International Gender Dysphoria Association, this diagnostic label is often necessary to obtain sex reassignment therapy with health insurance coverage, and the designation of gender identity disorders as mental disorders is not a license for stigmatization or for the deprivation of gender patients civil rights.
Causes, studies, and theories
Causes
Studies conducted on twins suggest that there are likely genetic causes of gender incongruence, although the precise genes involved are not known or fully understood. One study published in the International Journal of Transgender Health found that in 20% of identical twin pairs, if one twin was trans, the other was as well, compared to only 2.6% of non-identical twins where this was the case; researchers attribute this to their shared genetics.Sexologist Ray Blanchard created a taxonomy of male-to-female transsexualism that proposes two distinct etiologies for androphilic and gynephilic individuals; this taxonomy has become controversial, supported by J. Michael Bailey, Anne Lawrence, James Cantor and others, but opposed by Charles Allen Moser, Julia Serano, and the World Professional Association for Transgender Health.
Focus on trans women over trans men
Historically, formal efforts by the medical community to provide transsexual healthcare were extremely focused on transsexual women, with little thought for transsexual men. Julia Serano suggests that effemimania (the idea that male femininity is more psychopathological than female masculinity) was the driving factor. She sees this as a kind of transmisogyny (hatred of trans women as an extension of sexism).: 126–127 This effemimania conflates male homosexuality, MTF transsexuality, and feminine gender expression, while treating them all as a disease.: 129 She points to the medical community's long embrace of now-outdated theories such as autogynephilia.: 131
Medical assistance
Individuals make different choices regarding sex reassignment therapy, which may include hormones, minor to extensive surgery, social changes, and psychological interventions. The extent of medical intervention is a highly personal decision: there is no one-size-fits-all solution.
Hormone replacement therapy
Transsexual individuals frequently opt for masculinizing or feminizing hormone replacement therapy (HRT) to modify secondary sex characteristics.
Sex reassignment therapy
Sex reassignment therapy (SRT) is an umbrella term for all medical treatments related to sex reassignment of both transgender and intersex people. Sex reassignment surgery (such as orchiectomy) alters primary sex characteristics; other procedures alter secondary characteristics, including chest surgery such as top surgery or breast augmentation and, in the case of trans women, a trachea shave, facial feminization surgery, or permanent hair removal.
To obtain sex reassignment therapy, transsexual people are generally required to undergo a psychological evaluation and receive a diagnosis of gender identity disorder in accordance with the Standards of Care (SOC) as published by the World Professional Association for Transgender Health. This assessment is usually accompanied by counseling on issues of adjustment to the desired gender role, effects and risks of medical treatments, and sometimes also by psychological therapy. The SOC are intended as guidelines, not inflexible rules, and are intended to ensure that clients are properly informed and in sound psychological health, and to discourage people from transitioning based on unrealistic expectations.
Gender roles and transitioning
After an initial psychological evaluation, trans men and trans women may begin medical treatment, starting with hormone replacement therapy or hormone blockers. In these cases, people who change their gender are usually required to live as members of their target gender for at least one year prior to genital surgery, gaining real-life experience, which is sometimes called the "real-life test" (RLT). Transsexual individuals may undergo some, all, or none of the medical procedures available, depending on personal feelings, health, income, and other considerations. Some people posit that transsexualism is a physical condition, not a psychological issue, and assert that sex reassignment therapy should be given on request. (Brown 103)
Like other trans people, transsexual people may refer to themselves as trans men or trans women. Transsexual people desire to establish a permanent gender role as a member of the gender with which they identify, and many transsexual people pursue medical interventions as part of the process of expressing their gender. The entire process of switching from one physical sex and social gender presentation to another is often referred to as transitioning, and usually takes several years. Transsexual people who transition usually change their social gender roles, legal names, and legal sex designation.

Not all transsexual people undergo a physical transition. Some have obstacles or concerns preventing them from doing so, such as the expense of surgery, the risk of medical complications, or medical conditions which make the use of hormones or surgery dangerous. Others may not identify strongly with another binary gender role. Still others may find balance at a midpoint during the process, regardless of whether or not they are binary-identified. Many transsexual people, including binary-identified transsexual people, do not undergo genital surgery, because they are comfortable with their own genitals, or because they are concerned about nerve damage and the potential loss of sexual pleasure, including orgasm. This is especially so in the case of trans men, many of whom are dissatisfied with the current state of phalloplasty, which is typically very expensive, not covered by health insurance, and commonly does not achieve desired results. For example, not only does phalloplasty not result in a completely natural erection, it may not allow for an erection at all, and its results commonly lack penile sexual sensitivity; in other cases, however, phalloplasty results are satisfying for trans men. By contrast, metoidioplasty, which is more popular, is significantly less expensive and has far better sexual results.

Transsexual people can be heterosexual, gay, lesbian, or bisexual; many choose the language of how they refer to their sexual orientation based on their gender identity, not their birth assigned sex.
Psychological treatment
Psychological techniques that attempt to alter gender identity to one considered appropriate for the person's assigned sex, also known as conversion therapy, are ineffective. The widely recognized Standards of Care note that sometimes the only reasonable and effective course of treatment for transsexual people is to go through sex reassignment therapy.

The need for treatment of transsexual people is emphasized by the high rate of mental health problems, including depression, anxiety, and various addictions, as well as a higher suicide rate among untreated transsexual people than in the general population. These problems are alleviated by a change of gender role and/or physical characteristics.

Many transgender and transsexual activists, and many caregivers, note that these problems are not usually related to the gender identity issues themselves, but to the social and cultural responses to gender-variant individuals. Some transsexual people reject the counseling that is recommended by the Standards of Care because they do not consider their gender identity to be a cause of psychological problems.
Brown and Rounsley noted that "some transsexual people acquiesce to legal and medical expectations in order to gain rights granted through the medical/psychological hierarchy." Legal needs, such as a change of sex on legal documents, and medical needs, such as sex reassignment surgery, are usually difficult to obtain without a doctor's or therapist's approval. Because of this, some transsexual people feel coerced into affirming outdated concepts of gender to overcome simple legal and medical hurdles.
Regrets and detransitions
People who undergo sex reassignment surgery can develop regret for the procedure later in life, largely predicted by a lack of support from family or peers, with data from the 1990s suggesting a rate of 3.8%. In a 2001 study of 232 MTF patients who underwent GRS, none of the patients reported complete regret and only 6% reported partial or occasional regrets. A 2009 review of Medline literature suggests the total rate of patients expressing feelings of doubt or regret is estimated to be as high as 8%.

A 2010 meta-study, based on 28 previous long-term studies of transsexual men and women, found that the overall psychological functioning of transsexual people after transition was similar to that of the general population and significantly better than that of untreated transsexual people.
Prevalence
Estimates of the prevalence of transsexual people are highly dependent on the specific case definitions used in the studies, with prevalence rates varying by orders of magnitude. In the United States, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5, 2013) gives the following estimates: "For natal adult males [MTF], prevalence ranges from 0.005% to 0.014%, and for natal females [FTM], from 0.002% to 0.003%." It states, however, that these are likely underestimates, since the figures are based on referrals to specialty clinics.

The Amsterdam Gender Dysphoria Clinic over four decades has treated roughly 95% of Dutch transsexual clients, and it suggests (1997) a prevalence of 1:10,000 among assigned males and 1:30,000 among assigned females.

Olyslager and Conway presented a paper at the WPATH 20th International Symposium (2007) arguing that the data from their own and other studies actually imply much higher prevalence, with minimum lower bounds of 1:4,500 male-to-female transsexual people and 1:8,000 female-to-male transsexual people for a number of countries worldwide. They estimate the number of post-op women in the US to be 32,000, and obtain a figure of 1:2,500 male-to-female transsexual people. They further compare the annual instances of sex reassignment surgery (SRS) and male births in the U.S. to obtain a figure of 1:1,000 MTF transsexual people, and suggest a prevalence of 1:500, extrapolated from the rising rates of SRS in the US and a "common sense" estimate of the number of undiagnosed transsexual people. Olyslager and Conway also argue that the US population of assigned males having already undergone reassignment surgery by the top three US SRS surgeons alone is enough to account for the entire transsexual population implied by the 1:10,000 prevalence number; yet this excludes all other US SRS surgeons, surgeons in countries such as Thailand, Canada, and others, and the high proportion of transsexual people who have not yet sought treatment, suggesting that a prevalence of 1:10,000 is too low.
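Because these estimates are quoted as 1:N ratios, comparing them is easier on a common per-100,000 scale; the conversion is simple arithmetic, sketched below in Python with the figures quoted above:

# Convert the 1:N prevalence ratios quoted above to a common per-100,000
# scale. The ratios come from the text; the conversion is arithmetic only.

estimates = {
    "Amsterdam clinic, assigned males (1997)": 10_000,
    "Amsterdam clinic, assigned females (1997)": 30_000,
    "Olyslager & Conway lower bound, MTF": 4_500,
    "Olyslager & Conway lower bound, FTM": 8_000,
    "Olyslager & Conway, US post-op estimate": 2_500,
}

for label, n in estimates.items():
    print(f"{label}: 1:{n:,} = {100_000 / n:.1f} per 100,000")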
A 2008 study of the number of New Zealand passport holders who changed the sex on their passport estimated that 1:3,639 birth-assigned males and 1:22,714 birth-assigned females were transsexual.

A 2008 presentation at the LGBT Health Summit in Bristol, UK, showed that the prevalence of transsexual people in the UK was increasing (14% per year) and that the mean age of transition was rising.
Though no direct studies on the prevalence of gender identity disorder (GID) have been done, a variety of clinical papers published in the past 20 years provide estimates ranging from 1:7,400 to 1:42,000 in assigned males and 1:30,040 to 1:104,000 in assigned females.

In 2015, the National Center for Transgender Equality conducted a National Transgender Discrimination Survey. Of the 27,715 transgender and genderqueer people who took the survey, 35% identified as "non-binary", 33% identified as transgender women, 29% identified as transgender men, and 3% said that "crossdresser" best described their gender identity.

A 2016 systematic review and meta-analysis of "how various definitions of transgender affect prevalence estimates" in 27 studies found meta-prevalence (mP) estimates per 100,000 population of 9.2 (95% CI = 4.9–13.6), equal to 1:11,000, for surgical or hormonal gender affirmation therapy, and 6.8 (95% CI = 4.6–9.1), equal to 1:15,000, for transgender-related medical condition diagnoses. Of studies assessing self-reported transgender identity, prevalence was 355 (95% CI = 144–566), equal to 1 in 282. However, a single outlier study would have influenced the result to 871 (95% CI = 519–1,224), equal to 1 in 115; this study was removed. "Significant heterogeneity was observed in most analyses."
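The 2016 meta-analysis reports rates in the opposite direction, per 100,000 population; converting those rates back to 1:N ratios reproduces the figures in the text. A minimal sketch:

# Convert the 2016 meta-analysis rates (per 100,000 population) back to
# the 1:N ratios given in the text; values are taken from the text above.

def one_in_n(rate_per_100k):
    """Return N such that the rate corresponds to roughly 1 in N people."""
    return round(100_000 / rate_per_100k)

for label, rate in [
    ("Surgical or hormonal gender affirmation therapy", 9.2),
    ("Transgender-related medical diagnoses", 6.8),
    ("Self-reported transgender identity", 355),
    ("Self-reported identity, outlier included", 871),
]:
    print(f"{label}: {rate} per 100,000 = about 1 in {one_in_n(rate):,}")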
Society and culture
A number of Native American and First Nations cultures have traditional social and ceremonial roles for individuals who do not fit into the usual roles for males and females in that culture. These roles can vary widely between tribes, because gender roles, when they exist at all, also vary considerably among different Native cultures. However, a modern, pan-Indian status known as Two-Spirit has emerged among LGBT Natives in recent years.
Legal and social aspects
Laws regarding changes to the legal status of transsexual people are different from country to country. Some jurisdictions allow an individual to change their name, and sometimes, their legal gender, to reflect their gender identity. Within the US, some states allow amendments or complete replacement of the original birth certificates. Some states seal earlier records against all but court orders in order to protect the transsexual persons privacy.
In many places, it is not possible to change birth records or other legal designations of sex, although changes are occurring. Estelle Asmodelle’s book documented her struggle to change the Australian birth certificate and passport laws, although there are other individuals who have been instrumental in changing laws and thus attaining more acceptance for transsexual people in general.
Medical treatment for transsexual and transgender people is available in most Western countries. However, transsexual and transgender people challenge the "normative" gender roles of many cultures and often face considerable hatred and prejudice. The film Boys Don't Cry chronicles the case of Brandon Teena, a transsexual man who was raped and murdered after his status was discovered. In 1999 Brandon was memorialised in the first Transgender Day of Remembrance, which is observed annually on November 20 by members of the transgender community and LGBT+ organisations across the world.

Jurisdictions allowing changes to birth records generally allow trans people to marry members of the opposite sex to their gender identity and to adopt children. Jurisdictions which prohibit same-sex marriage often require pre-transition marriages to be ended before they will issue an amended birth certificate.

Health-practitioner manuals, professional journalistic style guides, and LGBT advocacy groups advise the adoption by others of the name and pronouns identified by the person in question, including present references to the transgender or transsexual person's past. Family members and friends who may be confused about pronoun usage or the definitions of sex are commonly instructed in proper pronoun usage, either by the transsexual person or by professionals or other persons familiar with pronoun usage as it relates to transsexual people. Sometimes transsexual people have to correct their friends and family members many times before they begin to use the transsexual person's desired pronouns consistently. According to Julia Serano, deliberate mis-gendering of transsexual people is "an arrogant attempt to belittle and humiliate trans people."

Both "transsexualism" and "gender identity disorders not resulting from physical impairments" are specifically excluded from coverage under the Americans with Disabilities Act, Section 12211. Gender dysphoria is not excluded.
Employment issues
Openly transsexual people can have difficulty maintaining employment. Most find it necessary to remain employed during transition in order to cover the costs of living and transition. However, employment discrimination against trans people is rampant, and many of them are fired when they come out or are involuntarily outed at work. Transsexual people must decide whether to transition on the job, or to find a new job when they make their social transition. Other stresses that transsexual people face in the workplace include fear of coworkers responding negatively to their transition and loss of job experience under a previous name; even deciding which restroom to use can prove challenging. Finding employment can be especially challenging for those in mid-transition.
Laws regarding name and gender changes in many countries make it difficult for transsexual people to conceal their trans status from their employers. Because the Harry Benjamin Standards of Care require one year of real-life experience prior to SRS, some feel this creates a Catch-22 situation which makes it difficult for trans people to remain employed or obtain SRS.
In many countries, laws provide protection from workplace discrimination based on gender identity or gender expression, including masculine women and feminine men. An increasing number of companies are including "gender identity and expression" in their non-discrimination policies. Often these laws and policies do not cover all situations and are not strictly enforced. California's anti-discrimination laws protect transsexual persons in the workplace and specifically prohibit employers from terminating or refusing to hire a person based on their transsexuality. The European Union provides employment protection as part of gender discrimination protections following the European Court of Justice decision in P v S and Cornwall County Council.

In the United States National Transgender Discrimination Survey, 44% of respondents reported not getting a job they applied for because of being transgender. 36% of trans women reported losing a job due to discrimination, compared to 19% of trans men. 54% of trans women and 50% of trans men report having been harassed in the workplace. Transgender people who have been fired due to bias are more than 34 times as likely as members of the general population to attempt suicide.
Stealth
Many transsexual men and women choose to live completely as members of their gender without disclosing details of their birth-assigned sex. This approach is sometimes called stealth. Stealth transsexuals choose not to disclose their past for numerous reasons, including fear of discrimination and fear of physical violence.: 63 There are examples of people having been denied medical treatment upon discovery of their trans status, whether it was revealed by the patient or inadvertently discovered by the doctors.
In the media
Before transsexual people were depicted in popular movies and television shows, Aleshia Brevard—a transsexual woman whose surgery took place in 1962: 3 —was actively working as an actress: 141 and model: 200 in Hollywood and New York throughout the 1960s and 70s. Aleshia never portrayed a transsexual person, though she appeared in eight Hollywood-produced films, on most of the popular variety shows of the day, including The Dean Martin Show, and was a regular on The Red Skelton Show and One Life to Live before returning to university to teach drama and acting.
In pageantry
Since 2004, with the goal of crowning the top transsexual of the world, a beauty pageant by the name of The World's Most Beautiful Transsexual Contest was held in Las Vegas, Nevada. The pageant accepted pre-operation and post-operation trans women, but required proof of their gender at birth. The winner of the 2004 pageant was a woman named Mimi Marks.

Jenna Talackova, the 23-year-old woman who forced Donald Trump and his Miss Universe Canada pageant to end its ban on transgender contestants, competed in the pageant on May 19, 2012, in Toronto. On January 12, 2013, Kylan Arianna Wenzel became the first transgender woman allowed to compete in a Miss Universe Organization pageant since Donald Trump changed the rules to allow women like Wenzel to enter officially; officials had disqualified 23-year-old Miss Canada Jenna Talackova the previous year after learning she was transgender.
See also
List of transgender-related topics
List of transgender-rights organizations
List of LGBT-related organizations
List of transgender people
Transgender
References
Bibliography
Benjamin, Harry (1966). The Transsexual Phenomenon. Julian Press, Incorporated Publishers. OCLC 1138665289.
Brown, Mildred L.; Chloe Ann Rounsley (1996). True Selves: Understanding Transsexualism – For Families, Friends, Coworkers, and Helping Professionals. Jossey-Bass. ISBN 978-0-7879-6702-4. OCLC 51437864.
Feinberg, Leslie (1999). Trans Liberation : Beyond Pink or Blue. Beacon Press. ISBN 978-0-8070-7951-5. OCLC 38732343.
Standards of Care for the Health of Transsexual, Transgender, and Gender-Nonconforming People (PDF) (Report). 7. World Professional Association for Transgender Health. 2012. Archived (PDF) from the original on 11 May 2022.
Kruijver, Frank P. M.; Zhou, Jiang-Ning; Pool, Chris W.; Hofman, Michel A.; Gooren, Louis J. G.; Swaab, Dick F. (1 May 2000). "Male-to-Female Transsexuals Have Female Neuron Numbers in a Limbic Nucleus". The Journal of Clinical Endocrinology and Metabolism. The Endocrine Society. 85 (5): 2034–41. doi:10.1210/jcem.85.5.6564. ISSN 0021-972X. PMID 10843193. Archived from the original on 6 February 2007. Retrieved 25 February 2007.
Rathus, Spencer A.; Jeffery S. Nevid, Lois Fichner-Rathus (2002). Human Sexuality in a World of Diversity. Allyn & Bacon. ISBN 978-0-205-40615-9. OCLC 55502508.
Schreiber, Gerhard (2016). Transsexuality in Theology and Neuroscience. Findings, Controversies, and Perspectives (in German). Walter de Gruyter. </s>