dataset (stringclasses, 1 value) | id (stringlengths, 1-5) | messages (listlengths, 2-2) |
---|---|---|
query-laysum | 1010 | [
{
"role": "user",
"content": "Abstract: Background\nMost tobacco control programmes for adolescents are based around prevention of uptake, but teenage smoking is still common. It is unclear if interventions that are effective for adults can also help adolescents to quit. This is the update of a Cochrane Review first published in 2006.\nObjectives\nTo evaluate the effectiveness of strategies that help young people to stop smoking tobacco.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group's Specialized Register in June 2017. This includes reports for trials identified in CENTRAL, MEDLINE, Embase and PsyclNFO.\nSelection criteria\nWe included individually and cluster-randomized controlled trials recruiting young people, aged under 20 years, who were regular tobacco smokers. We included any interventions for smoking cessation; these could include pharmacotherapy, psycho-social interventions and complex programmes targeting families, schools or communities. We excluded programmes primarily aimed at prevention of uptake. The primary outcome was smoking status after at least six months' follow-up among those who smoked at baseline.\nData collection and analysis\nTwo review authors independently assessed the eligibility of candidate trials and extracted data. We evaluated included studies for risk of bias using standard Cochrane methodology and grouped them by intervention type and by the theoretical basis of the intervention. Where meta-analysis was appropriate, we estimated pooled risk ratios using a Mantel-Haenszel fixed-effect method, based on the quit rates at six months' follow-up.\nMain results\nForty-one trials involving more than 13,000 young people met our inclusion criteria (26 individually randomized controlled trials and 15 cluster-randomized trials). We judged the majority of studies to be at high or unclear risk of bias in at least one domain. Interventions were varied, with the majority adopting forms of individual or group counselling, with or without additional self-help materials to form complex interventions. Eight studies used primarily computer or messaging interventions, and four small studies used pharmacological interventions (nicotine patch or gum, or bupropion). There was evidence of an intervention effect for group counselling (9 studies, risk ratio (RR) 1.35, 95% confidence interval (CI) 1.03 to 1.77), but not for individual counselling (7 studies, RR 1.07, 95% CI 0.83 to 1.39), mixed delivery methods (8 studies, RR 1.26, 95% CI 0.95 to 1.66) or the computer or messaging interventions (pooled RRs between 0.79 and 1.18, 9 studies in total). There was no clear evidence for the effectiveness of pharmacological interventions, although confidence intervals were wide (nicotine replacement therapy 3 studies, RR 1.11, 95% CI 0.48 to 2.58; bupropion 1 study RR 1.49, 95% CI 0.55 to 4.02). No subgroup precluded the possibility of a clinically important effect. Studies of pharmacotherapies reported some adverse events considered related to study treatment, though most were mild, whereas no adverse events were reported in studies of behavioural interventions. Our certainty in the findings for all comparisons is low or very low, mainly because of the clinical heterogeneity of the interventions, imprecision in the effect size estimates, and issues with risk of bias.\nAuthors' conclusions\nThere is limited evidence that either behavioural support or smoking cessation medication increases the proportion of young people that stop smoking in the long-term. 
Findings are most promising for group-based behavioural interventions, but evidence remains limited for all intervention types. There continues to be a need for well-designed, adequately powered, randomized controlled trials of interventions for this population of smokers.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Worldwide, between 80,000 and 100,000 young people start smoking every day. Many adolescent tobacco programmes focus on preventing teenagers from starting to smoke, but some programmes have been aimed at helping those teenagers who are already smoking to quit. We set out to investigate whether these programmes can help young people quit smoking for six months or longer. Searches are up to date as of June 2017."
}
] |
query-laysum | 1011 | [
{
"role": "user",
"content": "Abstract: Background\nSeveral types of pressure sources, including underwater bubble devices, mechanical ventilators, and the Infant Flow Driver, are used for providing continuous positive airway pressure (CPAP) to preterm infants with respiratory distress. It is unclear whether the use of bubble CPAP versus other pressure sources is associated with lower rates of CPAP treatment failure, or mortality and other morbidity.\nObjectives\nTo assess the benefits and harms of bubble CPAP versus other pressure sources (mechanical ventilators or Infant Flow Driver) for reducing treatment failure and associated morbidity and mortality in newborn preterm infants with or at risk of respiratory distress.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2023, Issue 1); MEDLINE (1946 to 6 January 2023), Embase (1974 to 6 January 2023), Maternity & Infant Care Database (1971 to 6 January 2023), and the Cumulative Index to Nursing and Allied Health Literature (1982 to 6 January 2023). We searched clinical trials databases and the reference lists of retrieved articles.\nSelection criteria\nWe included randomised controlled trials comparing bubble CPAP with other pressure sources (mechanical ventilators or Infant Flow Driver) for the delivery of nasal CPAP to preterm infants.\nData collection and analysis\nWe used standard Cochrane methods. Two review authors separately evaluated trial quality, extracted data, and synthesised effect estimates using risk ratio (RR), risk difference (RD), and mean difference. We used the GRADE approach to assess the certainty of the evidence for effects on treatment failure, all-cause mortality, neurodevelopmental impairment, pneumothorax, moderate-severe nasal trauma, and bronchopulmonary dysplasia.\nMain results\nWe included 15 trials involving a total of 1437 infants. All trials were small (median number of participants 88). The methods used to generate the randomisation sequence and ensure allocation concealment were unclear in about half of the trial reports. Lack of measures to blind caregivers or investigators was a potential source of bias in all of the included trials. The trials took place during the past 25 years in care facilities internationally, predominantly in India (five trials) and Iran (four trials). The studied pressure sources were commercially available bubble CPAP devices versus a variety of mechanical ventilator (11 trials) or Infant Flow Driver (4 trials) devices.\nMeta-analyses suggest that the use of bubble CPAP compared with mechanical ventilator or Infant Flow Driver CPAP may reduce the rate of treatment failure (RR 0.76, 95% confidence interval (CI) 0.60 to 0.95; (I² = 31%); RD −0.05, 95% CI −0.10 to −0.01; number needed to treat for an additional beneficial outcome 20, 95% CI 10 to 100; 13 trials, 1230 infants; low certainty evidence). The type of pressure source may not affect mortality prior to hospital discharge (RR 0.93, 95% CI 0.64 to 1.36 (I² = 0%); RD −0.01, 95% CI −0.04 to 0.02; 10 trials, 1189 infants; low certainty evidence). No data were available on neurodevelopmental impairment. Meta-analysis suggests that the pressure source may not affect the risk of pneumothorax (RR 0.73, 95% CI 0.40 to 1.34 (I² = 0%); RD −0.01, 95% CI −0.03 to 0.01; 14 trials, 1340 infants; low certainty evidence). 
Bubble CPAP likely increases the risk of moderate-severe nasal injury (RR 2.29, 95% CI 1.37 to 3.82 (I² = 17%); RD 0.07, 95% CI 0.03 to 0.11; number needed to treat for an additional harmful outcome 14, 95% CI 9 to 33; 8 trials, 753 infants; moderate certainty evidence). The pressure source may not affect the risk of bronchopulmonary dysplasia (RR 0.76, 95% CI 0.53 to 1.10 (I² = 0%); RD −0.04, 95% CI −0.09 to 0.01; 7 trials, 603 infants; low certainty evidence).\nAuthors' conclusions\nGiven the low level of certainty about the effects of bubble CPAP versus other pressure sources on the risk of treatment failure and most associated morbidity and mortality for preterm infants, further large, high-quality trials are needed to provide evidence of sufficient validity and applicability to inform context- and setting-relevant policy and practice.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 15 trials that compared the use of bubble CPAP versus ventilator or Infant Flow Driver CPAP in a total of 1437 preterm babies. Trials were mostly small, and had design flaws that could put their findings at risk of bias."
}
] |
query-laysum | 1012 | [
{
"role": "user",
"content": "Abstract: Background\nLarviciding refers to the regular application of chemical or microbial insecticides to water bodies or water containers to kill the aquatic immature forms of the mosquito (the larvae and pupae).\nObjectives\nTo summarize research evidence evaluating whether larviciding with chemical or microbial insecticides prevents malaria transmission.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL), published in the Cochrane Library; MEDLINE; Embase; CAB Abstracts; LILACS; the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP); ClinicalTrials.gov; and the ISRCTN registry up to 6 June 2019.\nSelection criteria\nWe included cluster-randomized controlled trials (cRCTs), interrupted time series (ITS), randomized cross-over studies, non-randomized cross-over studies, and controlled before-and-after studies (CBAs) that compared larviciding with no larviciding.\nData collection and analysis\nWe independently assessed trials for eligibility and risk of bias, and extracted data. We assessed the certainty of evidence using the GRADE approach.\nMain results\nFour studies (one cRCT, two CBAs, and one non-randomized cross-over design) met the inclusion criteria. All used ground application of larvicides (people hand-delivering larvicides); one evaluated chemical and three evaluated microbial agents. Studies were carried out in The Gambia, Tanzania, Kenya, and Sri Lanka. Three studies were conducted in areas where mosquito aquatic habitats were less extensive (< 1 km²), and one where habitats were more extensive (> 1 km²; a cross-over study from The Gambia).\nFor aquatic habitats of less than 1 km², one cRCT randomized eight villages in Sri Lanka to evaluate chemical larviciding using insect growth regulator; and two CBA studies undertaken in Kenya and Tanzania evaluated microbial larvicides. In the cRCT, larviciding across all villages was associated with lower malaria incidence (rate ratio 0.24, 4649 participants, low-certainty evidence) and parasite prevalence (risk ratio (RR) 0.26, 5897 participants, low-certainty evidence) compared to no larviciding. The two CBA studies reported lower malaria prevalence during the intervention period (parasite prevalence RR 0.79, 95% confidence interval (CI) 0.71 to 0.89; 70,902 participants; low-certainty evidence). The Kenyan study also reported a reduction in the incidence of new malaria cases (RR 0.62, 95% CI 0.38 to 1.01; 720 participants; very low-certainty evidence).\nFor aquatic habitats of more than 1 km², the non-randomized cross-over trial using microbial larvicides did not detect an effect for malaria incidence (RR 1.58, 95% CI 0.94 to 2.65; 4226 participants), or parasite prevalence (RR 1.15, 95% CI 0.41 to 3.20; 3547 participants); both were very low-certainty evidence. The Gambia trial also reported the mean haemoglobin level, and there was no difference across the four comparisons (mean difference –0.13, 95% CI –0.40 to 0.13; 3586 participants).\nWe were unable to summarize or pool entomological outcomes due to unreported and missing data.\nAuthors' conclusions\nMost controlled studies on larviciding have been performed with microbial agents. Ground larviciding for non-extensive larval habitats may have an effect on malaria transmission, and we do not know if there is an effect in large-scale aquatic habitats. 
We found no studies using larviciding application techniques that could cover large aquatic habitats, such as aerial spraying using aircraft.\n\nGiven the provided abstract, please respond to the following query: \"What was the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Larviciding is the regular application of microbial or chemical insecticides to water bodies or water containers. The aim of larviciding is to reduce the adult population of mosquitoes by killing the aquatic immature forms, so that fewer will develop into adults. This should reduce the number of mosquitoes that bite and infect humans with malaria."
}
] |
query-laysum | 1013 | [
{
"role": "user",
"content": "Abstract: Background\nViral load (VL) testing in people living with HIV (PLHIV) helps to monitor antiretroviral therapy (ART). VL is still largely tested using central laboratory-based platforms, which have long test turnaround times and involve sophisticated equipment. VL tests with point-of-care (POC) platforms capable of being used near the patient are potentially easy to use, give quick results, are cost-effective, and could replace central or reference VL testing platforms.\nObjectives\nTo estimate the diagnostic accuracy of POC tests to detect high viral load levels in PLHIV attending healthcare facilities.\nSearch methods\nWe searched eight electronic databases using standard, extensive Cochrane search methods, and did not use any language, document type, or publication status limitations. We also searched the reference lists of included studies and relevant systematic reviews, and consulted an expert in the field from the World Health Organization (WHO) HIV Department for potentially relevant studies. The latest search was 23 November 2020.\nSelection criteria\nWe included any primary study that compared the results of a VL test with a POC platform to that of a central laboratory-based reference test to detect high viral load in PLHIV on HIV/AIDS care or follow-up. We included all forms of POC tests for VL as defined by study authors, regardless of the healthcare facility in which the test was conducted. We excluded diagnostic case-control studies with healthy controls and studies that did not provide sufficient data to create the 2 × 2 tables to calculate sensitivity and specificity. We did not limit our study inclusion to age, gender, or geographical setting.\nData collection and analysis\nTwo review authors independently screened the titles, abstracts, and full texts of the search results to identify eligible articles. They also independently extracted data using a standardized data extraction form and conducted risk of bias assessment using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Using participants as the unit of analysis, we fitted simplified univariable models for sensitivity and specificity separately, employing a random-effects model to estimate the summary sensitivity and specificity at the current and commonly reported World Health Organization (WHO) threshold (≥ 1000 copies/mL). The bivariate models did not converge to give a model estimate.\nMain results\nWe identified 18 studies (24 evaluations, 10,034 participants) defining high viral loads at main thresholds ≥ 1000 copies/mL (n = 20), ≥ 5000 copies/mL (n = 1), and ≥ 40 copies/mL (n = 3). All evaluations were done on samples from PLHIV retrieved from routine HIV/AIDS care centres or health facilities. For clinical applicability, we included 14 studies (20 evaluations, 8659 participants) assessing high viral load at the clinical threshold of ≥ 1000 copies/mL in the meta-analyses. Of these, sub-Saharan Africa, Europe, and Asia contributed 16, three, and one evaluation respectively. All included participants were on ART in only nine evaluations; in the other 11 evaluations the proportion of participants on ART was either partial or not clearly stated. Thirteen evaluations included adults only (n = 13), five mixed populations of adults and children, whilst in the remaining two the age of included populations was not clearly stated. The majority of evaluations included commercially available tests (n = 18). 
Ten evaluations were POC VL tests conducted near the patient in a peripheral or onsite laboratory, whilst the other 10 were evaluations of POC VL tests in a central or reference laboratory setting. The test types evaluated as POC VL tests included Xpert HIV-1 Viral Load test (n = 8), SAMBA HIV-1 Semi-Q Test (n = 9), Alere Q NAT prototype assay for HIV-1 (n = 2) and m-PIMA HIV-1/2 Viral Load test (n = 1). The majority of evaluations (n = 17) used plasma samples, whilst the rest (n = 3) utilized whole blood samples.\nPooled sensitivity (95% confidence interval (CI)) of POC VL at a threshold of ≥ 1000 copies/mL was 96.6% (94.8 to 97.8) (20 evaluations, 2522 participants), and pooled specificity (95% CI) was 95.7% (90.8 to 98.0) (20 evaluations, 6137 participants). Median prevalence for high viral load (≥ 1000 copies/mL) (n = 20) was 33.4% (range 6.9% to 88.5%).\nLimitations\nThe risk of bias was mostly assessed as unclear across the four domains due to incomplete reporting.\nAuthors' conclusions\nWe found POC VL to have high sensitivity and high specificity for the diagnosis of high HIV viral load in PLHIV attending healthcare facilities at a clinical threshold of ≥ 1000 copies/mL.\n\nGiven the provided abstract, please respond to the following query: \"What are the strengths and limitations of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review included sufficient studies done on samples from PLHIV retrieved from routine HIV/AIDS care centres or health facilities, but it was unclear if all included participants were on ART. Also, none of the included tests was a true POC test conducted at the patient's side: half of the included studies (n = 10) evaluated POC tests in onsite laboratories near the patient, and the other half were tests with POC platforms evaluated in a central or reference laboratory (n = 10)."
}
] |
query-laysum | 1015 | [
{
"role": "user",
"content": "Abstract: Background\nDental caries (tooth decay) is a common disease that is preventable by reducing the dietary intake of free sugars and using topical sodium fluoride products. An antibacterial agent known as chlorhexidine may also help prevent caries. A number of over-the-counter and professionally administered chlorhexidine-based preparations are available in a variety of formulations and in a range of strengths. Although previous reviews have concluded that some formulations of chlorhexidine may be effective in inhibiting the progression of established caries in children, there is currently a lack of evidence to either claim or refute a benefit for its use in preventing dental caries.\nObjectives\nTo assess the effects of chlorhexidine-containing oral products (toothpastes, mouthrinses, varnishes, gels, gums and sprays) on the prevention of dental caries in children and adolescents.\nSearch methods\nWe searched the Cochrane Oral Health Group Trials Register (25 February 2015), the Cochrane Central Register of Controlled Trials (CENTRAL; 2014, Issue 12), MEDLINE via OVID (1946 to 25 February 2015), EMBASE via OVID (1980 to 25 February 2015) and CINAHL via EBSCO (1937 to 25 February 2015). We handsearched several journals placed no language restrictions on our search. After duplicate citations were removed, the electronic searches retrieved 1075 references to studies.\nSelection criteria\nWe included parallel-group, randomised controlled trials (RCTs) that compared the caries preventive effects of chlorhexidine gels, toothpastes, varnishes, mouthrinses, chewing gums or sprays with each other, placebo or no intervention in children and adolescents. We excluded trials with combined interventions of chlorhexidine and fluoride or comparisons between chlorhexidine and fluoride interventions.\nData collection and analysis\nTwo review authors independently extracted trial data and assessed risk of bias. We resolved disagreements by consensus. We contacted trial authors for clarification or additional study details when necessary. The number of included studies that were suitable for meta-analysis was limited due to the clinical diversity of the included studies with respect to age, composition of intervention, and variation in outcome measures and follow-up. Where we were unable to conduct meta-analysis, we elected to present a narrative synthesis of the results.\nMain results\nWe included eight RCTs that evaluated the effects of chlorhexidine varnishes (1%, 10% or 40% concentration) and chlorhexidine gel (0.12%) on the primary or permanent teeth, or both, of children from birth to 15 years of age at the start of the study. The studies randomised a total of 2876 participants, of whom 2276 (79%) were evaluated. We assessed six studies as being at high risk of bias overall and two studies as being at unclear risk of bias overall. Follow-up assessment ranged from 6 to 36 months.\nSix trials compared chlorhexidine varnish with placebo or no treatment. It was possible to pool the data from two trials in the permanent dentition (one study using 10% chlorhexidine and the other, 40%). This led to an increase in the DMFS increment in the varnish group of 0.53 (95% confidence interval (CI) -0.47 to 1.53; two trials, 690 participants; very low quality evidence). Only one trial (10% concentration chlorhexidine varnish) provided usable data for elevated mutans streptococci levels > 4 with RR 0.93 (95% CI 0.80 to 1.07, 496 participants; very low quality evidence). 
One trial measured adverse effects (for example, ulcers or tooth staining) and reported that there were none; another trial reported that no side effects of the treatment were noted. No trials reported on pain, quality of life, patient satisfaction or costs.\nTwo trials compared chlorhexidine gel (0.12% concentration) with no treatment in the primary dentition. The presence of new caries gave rise to a 95% confidence interval that was compatible with either an increase or a decrease in caries incidence (RR 1.00, 95% CI 0.36 to 2.77; 487 participants; very low quality evidence). Similarly, data for the effects of chlorhexidine gel on the prevalence of mutans streptococci were inconclusive (RR 1.26, 95% CI 0.95 to 1.66; two trials, 490 participants; very low quality evidence). Both trials measured adverse effects and did not observe any. Neither of these trials reported on the other secondary outcomes such as measures of pain, quality of life, patient satisfaction or direct and indirect costs of interventions.\nAuthors' conclusions\nWe found little evidence from the eight trials on varnishes and gels included in this review to either support or refute the assertion that chlorhexidine is more effective than placebo or no treatment in the prevention of caries or the reduction of mutans streptococci levels in children and adolescents. There were no trials on other products containing chlorhexidine such as sprays, toothpastes, chewing gums or mouthrinses. Further high quality research is required, in particular evaluating the effects on both the primary and permanent dentition and using other chlorhexidine-containing oral products.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence in this review, carried out through the Cochrane Oral Health Group, is up-to-date at 25 February 2015. We found eight studies that were suitable to include in this review. The studies involved a total of 2876 children from birth to 15 years of age who were at moderate to high risk of tooth decay. Six of the studies looked at the effects of dental professionals applying different strengths of chlorhexidine varnishes to the baby teeth, permanent teeth or both types of teeth in children and adolescents. The other two studies looked at the effects of parents placing chlorhexidine gel on their children's baby teeth. There were no studies that examined other products containing chlorhexidine, such as sprays, toothpastes, chewing gums or mouthrinses."
}
] |
query-laysum | 1016 | [
{
"role": "user",
"content": "Abstract: Background\nMultiple-micronutrient (MMN) deficiencies often coexist among women of reproductive age in low- and middle-income countries. They are exacerbated in pregnancy due to the increased demands of the developing fetus, leading to potentially adverse effects on the mother and baby. A consensus is yet to be reached regarding the replacement of iron and folic acid supplementation with MMNs. Since the last update of this Cochrane Review in 2017, evidence from several trials has become available. The findings of this review will be critical to inform policy on micronutrient supplementation in pregnancy.\nObjectives\nTo evaluate the benefits of oral multiple-micronutrient supplementation during pregnancy on maternal, fetal and infant health outcomes.\nSearch methods\nFor this 2018 update, on 23 February 2018 we searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP), and reference lists of retrieved studies. We also contacted experts in the field for additional and ongoing trials.\nSelection criteria\nAll prospective randomised controlled trials evaluating MMN supplementation with iron and folic acid during pregnancy and its effects on pregnancy outcomes were eligible, irrespective of language or the publication status of the trials. We included cluster-randomised trials, but excluded quasi-randomised trials. Trial reports that were published as abstracts were eligible.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We assessed the quality of the evidence using the GRADE approach.\nMain results\nWe identified 21 trials (involving 142,496 women) as eligible for inclusion in this review, but only 20 trials (involving 141,849 women) contributed data. Of these 20 trials, 19 were conducted in low- and middle-income countries and compared MMN supplements with iron and folic acid to iron, with or without folic acid. One trial conducted in the UK compared MMN supplementation with placebo. In total, eight trials were cluster-randomised.\nMMN with iron and folic acid versus iron, with or without folic acid (19 trials)\nMMN supplementation probably led to a slight reduction in preterm births (average risk ratio (RR) 0.95, 95% confidence interval (CI) 0.90 to 1.01; 18 trials, 91,425 participants; moderate-quality evidence), and babies considered small-for-gestational age (SGA) (average RR 0.92, 95% CI 0.88 to 0.97; 17 trials; 57,348 participants; moderate-quality evidence), though the CI for the pooled effect for preterm births just crossed the line of no effect. MMN reduced the number of newborn infants identified as low birthweight (LBW) (average RR 0.88, 95% CI 0.85 to 0.91; 18 trials, 68,801 participants; high-quality evidence). We did not observe any differences between groups for perinatal mortality (average RR 1.00, 95% CI 0.90 to 1.11; 15 trials, 63,922 participants; high-quality evidence). MMN supplementation led to slightly fewer stillbirths (average RR 0.95, 95% CI 0.86 to 1.04; 17 trials, 97,927 participants; high-quality evidence) but, again, the CI for the pooled effect just crossed the line of no effect. MMN supplementation did not have an important effect on neonatal mortality (average RR 1.00, 95% CI 0.89 to 1.12; 14 trials, 80,964 participants; high-quality evidence). 
We observed little or no difference between groups for the other maternal and pregnancy outcomes: maternal anaemia in the third trimester (average RR 1.04, 95% CI 0.94 to 1.15; 9 trials, 5912 participants), maternal mortality (average RR 1.06, 95% CI 0.72 to 1.54; 6 trials, 106,275 participants), miscarriage (average RR 0.99, 95% CI 0.94 to 1.04; 12 trials, 100,565 participants), delivery via a caesarean section (average RR 1.13, 95% CI 0.99 to 1.29; 5 trials, 12,836 participants), and congenital anomalies (average RR 1.34, 95% CI 0.25 to 7.12; 2 trials, 1958 participants). However, MMN supplementation probably led to a reduction in very preterm births (average RR 0.81, 95% CI 0.71 to 0.93; 4 trials, 37,701 participants). We were unable to assess a number of prespecified, clinically important outcomes due to insufficient or non-available data.\nWhen we assessed primary outcomes according to GRADE criteria, the quality of evidence for the review overall was moderate to high. We graded the following outcomes as high quality: LBW, perinatal mortality, stillbirth, and neonatal mortality. The outcomes of preterm birth and SGA we graded as moderate quality; both were downgraded for funnel plot asymmetry, indicating possible publication bias.\nWe carried out sensitivity analyses excluding trials with high levels of sample attrition (> 20%). We found that results were consistent with the main analyses for all outcomes. We explored heterogeneity through subgroup analyses by maternal height, maternal body mass index (BMI), timing of supplementation, dose of iron, and MMN supplement formulation (UNIMMAP versus non-UNIMMAP). There was a greater reduction in preterm births for women with low BMI and among those who took non-UNIMMAP supplements. We also observed subgroup differences for maternal BMI and maternal height for SGA, indicating greater impact among women with greater BMI and height. Though we found that MMN supplementation made little or no difference to perinatal mortality, the analysis demonstrated substantial statistical heterogeneity. We explored this heterogeneity using subgroup analysis and found differences for timing of supplementation, whereby higher impact was observed with later initiation of supplementation. For all other subgroup analyses, the findings were inconclusive.\nMMN versus placebo (1 trial)\nA single trial in the UK found little or no important effect of MMN supplementation on preterm births, SGA, or LBW but did find a reduction in maternal anaemia in the third trimester (RR 0.66, 95% CI 0.51 to 0.85), when compared to placebo. This trial did not measure our other outcomes.\nAuthors' conclusions\nOur findings suggest a positive impact of MMN supplementation with iron and folic acid on several birth outcomes. MMN supplementation in pregnancy led to a reduction in babies considered LBW, and probably led to a reduction in babies considered SGA. In addition, MMN probably reduced preterm births. No important benefits or harms of MMN supplementation were found for mortality outcomes (stillbirths, perinatal and neonatal mortality). These findings may provide some basis to guide the replacement of iron and folic acid supplements with MMN supplements for pregnant women residing in low- and middle-income countries.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Combining multiple micronutrients into one supplement has been suggested as a cost-effective way to achieve multiple benefits for women during pregnancy. Micronutrient deficiencies are known to interact, and a greater effect may be achieved by multiple supplementation rather than single-nutrient supplementation. However, interactions could also lead to poor absorption of some of the nutrients. High doses of some nutrients may also cause harm to the mother or her baby."
}
] |
query-laysum | 1017 | [
{
"role": "user",
"content": "Abstract: Background\nCommunication and language development are areas of particular weakness for young children with Down syndrome. Caregivers' interaction with children influences language development, so many early interventions involve training parents how best to respond to their children and provide appropriate language stimulation. Thus, these interventions are mediated through parents, who in turn are trained and coached in the implementation of interventions by clinicians. As the interventions involve a considerable commitment from clinicians and families, we undertook this review to synthesise the evidence of their effectiveness.\nObjectives\nTo assess the effects of parent-mediated interventions for improving communication and language development in young children with Down syndrome. Other outcomes are parental behaviour and responsivity, parental stress and satisfaction, and children's non-verbal means of communicating, socialisation and behaviour.\nSearch methods\nIn January 2018 we searched CENTRAL, MEDLINE, Embase and 14 other databases. We also searched three trials registers, checked the reference lists of relevant reports identified by the electronic searches, searched the websites of professional organizations, and contacted their staff and other researchers working in the field to identify other relevant published, unpublished and ongoing studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs that compared parent-mediated interventions designed to improve communication and language versus teaching/treatment as usual (TAU) or no treatment or delayed (wait-listed) treatment, in children with Down syndrome aged between birth and six years. We included studies delivering the parent-mediated intervention in conjunction with a clinician-mediated intervention, as long as the intervention group was the only group to receive the former and both groups received the latter.\nData collection and analysis\nWe used standard Cochrane methodological procedures for data collection and analysis.\nMain results\nWe included three studies involving 45 children aged between 29 months and six years with Down syndrome. Two studies compared parent-mediated interventions versus TAU; the third compared a parent-mediated plus clinician-mediated intervention versus a clinician-mediated intervention alone. Treatment duration varied from 12 weeks to six months. One study provided nine group sessions and four individualised home-based sessions over a 13-week period. Another study provided weekly, individual clinic-based or home-based sessions lasting 1.5 to 2 hours, over a six-month period. The third study provided one 2- to 3-hour group session followed by bi-weekly, individual clinic-based sessions plus once-weekly home-based sessions for 12 weeks. Because of the different study designs and outcome measures used, we were unable to conduct a meta-analysis.\nWe judged all three studies to be at high risk of bias in relation to blinding of participants (not possible due to the nature of the intervention) and blinding of outcome assessors, and at an unclear risk of bias for allocation concealment. 
We judged one study to be at unclear risk of selection bias, as authors did not report the methods used to generate the random sequence; at high risk of reporting bias, as they did not report on one assessed outcome; and at high risk of detection bias, as the control group had a cointervention and only parents in the intervention group were made aware of the target words for their children. The sample sizes of each included study were very small, meaning that they are unlikely to be representative of the target population.\nThe findings from the three included studies were inconsistent. Two studies found no differences in expressive or receptive language abilities between the groups, whether measured by direct assessment or parent reports. However, they did find that children in the intervention group could use more targeted vocabulary items or utterances with language targets in certain contexts postintervention, compared to those in the control group; this was not maintained 12 months later. The third study found gains for the intervention group on total-language measures immediately postintervention.\nOne study did not find any differences in parental stress scores between the groups at any time point up to 12 months postintervention. All three studies noted differences in most measures of how the parents talked to and interacted with their children postintervention, and in one study most strategies were maintained in the intervention group at 12 months postintervention. No study reported evidence of language attrition following the intervention in either group, while one study found positive outcomes on children's socialisation skills in the intervention group. One study looked at adherence to the treatment through attendance data, finding that mothers in the intervention group attended seven out of nine group sessions and were present for four home visits. No study measured parental use of the strategies outside of the intervention sessions.\nA grant from the Hospital for Sick Children Foundation (Toronto, Ontario, Canada) funded one study. Another received partial funding from the National Institute of Child Health and Human Development and the Department of Education in the USA. The remaining study did not specify any funding sources.\nIn light of the serious limitations in methodology, and the small number of studies included, we considered the overall quality of the evidence, as assessed by GRADE, to be very low. This means that we have very little confidence in the results, and further research is very likely to have an important impact on our confidence in the estimate of treatment effect.\nAuthors' conclusions\nThere is currently insufficient evidence to determine the effects of parent-mediated interventions for improving the language and communication of children with Down syndrome. We found only three small studies of very low quality. This review highlights the need for well-designed studies, including RCTs, to evaluate the effectiveness of parent-mediated interventions. Trials should use valid, reliable and similar measures of language development, and they should include measures of secondary outcomes more distal to the intervention, such as family well-being. Treatment fidelity, in particular parental dosage of the intervention outside of prescribed sessions, also needs to be documented.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Do parent-mediated interventions improve communication and language development in young children with Down syndrome?"
}
] |
query-laysum | 1018 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of a Cochrane Review first published in 2015. The conclusions have not changed.\nHypodermic needles of different sizes (gauges and lengths) can be used for vaccination procedures. The gauge (G) refers to the outside diameter of the needle tubing. The higher the gauge number, the smaller the diameter of the needle (e.g. a 23 G needle is 0.6 mm in diameter, whereas a 25 G needle is 0.5 mm in diameter). Many vaccines are recommended for injection into muscle (intramuscularly), although some are delivered subcutaneously (under the skin) and intradermally (into skin). Choosing an appropriate length and gauge of a needle may be important to ensure that a vaccine is delivered to the appropriate site and produces the maximum immune response while causing the least possible harm. Guidelines conflict regarding the sizes of needles that should be used for vaccinating children and adolescents.\nObjectives\nTo assess the effects of using needles of different sizes for administering vaccines to children and adolescents on vaccine immunogenicity (the ability of the vaccine to elicit an immune response), procedural pain, and other reactogenicity events (adverse events following vaccine administration).\nSearch methods\nWe updated our searches of CENTRAL, MEDLINE, Embase, and CINAHL to October 2017. We also searched proceedings of vaccine conferences and two trials registers.\nSelection criteria\nRandomised controlled trials evaluating the effects of using hypodermic needles of any gauge or length to administer any type of vaccine to people aged from birth to 24 years.\nData collection and analysis\nThree review authors independently extracted trial data and assessed the risk of bias. We contacted trial authors for additional information. We rated the quality of evidence using the GRADE system.\nMain results\nWe included five trials involving 1350 participants in the original review. The updated review identified no new trials. The evidence from two small trials (one trial including infants and one including adolescents) was insufficient to allow any definitive statements to be made about the effects of the needles evaluated in the trials on vaccine immunogenicity and reactogenicity.\nThe remaining three trials (1135 participants) contributed data to comparisons between 25 G 25 mm, 23 G 25 mm, and 25 G 16 mm needles. These trials included infants predominantly aged from two to six months undergoing intramuscular vaccination in the anterolateral thigh using the World Health Organization (WHO) injection technique (skin stretched flat, needle inserted at a 90° angle and up to the needle hub in healthy infants). The vaccines administered were combination vaccines containing diphtheria, tetanus, and whole-cell pertussis antigens (DTwP). In some trials, the vaccines also contained Haemophilus influenzae type b (DTwP-Hib) and hepatitis B (DTwP-Hib-Hep B) antigen components.\nPrimary outcomes\nIncidence of vaccine-preventable diseases: No trials reported this outcome.\nProcedural pain and crying: Using a wider gauge 23 G 25 mm needle may slightly reduce procedural pain (low-quality evidence) and probably leads to a slight reduction in the duration of crying time immediately after vaccination (moderate-quality evidence) compared with a narrower gauge 25 G 25 mm needle (one trial, 320 participants). 
The effects are probably not large enough to be clinically relevant.\nSecondary outcomes\nImmune response: There is probably little or no difference in immune response, defined in terms of the proportion of seroprotected infants, between use of 25 G 25 mm, 23 G 25 mm, or 25 G 16 mm needles to administer a series of three doses of a DTwP-Hib vaccine at ages two, three, and four months (moderate-quality evidence, one trial, numbers of participants in analyses range from 309 to 402. The immune response to the pertussis antigen was not measured).\nSevere and non-severe local reactions: 25 mm needles (either 25 G or 23 G) probably lead to fewer severe and non-severe local reactions after DTwP-Hib vaccination compared with 25 G 16 mm needles (moderate-quality evidence, one trial, 447 to 458 participants in analyses). We estimate that one fewer infant will experience a severe local reaction (extensive redness and swelling) after the first vaccine dose for every 25 infants vaccinated with the longer rather than the shorter needle (number needed to treat for an additional beneficial outcome (NNTB) with a 25 G 25 mm needle: 25 (95% confidence interval (CI) 15 to 100); NNTB with a 23 G 25 mm needle: 25 (95% CI 17 to 100)). We estimate that one fewer infant will experience a non-severe local reaction (any redness, swelling, tenderness, or hardness (composite outcome)) at 24 hours after the first vaccine dose for every 5 or 6 infants vaccinated with a 25 mm rather than a 16 mm needle (NNTB with a 25 G 25 mm needle: 5 (95% CI 4 to 10); NNTB with a 23 G 25 mm needle: 6 (95% CI 4 to 13)). The results are similar after the second and third vaccine doses.\nUsing a narrow gauge 25 G 25 mm needle may produce a small reduction in the incidence of local reactions after each dose of a DTwP vaccine compared with a wider gauge 23 G 25 mm needle, but the effect estimates are imprecise (low-quality evidence, two trials, 100 to 459 participants in analyses).\nSystemic reactions: The comparative effects of 23 G 25 mm, 25 G 25 mm, and 25 G 16 mm needles on the incidence of postvaccination fever and other systemic events such as drowsiness, loss of appetite, and vomiting are uncertain due to the very low quality of the evidence.\nAuthors' conclusions\nUsing 25 mm needles (either 23 G or 25 G) for intramuscular vaccination procedures in the anterolateral thigh of infants using the WHO injection technique probably reduces the occurrence of local reactions while achieving a comparable immune response to 25 G 16 mm needles. These findings are applicable to healthy infants aged two to six months receiving combination DTwP vaccines with a reactogenic whole-cell pertussis antigen component. These vaccines are predominantly used in low- and middle-income countries. The applicability of the findings to vaccines with acellular pertussis components and other vaccines with different reactogenicity profiles is uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found moderate-quality evidence that infants vaccinated in the thigh with 25 mm needles probably have fewer severe reactions (extensive redness and swelling in the thigh) after DTwP-Hib vaccination than infants vaccinated with 16 mm needles. We also found that the longer needles probably lead to fewer non-severe reactions such as mild swelling, tenderness, and redness after vaccination. The immune response to the vaccine is probably similar with the long and the short needles.\nWe found low-quality evidence that the wide, long needle may slightly reduce the pain of the vaccination procedure compared with the narrow, long needle. We found moderate-quality evidence that the wide, long needle probably slightly reduces the duration of crying immediately following vaccination compared with the narrow, long needle. The differences in pain and crying between use of the wide and narrow needles are probably too small to be of any practical importance.\nWe found low-quality evidence that infants vaccinated with the narrow, long needle may have slightly fewer non-severe reactions than infants vaccinated with the wide, long needle.\nWe do not know if needle size has an effect on fever or other reactions that sometimes occur after vaccination including drowsiness, loss of appetite, and vomiting due to the very low quality of the evidence.\nThe evidence in our review is current to October 2017."
}
] |
query-laysum | 1019 | [
{
"role": "user",
"content": "Abstract: Background\nAcute respiratory infections (ARIs) last for less than 30 days and are the most common acute diseases affecting people. Exercise has been shown to improve health generally, but it is uncertain whether exercise may be effective in reducing the occurrence, severity, and duration of ARIs. This is an update of our review published in 2015.\nObjectives\nTo evaluate the effectiveness of exercise for altering the occurrence, severity, or duration of acute respiratory infections.\nSearch methods\nWe searched CENTRAL (2020, Issue 2), MEDLINE (1948 to March week 1, 2020), Embase (1974 to 05 March 2020), CINAHL (1981 to 05 March 2020), LILACS (1982 to 05 March 2020), SPORTDiscus (1985 to 05 March 2020), PEDro (searched 05 March 2020), OTseeker (searched 05 March 2020), and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) and ClinicalTrials.gov (searched 05 March 2020).\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs (method of allocation that is not truly random, e.g. based on date of birth, medical record number) of exercise for ARIs in the general population.\nData collection and analysis\nTwo review authors independently extracted data from the included trials using a standard form. One review author entered data, which a second review author checked. We contacted trial authors to request missing data. There were sufficient differences in the populations trialed and in the nature of the interventions to use the random-effects model (which makes fewer assumptions than the fixed-effect model) in the analysis.\nMain results\nWe included three new trials for this update (473 participants) for a total of 14 trials involving 1377 adults, published between 1990 and 2018. Nine trials were conducted in the USA, and one each in Brazil, Canada, Portugal, Spain, and Turkey.\nSample sizes ranged from 16 to 419 participants, aged from 18 to 85 years. The proportion of female participants ranged from 52% to 100%. Follow-up duration ranged from 1 to 36 weeks (median = 12 weeks).\nModerate-intensity aerobic exercise (walking, bicycling, treadmill, or a combination) was evaluated in 11 trials, and was most commonly prescribed at least three times a week for 30 to 45 minutes.\nThere was no difference between exercise and no exercise in the number of ARI episodes per person per year (risk ratio (RR) 1.00, 95% confidence interval (CI) 0.77 to 1.30; 4 trials; 514 participants; low-certainty evidence); proportion of participants who experienced at least one ARI over the study period (RR 0.88, 95% CI 0.72 to 1.08; 5 trials; 520 participants; low-certainty evidence); and the number of symptom days per episode of illness (mean difference (MD) −0.44 day, 95% CI −2.33 to 1.46; 6 trials; 557 participants; low-certainty evidence). Exercise reduced the severity of ARI symptoms measured on the Wisconsin Upper Respiratory Symptom Survey (WURSS-24) (MD −103.57, 95% CI −198.28 to −8.87; 2 trials; 373 participants; moderate-certainty evidence) and the number of symptom days during follow-up period (MD −2.24 days, 95% CI −3.50 to −0.98; 4 trials; 483 participants; low-certainty evidence).\nExcercise did not have a significant effect on laboratory parameters (blood lymphocytes, salivary secretory immunoglobulin, and neutrophils), quality of life outcomes, cost-effectiveness, and exercise-related injuries. 
There was no difference in participant dropout between the intervention and control groups.\nOverall, the certainty of the evidence was low, downgraded mainly due to limitations in study design and implementation, imprecision, and inconsistency.\nSeven trials were funded by public agencies; five trials did not report funding; and two trials were funded by private companies.\nAuthors' conclusions\nExercise did not reduce the number of ARI episodes, proportion of participants experiencing at least one ARI during the study, or the number of symptom days per episode of illness. However, exercise reduced the severity of ARI symptoms (two studies) and the number of symptom days during the study follow-up period (four studies). Small study size, risk of bias, and heterogeneity in the populations studied contributed to the uncertainty of the findings. Larger trials that are designed to avoid risk of bias associated with participant selection, blinding of outcomes assessors, and with adequate reporting of all outcomes proposed for measurement in trials, would help to provide more robust evidence.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Does exercise change the occurrence, severity, or duration of acute respiratory infections such as the common cold, influenza, cough, and sore throat?"
}
] |
query-laysum | 1020 | [
{
"role": "user",
"content": "Abstract: Background\nChronic heart failure (HF) is a growing global health challenge. People with HF experience substantial burden that includes low exercise tolerance, poor health-related quality of life (HRQoL), increased risk of mortality and hospital admission, and high healthcare costs. The previous (2014) Cochrane systematic review reported that exercise-based cardiac rehabilitation (CR) compared to no exercise control shows improvement in HRQoL and hospital admission among people with HF, as well as possible reduction in mortality over the longer term, and that these reductions appear to be consistent across patient and programme characteristics. Limitations noted by the authors of this previous Cochrane Review include the following: (1) most trials were undertaken in patients with HF with reduced (< 45%) ejection fraction (HFrEF), and women, older people, and those with preserved (≥ 45%) ejection fraction HF (HFpEF) were under-represented; and (2) most trials were undertaken in the hospital/centre-based setting.\nObjectives\nTo determine the effects of exercise-based cardiac rehabilitation on mortality, hospital admission, and health-related quality of life of people with heart failure.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and three other databases on 29 January 2018. We also checked the bibliographies of systematic reviews and two trial registers.\nSelection criteria\nWe included randomised controlled trials that compared exercise-based CR interventions with six months' or longer follow-up versus a no exercise control that could include usual medical care. The study population comprised adults (> 18 years) with evidence of HF - either HFrEF or HFpEF.\nData collection and analysis\nTwo review authors independently screened all identified references and rejected those that were clearly ineligible for inclusion in the review. We obtained full papers of potentially relevant trials. Two review authors independently extracted data from the included trials, assessed their risk of bias, and performed GRADE analyses.\nMain results\nWe included 44 trials (5783 participants with HF) with a median of six months' follow-up. For this latest update, we identified 11 new trials (N = 1040), in addition to the previously identified 33 trials. Although the evidence base includes predominantly patients with HFrEF with New York Heart Association classes II and III receiving centre-based exercise-based CR programmes, a growing body of studies include patients with HFpEF and are undertaken in a home-based setting. All included studies included a no formal exercise training intervention comparator. However, a wide range of comparators were seen across studies that included active intervention (i.e. education, psychological intervention) or usual medical care alone. 
The overall risk of bias of included trials was low or unclear, and we downgraded results using the GRADE tool for all but one outcome.\nCardiac rehabilitation may make little or no difference in all-cause mortality over the short term (≤ one year of follow-up) (27 trials, 28 comparisons (2596 participants): intervention 67/1302 (5.1%) vs control 75/1294 (5.8%); risk ratio (RR) 0.89, 95% confidence interval (CI) 0.66 to 1.21; low-quality GRADE evidence) but may improve all-cause mortality in the long term (> 12 months follow up) (6 trials/comparisons (2845 participants): intervention 244/1418 (17.2%) vs control 280/1427 (19.6%) events): RR 0.88, 95% CI 0.75 to 1.02; high-quality evidence). Researchers provided no data on deaths due to HF. CR probably reduces overall hospital admissions in the short term (up to one year of follow-up) (21 trials, 21 comparisons (2182 participants): (intervention 180/1093 (16.5%) vs control 258/1089 (23.7%); RR 0.70, 95% CI 0.60 to 0.83; moderate-quality evidence, number needed to treat: 14) and may reduce HF-specific hospitalisation (14 trials, 15 comparisons (1114 participants): (intervention 40/562 (7.1%) vs control 61/552 (11.1%) RR 0.59, 95% CI 0.42 to 0.84; low-quality evidence, number needed to treat: 25). After CR, a clinically important improvement in short-term disease-specific health-related quality of life may be evident (Minnesota Living With Heart Failure questionnaire - 17 trials, 18 comparisons (1995 participants): mean difference (MD) -7.11 points, 95% CI -10.49 to -3.73; low-quality evidence). Pooling across all studies, regardless of the HRQoL measure used, shows there may be clinically important improvement with exercise (26 trials, 29 comparisons (3833 participants); standardised mean difference (SMD) -0.60, 95% CI -0.82 to -0.39; I² = 87%; Chi² = 215.03; low-quality evidence). ExCR effects appeared to be consistent across different models of ExCR delivery: centre vs. home-based, exercise dose, exercise only vs. comprehensive programmes, and aerobic training alone vs aerobic plus resistance programmes.\nAuthors' conclusions\nThis updated Cochrane Review provides additional randomised evidence (11 trials) to support the conclusions of the previous version (2014) of this Cochrane Review. Compared to no exercise control, CR appears to have no impact on mortality in the short term (< 12 months' follow-up). Low- to moderate-quality evidence shows that CR probably reduces the risk of all-cause hospital admissions and may reduce HF-specific hospital admissions in the short term (up to 12 months). CR may confer a clinically important improvement in health-related quality of life, although we remain uncertain about this because the evidence is of low quality. Future ExCR trials need to continue to consider the recruitment of traditionally less represented HF patient groups including older, female, and HFpEF patients, and alternative CR delivery settings including home- and using technology-based programmes.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 44 studies that included 5783 people with HF, mainly HFrEF. The findings of this update are broadly consistent with those of the previous (2014) version of this Cochrane Review. They show important benefits of exercise-based rehabilitation that include a probable reduction in the risk of overall hospital admissions in the short term, as well as the potential for reduction in heart failure admissions. The effect of exercise-based rehabilitation on health-related quality of life is uncertain due to very low-quality evidence. Exercise-based rehabilitation may make little or no difference in all-cause mortality in trials with follow-up less than 12 months. Further evidence is needed to better show the effects of exercise rehabilitation among people with HFpEF and the impact of alternative models of delivery, such as home-based programmes."
}
] |
query-laysum | 1021 | [
{
"role": "user",
"content": "Abstract: Background\nChronic inflammatory demyelinating polyradiculoneuropathy (CIDP) is an uncommon progressive or relapsing paralysing disease caused by inflammation of the peripheral nerves. If the hypothesis that it is due to autoimmunity is correct, removal of autoantibodies in the blood by plasma exchange should be beneficial.\nObjectives\nTo assess the effects of plasma exchange for treating CIDP.\nSearch methods\nOn 30 June 2015, we searched the Cochrane Neuromuscular Disease Group Specialized Register, the Cochrane Central Register for Controlled Trials (CENTRAL), MEDLINE, EMBASE, CINAHL Plus, and LILACS. We also scrutinised the bibliographies of the trials, contacted the trial authors and other disease experts, and searched trials registries for ongoing studies.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs in participants of any age comparing plasma exchange with sham treatment or no treatment.\nData collection and analysis\nTwo review authors independently selected the trials, extracted the data, and assessed risk of bias. Where possible the review authors combined data according to the methods of the Cochrane Neuromuscular Disease Review Group.\nMain results\nPrimary outcome measure: one cross-over trial including 18 participants showed after four weeks, 2 (95% confidence interval (CI) 0.9 to 3.1) points more improvement on an 11-point disability scale with plasma exchange (10 exchanges over four weeks) than with sham exchange. Rapid deterioration after plasma exchange occurred in eight of 12 who had improved.\nSecondary outcome measures: when we combined the results of this cross-over trial and a trial with 29 participants treated in a parallel-group design, there were 31 points (95% CI 18 to 45) more improvement on an impairment scale (maximum score 280) after plasma exchange (six exchanges over three weeks) than after sham exchange. There were significant improvements in both trials in an electrophysiological measure, the proximally evoked compound muscle action potential, after three or four weeks. Non-randomised evidence indicates that plasma exchange induces adverse events in 3% to 17% of procedures. These events are sometimes serious. Both trials had a low risk of bias. A trial that showed no significant difference in the benefit between plasma exchange and intravenous immunoglobulin is included in the Cochrane review of intravenous immunoglobulin for this condition.\nAuthors' conclusions\nModerate- to high-quality evidence from two small trials shows that plasma exchange provides significant short-term improvement in disability, clinical impairment, and motor nerve conduction velocity in CIDP but rapid deterioration may occur afterwards. Adverse events related to difficulty with venous access, use of citrate, and haemodynamic changes are not uncommon. We need more research to identify agents that will prolong the beneficial action of plasma exchange.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "What are the benefits and harms of plasma exchange for chronic inflammatory demyelinating polyradiculoneuropathy (CIDP)?"
}
] |
query-laysum | 1022 | [
{
"role": "user",
"content": "Abstract: Background\nCrohn's disease (CD) is a chronic, relapsing and remitting disease of the gastrointestinal tract that can cause significant morbidity and disability. Current treatment guidelines recommend early intervention with immunosuppressant or biological therapy in high-risk patients with a severe disease phenotype at presentation. The feasibility of therapeutic de-escalation once remission is achieved is a commonly encountered question in clinical practice, driven by patient and clinician concerns regarding safety, adverse events, cost and national regulations. Withdrawal of immunosuppressant and biologic drugs in patients with quiescent CD may limit adverse events and reduce healthcare costs. Alternatively, stopping these drug therapies may result in negative outcomes such as disease relapse, drug desensitization, bowel damage and need for surgery.\nObjectives\nTo assess the feasibility and safety of discontinuing immunosuppressant or biologic drugs, administered alone or in combination, in patients with quiescent CD.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and the Cochrane IBD Group Specialized Register from inception to 19 December 2017. We also searched the reference lists of potentially relevant manuscripts and conference proceedings to identify additional studies.\nSelection criteria\nRandomized controlled trials (RCTs) and prospective cohort studies that followed patients for a minimum duration of six months after drug discontinuation were considered for inclusion. The patient population of interest was adults (> 18 years) with CD (as defined by conventional clinical, endoscopic or histologic criteria) who had achieved remission while receiving immunosuppressant or biologic drugs administered alone or in combination. Patients then discontinued the drug regimen following a period of maintenance therapy of at least six months. The comparison was usual care (i.e. continuation of the drug regimen).\nData collection and analysis\nThe primary outcome measure was the proportion of patients who relapsed following discontinuation of immunosuppressant or biologic drugs, administered alone or in combination. Secondary outcomes included: the proportion of patients who responded to the reintroduction of immunosuppressant or biologic drugs, given as monotherapy or combination therapy; the proportion of patients who required surgery following relapse; the proportion of patients who required hospitalization for CD following relapse; the proportion of patients who developed new CD-related complications (e.g. fistula, abscesses, strictures) following relapse; the proportion of patients with elevated biomarkers of inflammation (CRP, fecal calprotectin) in those who stop and those who continue therapy; the proportion of patients with anti-drug antibodies and low serum trough drug levels; time to relapse; and the proportion of patients with adverse events, serious adverse events and withdrawal due to adverse events. For dichotomous outcomes, we calculated the risk ratio (RR) and 95% confidence interval (95% CI). Data were analyzed on an intention-to-treat basis where patients with missing outcome data were assumed to have relapsed. The overall quality of the evidence supporting the primary and secondary outcomes was assessed using the GRADE criteria.\nMain results\nA total of six RCTs (326 patients) evaluating therapeutic discontinuation in patients with quiescent CD were eligible for inclusion. 
In four RCTs azathioprine monotherapy was discontinued, and in two RCTs azathioprine was discontinued from a combination therapy regimen consisting of azathioprine with infliximab. No studies of biologic monotherapy withdrawal were eligible for inclusion. The majority of studies received unclear or low risk of bias ratings, with the exception of three open-label RCTs, which were rated as high risk of bias for blinding. Four RCTs (215 participants) compared discontinuation to continuation of azathioprine monotherapy, while two studies (125 participants) compared discontinuation of azathioprine from a combination regimen to continuation of combination therapy. Continuation of azathioprine monotherapy was shown to be superior to withdrawal for risk of clinical relapse. Thirty-two per cent (36/111) of azathioprine withdrawal participants relapsed compared to 14% (14/104) of participants who continued with azathioprine therapy (RR 0.42, 95% CI 0.24 to 0.72, GRADE low quality evidence). However, it is uncertain if there are any between-group differences in new CD-related complications (RR 0.34, 95% CI 0.06 to 2.08, GRADE low quality evidence), adverse events (RR 0.88, 95% CI 0.67 to 1.17, GRADE low quality evidence), serious adverse events (RR 3.29, 95% CI 0.35 to 30.80, GRADE low quality evidence) or withdrawal due to adverse events (RR 2.59, 95% CI 0.35 to 19.04, GRADE low quality evidence). Common adverse events included infections, mild leukopenia, abdominal symptoms, arthralgias, headache and elevated liver enzymes. No differences in clinical relapse were observed between azathioprine withdrawal from combination therapy and continuation of combination therapy. Among patients who continued combination therapy with azathioprine and infliximab, 48% (27/56) had a clinical relapse compared to 49% (27/55) of patients who discontinued azathioprine but remained on infliximab (RR 1.02, 95% CI 0.68 to 1.52, P = 0.32; GRADE low quality evidence). The effects on adverse events (RR 1.11, 95% CI 0.44 to 2.81, GRADE low quality of evidence) or serious adverse events are uncertain (RR 1.00, 95% CI 0.21 to 4.66; GRADE very low quality of evidence). Common adverse events in the combination therapy studies included infections, liver test elevations, arthralgias and infusion reactions.\nAuthors' conclusions\nThe effects of withdrawal of immunosuppressant therapy in people with quiescent Crohn's disease are uncertain. Low quality evidence suggests that continuing azathioprine monotherapy may be superior to withdrawal for avoiding clinical relapse, while very low quality evidence suggests that there may be no difference in clinical relapse rates between discontinuing azathioprine from a combination therapy regimen, compared to continuing combination therapy. It is unclear whether withdrawal of azathioprine, initially administered alone or in combination, impacts on the development of CD-related complications, adverse events, serious adverse events or withdrawal due to adverse events. Further high-quality research is needed in this area, particularly double-blind RCTs in which biologic therapy or an immunosuppressant other than azathioprine is withdrawn.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The effects of withdrawal of immunosuppressant therapy in people with Crohn's disease in remission are uncertain. Low quality evidence suggests that continuing azathioprine monotherapy may be superior to withdrawal of azathioprine for avoiding clinical relapse in people with Crohn's disease in remission. Low quality evidence suggests that stopping the immunosuppressive after combination therapy does not seem to impact on the risk of relapsing. It is unclear whether the withdrawal of azathioprine, initially administered alone or in combination, impacts on the development of Crohn's disease-related complications, side effects, serious side effects, or withdrawal from the studies due to side effects. Additional research is needed in this area to better inform clinical practice, particularly high-quality randomized controlled trials examining outcomes when biologic therapy is withdrawn."
}
] |
query-laysum | 1023 | [
{
"role": "user",
"content": "Abstract: Background\nBreathlessness is a cardinal symptom of chronic obstructive pulmonary disease (COPD). Long-term oxygen therapy (LTOT) is given to improve survival time in people with COPD and severe chronic hypoxaemia at rest. The efficacy of oxygen therapy for breathlessness and health-related quality of life (HRQOL) in people with COPD and mild or no hypoxaemia who do not meet the criteria for LTOT has not been established.\nObjectives\nTo determine the efficacy of oxygen versus air in mildly hypoxaemic or non-hypoxaemic patients with COPD in terms of (1) breathlessness; (2) HRQOL; (3) patient preference whether to continue therapy; and (4) oxygen-related adverse events.\nSearch methods\nWe searched the Cochrane Airways Group Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and Embase, to 12 July 2016, for randomised controlled trials (RCTs). We handsearched the reference lists of included articles.\nSelection criteria\nWe included RCTs of the effects of non-invasive oxygen versus air on breathlessness, HRQOL or patient preference to continue therapy among people with COPD and mild or no hypoxaemia (partial pressure of oxygen (PaO2) > 7.3 kPa) who were not already receiving LTOT. Two review authors independently assessed articles for inclusion in the review.\nData collection and analysis\nTwo review authors independently collected and analysed data. We assessed risk of bias by using the Cochrane 'Risk of bias tool'. We pooled effects recorded on different scales as standardised mean differences (SMDs) with 95% confidence intervals (CIs) using random-effects models. Lower SMDs indicated decreased breathlessness and reduced HRQOL. We performed subanalyses and sensitivity analyses and assessed the quality of evidence according to the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach.\nMain results\nCompared with the previous review, which was published in 2011, we included 14 additional studies (493 participants), excluded one study and included data for meta-analysis of HRQOL. In total, we included in this review 44 studies including 1195 participants, and we included 33 of these (901 participants)in the meta-analysis.\nWe found that breathlessness during exercise or daily activities was reduced by oxygen compared with air (32 studies; 865 participants; SMD -0.31, 95% CI -0.43 to -0.20; I2 = 29%; low-quality evidence). This translates to a decrease in breathlessness of about 0.7 points on a 0 to 10 numerical rating scale. In contrast, we found no effect of short-burst oxygen given before exercise (four studies; 90 participants; SMD -0.03, 95% CI -0.28 to 0.22; I2 = 0%; low-quality evidence). Oxygen reduced breathlessness measured during exercise tests (30 studies; 591 participants; SMD -0.34, 95% CI -0.46 to -0.22; I2 = 29%; moderate-quality evidence), whereas evidence of an effect on breathlessness measured in daily life was limited (two studies; 274 participants; SMD -0.13, 95% CI, -0.37 to 0.11; I2 = 0%; low-quality evidence).\nOxygen did not clearly affect HRQOL (five studies; 267 participants; SMD 0.12, 95% CI -0.04 to 0.28; I2 = 0%; low-quality evidence). Patient preference and adverse events could not be analysed owing to insufficient data.\nAuthors' conclusions\nWe are moderately confident that oxygen can relieve breathlessness when given during exercise to mildly hypoxaemic and non-hypoxaemic people with chronic obstructive pulmonary disease who would not otherwise qualify for home oxygen therapy. 
Most evidence pertains to acute effects during exercise tests, and no evidence indicates that oxygen decreases breathlessness in the daily life setting. Findings show that oxygen does not affect health-related quality of life.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence regarding effects of oxygen compared with air on breathlessness in people with chronic obstructive pulmonary disease (COPD) with only mildly or moderately decreased blood oxygen levels."
}
] |
query-laysum | 1024 | [
{
"role": "user",
"content": "Abstract: Background\nTophi develop in untreated or uncontrolled gout. This is an update of a Cochrane Review first published in 2014.\nObjectives\nTo assess the benefits and harms of non-surgical and surgical treatments for the management of tophi in gout.\nSearch methods\nWe updated the search of Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and Embase databases to 28 August 2020.\nSelection criteria\nWe included all published randomised controlled trials (RCTs) or controlled clinical trials examining interventions for tophi in gout in adults.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included one trial in our original review. We added four more trials (1796 participants) in this update. One had three arms; pegloticase infusion every two weeks (biweekly), monthly pegloticase infusion (pegloticase infusion alternating with placebo infusion every two weeks) and placebo. Two studies looked at lesinurad 200 mg or 400 mg in combination with allopurinol. One trial studied lesinurad 200 mg or 400 mg in combination with febuxostat. One trial compared febuxostat 80 mg and 120 mg to allopurinol.\nTwo trials were at unclear risk of performance and detection bias due to lack of information on blinding of participants and personnel. All other trials were at low risk of bias.\nModerate-certainty evidence (downgraded for imprecision; one study; 79 participants) showed that biweekly pegloticase resolved tophi in 21/52 participants compared with 2/27 on placebo (risk ratio (RR) 5.45, 95% confidence interval (CI) 1.38 to 21.54; number needed to treat for a benefit (NNTB) 3, 95% CI 2 to 6). Similar proportions of participants receiving biweekly pegloticase (80/85) had an adverse event compared to placebo (41/43) (RR 0.99, 95% CI 0.91 to 1.07). However, more participants on biweekly pegloticase (15/85) withdrew due to an adverse event compared to placebo (1/43) (RR 7.59, 95% CI 1.04 to 55.55; number needed to treat for a harm (NNTH) 7, 95% CI 4 to 16).\nMore participants on monthly pegloticase (11/52) showed complete resolution of tophi compared with placebo (2/27) (RR 2.86, 95% CI 0.68 to 11.97; NNTB 8, 95% CI 4 to 91). Similar numbers of participants on monthly pegloticase (84/84) had an adverse event compared to placebo (41/43) (RR 1.05, 95% CI 0.98 to 1.14). More participants on monthly pegloticase (16/84) withdrew due to adverse events compared to placebo (1/43) (RR 8.19, 95% CI 1.12 to 59.71; NNTH 6, 95% CI 4 to 14). Infusion reaction was the most common reason for withdrawal.\nModerate-certainty evidence (2 studies; 103 participants; downgraded for imprecision) showed no clinically significant difference for complete resolution of target tophus in the lesinurad 200 mg plus allopurinol arm (11/53) compared to the placebo plus allopurinol arm (16/50) (RR 0.40, 95% CI 0.04 to 4.57), or in the lesinurad 400 mg plus allopurinol arm (12/48) compared to the placebo plus allopurinol arm (16/50) (RR 0.79, 95% CI 0.42 to 1.49).\nAn extension study examined lesinurad 200 mg or 400 mg in combination with febuxostat, or placebo (low-certainty evidence, downgraded for indirectness and imprecision). Participants on lesinurad in the original study continued (CONT) on the same dose. Lesinurad 400 mg plus febuxostat may be beneficial for tophi resolution; 43/65 in the lesinurad 400 mg CONT arm compared to 38/64 in the lesinurad 200 mg CONT arm had tophi resolution (RR 1.11, 95% CI 0.85 to 1.46). 
Lesinurad 400 mg plus febuxostat may result in no difference in adverse events; 57/65 in the lesinurad 400 mg CONT arm had an adverse event compared to 50/64 in lesinurad 200 mg CONT arm (RR 1.12, 95% CI 0.96 to 1.32). Lesinurad 400 mg plus febuxostat may result in no difference in withdrawals due to adverse events; 10/65 participants in the lesinurad 400 mg CONT arm withdrew due to an adverse event compared to 10/64 participants in the lesinurad 200 mg CONT arm (RR 0.98, 95% CI 0.44 to 2.20). Lesinurad 400 mg plus febuxostat may result in no difference in mean serum uric acid (sUA), which was 3 mg/dl in the lesinurad 400 mg CONT group compared to 3.9 mg/dl in the lesinurad 200 mg CONT group (mean difference -0.90, 95% CI -1.51 to -0.29).\nParticipants who were not on lesinurad in the original study were randomised (CROSS) to lesinurad 200 mg or 400 mg, both in combination with febuxostat. Low-certainty evidence downgraded for indirectness and imprecision showed that lesinurad 400 mg (CROSS) may result in tophi resolution (17/34) compared to lesinurad 200 mg (CROSS) (14/33) (RR 1.18, 95% CI 0.70 to 1.98). Lesinurad 400 mg in combination with febuxostat may result in no difference in adverse events (33/34 in the lesinurad 400 mg CROSS arm compared to 27/33 in the lesinurad 200 mg (CROSS); RR 1.19, 95% CI 1.00 to 1.41). Lesinurad 400 mg plus febuxostat may result in no difference in withdrawals due to adverse events, 5/34 in the lesinurad 400 mg CROSS arm withdrew compared to 2/33 in the lesinurad 200 mg CROSS arm (RR 2.43, 95% CI 0.51 to 11.64). Lesinurad 400 mg plus febuxostat results in no difference in sUA (4.2 mg/dl in lesinurad 400 mg CROSS) compared to lesinurad 200 mg (3.8 mg/dl in lesinurad 200 mg CROSS), mean difference 0.40 mg/dl, 95% CI -0.75 to 1.55.\nAuthors' conclusions\nModerate-certainty evidence showed that pegloticase is probably beneficial for resolution of tophi in gout. Although there was little difference in adverse events when compared to placebo, participants on pegloticase had more withdrawals due to adverse events. Lesinurad 400 mg plus febuxostat may be beneficial for tophi resolution compared with lesinurad 200 mg plus febuxostat; there was no difference in adverse events between these groups. We were unable to determine whether lesinurad plus febuxostat is more effective than placebo. Lesinurad (400 mg or 200 mg) plus allopurinol is probably not beneficial for tophi resolution, and there was no difference in adverse events between these groups. RCTs on interventions for managing tophi in gout are needed, and the lack of trial data is surprising given that allopurinol is a well-established treatment for gout.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
] |
query-laysum | 1025 | [
{
"role": "user",
"content": "Abstract: Background\nChronic obstructive pulmonary disease (COPD) is a common, preventable, and treatable respiratory disease. COPD exacerbations are associated with worse quality of life, increased hospitalisations, and increased mortality. Currently available pharmacological interventions have variable impact on exacerbation frequency. The anti-inflammatory effects of statins may lead to decreased pulmonary and systemic inflammation, resulting in fewer exacerbations of COPD. Several observational studies have shown potential benefits of statins for patients with COPD.\nObjectives\nThis review aims to evaluate available evidence on benefits and harms associated with statin therapy compared with placebo as adjunct therapy for patients with COPD. Primary objectives include the following.\n• To determine whether statins reduce mortality rates in COPD.\n• To determine whether statins reduce exacerbation frequency, improve quality of life, or improve lung function in COPD.\n• To determine whether statins are associated with adverse effects.\nSearch methods\nWe identified trials from the Cochrane Airways Trials Register, which contains studies identified through multiple electronic searches and handsearches of other sources. We also searched trial registries and reference lists of primary studies. We conducted the most recent search on 20 May 2019.\nSelection criteria\nParallel, randomised controlled trials recruiting adults with COPD.\nData collection and analysis\nWe used standard methods as expected by Cochrane. Prespecified primary outcomes were number of exacerbations, all-cause mortality, and COPD-specific mortality.\nMain results\nEight studies including 1323 participants with COPD were included in the review. Participants had a mean age of 61.4 to 72 years, and most were male (median 73.4%). Mean baseline forced expiratory volume in one second (FEV₁) ranged from 41% to 90% predicted. All studies compared moderate- or high-intensity statin therapy versus placebo. The duration of treatment ranged from 12 weeks to 36 months.\nWe found no statistically significant difference between statins and placebo in our primary outcome of number of exacerbations per person-year (mean difference (MD) -0.03, 95% confidence interval (CI) -0.25 to 0.19, 1 trial, 877 participants), including number of exacerbations requiring hospitalisation per person-year (MD 0.00, 95% CI -0.10 to 0.10, 1 trial, 877 exacerbations). This evidence was of moderate quality after downgrading for unclear risk of bias. Our primary outcomes of all-cause mortality (odds ratio (OR) 1.03, 95% CI 0.61 to 1.74, 2 trials, 952 participants) and COPD-specific mortality (OR 1.25, 95% CI 0.38 to 4.13, 1 trial, 877 participants) showed no significant difference between statins and placebo, with wide confidence intervals suggesting uncertainty about the precision of the results. This evidence was of low quality after downgrading for unclear risk of bias and imprecision.\nResults of the secondary outcomes analysis showed no clear differences between statins and placebo for FEV₁ (% predicted) (MD 1.18, 95% CI -2.6 to 4.97, 6 trials, 325 participants) but did show a statistically significant improvement in FEV₁/forced vital capacity (FVC) (MD 2.66, 95% CI 0.12 to 5.2; P = 0.04; 6 trials, 325 participants). A sensitivity analysis excluding two trials at high risk of bias showed no statistically significant difference in FEV₁/FVC (MD 2.05, 95% CI -0.87 to -4.97; P = 0.17; 4 trials, 255 participants). 
We also found no significant differences between the two groups in functional capacity measured by six-minute walk distance in metres (MD 1.79, 95% CI -52.51 to 56.09, 3 trials, 71 participants), with wide confidence intervals suggesting uncertainty about the precision of the results. Results show no clear difference in quality of life, which was reported in three trials, and a slight reduction in C-reactive protein (CRP) in the intervention group, which was statistically significant (MD -1.03, 95% CI -1.95 to -0.11; I² = 0%, P = 0.03; 3 trials, 142 participants). We noted a significant reduction in interleukin (IL)-6 in the intervention group (MD -2.11, 95% CI -2.65 to -1.56; I² = 0%, P ≤ 0.00001; 2 trials, 125 participants). All trials mentioned adverse events and indicated that statins were generally well tolerated. One study reported adverse events in detail and indicated that rates of all non-fatal adverse events (the number of serious adverse events per person-year) were similar in both groups (0.63 ± 1.56 events (intervention group) and 0.62 ± 1.48 events (control group); P > 0.20) for all comparisons, except for non-fatal serious adverse events involving the gastrointestinal tract, which were more frequent in the intervention group (in 30 patients (0.05 events per person-year) vs 17 patients (0.02 events per person-year); P = 0.02). Another trial lists the total numbers and percentages of adverse events in the intervention group (12 (26%)) and in the control group (21 (43%)) and of serious adverse events in the intervention group (4 (9%)) and in the control group (3 (6%)).The other trials stated that researchers found no significant adverse effects of statins but did not report adverse events in detail.\nAuthors' conclusions\nA small number of trials providing low- or moderate-quality evidence were suitable for inclusion in this review. They showed that use of statins resulted in a reduction in CRP and IL-6, but that this did not translate into clear clinical benefit for people with COPD. Further randomised controlled trials are needed to explore this topic.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included eight studies involving 1323 participants comparing the benefits and harms of statins and placebo (an identical looking treatment with no therapeutic benefit) in people with COPD."
}
] |
query-laysum | 1026 | [
{
"role": "user",
"content": "Abstract: Background\nThe standard method of diagnosing HIV in infants and children less than 18 months is with a nucleic acid amplification test reverse transcriptase polymerase chain reaction test (NAT RT-PCR) detecting viral ribonucleic acid (RNA). Laboratory testing using the RT-PCR platform for HIV infection is limited by poor access, logistical support, and delays in relaying test results and initiating therapy in low-resource settings. The use of rapid diagnostic tests at or near the point-of-care (POC) can increase access to early diagnosis of HIV infection in infants and children less than 18 months of age and timely initiation of antiretroviral therapy (ART).\nObjectives\nTo summarize the diagnostic accuracy of point-of-care nucleic acid-based testing (POC NAT) to detect HIV-1/HIV-2 infection in infants and children aged 18 months or less exposed to HIV infection.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (until 2 February 2021), MEDLINE and Embase (until 1 February 2021), and LILACS and Web of Science (until 2 February 2021) with no language or publication status restriction. We also searched conference websites and clinical trial registries, tracked reference lists of included studies and relevant systematic reviews, and consulted experts for potentially eligible studies.\nSelection criteria\nWe defined POC tests as rapid diagnostic tests conducted at or near the patient site. We included any primary study that compared the results of a POC NAT to a reference standard of laboratory NAT RT-PCR or total nucleic acid testing to detect the presence or absence of HIV infection denoted by HIV viral nucleic acids in infants and children aged 18 months or less who were exposed to HIV-1/HIV-2 infection. We included cross-sectional, prospective, and retrospective study designs and those that provided sufficient data to create the 2 × 2 table to calculate sensitivity and specificity. We excluded diagnostic case control studies with healthy controls.\nData collection and analysis\nWe extracted information on study characteristics using a pretested standardized data extraction form. We used the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) tool to assess the risk of bias and applicability concerns of the included studies. Two review authors independently selected and assessed the included studies, resolving any disagreements by consensus. The unit of analysis was the participant. We first conducted preliminary exploratory analyses by plotting estimates of sensitivity and specificity from each study on forest plots and in receiver operating characteristic (ROC) space. For the overall meta-analyses, we pooled estimates of sensitivity and specificity using the bivariate meta-analysis model at a common threshold (presence or absence of infection).\nMain results\nWe identified a total of 12 studies (15 evaluations, 15,120 participants). All studies were conducted in sub-Saharan Africa. The ages of included infants and children in the evaluations were as follows: at birth (n = 6), ≤ 12 months (n = 3), ≤ 18 months (n = 5), and ≤ 24 months (n = 1). 
Ten evaluations were field evaluations of the POC NAT test at the point of care, and five were laboratory evaluations of the POC NAT tests. The POC NAT tests evaluated included Alere q HIV-1/2 Detect qualitative test (recently renamed m-PIMA q HIV-1/2 Detect qualitative test) (n = 6), Xpert HIV-1 qualitative test (n = 6), and SAMBA HIV-1 qualitative test (n = 3).\nPOC NAT pooled sensitivity and specificity (95% confidence interval (CI)) against laboratory reference standard tests were 98.6% (96.1 to 99.5) (15 evaluations, 1728 participants) and 99.9% (99.7 to 99.9) (15 evaluations, 13,392 participants) in infants and children ≤ 18 months.\nRisk of bias in the included studies was mostly low or unclear due to poor reporting. Five evaluations had some concerns for applicability for the index test, as they were POC tests evaluated in a laboratory setting, but there was no difference detected between settings in sensitivity (−1.3% (95% CI −4.1 to 1.5)); and specificity results were similar.\nAuthors' conclusions\nFor the diagnosis of HIV-1/HIV-2 infection, we found the sensitivity and specificity of POC NAT tests to be high in infants and children aged 18 months or less who were exposed to HIV infection.\n\nGiven the provided abstract, please respond to the following query: \"To whom do the results of this review apply?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Infants and children aged 18 months or less who were exposed to HIV infection."
}
] |
query-laysum | 1027 | [
{
"role": "user",
"content": "Abstract: Background\nChronic hepatitis B is associated with high morbidity and mortality. Chronic hepatitis B requires long-term management aiming at reduction of the risks of hepatocellular inflammatory necrosis, liver fibrosis, decompensated liver cirrhosis, liver failure, and liver cancer, and improving health-related quality of life. The Chinese herbal medicine formula Xiao Chai Hu Tang has been used to decrease discomfort and replication of the virus in people with chronic hepatitis B. However, the benefits and harms of Xiao Chai Hu Tang formula have never been established with rigorous review methodology.\nObjectives\nTo assess the benefits and harms of Xiao Chai Hu Tang formula versus placebo or no intervention in people with chronic hepatitis B.\nSearch methods\nWe searched The Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, MEDLINE Ovid, Embase Ovid, and seven other databases to 1 March 2019. We also searched the World Health Organization International Clinical Trials Registry Platform (www.who.int/ictrp), ClinicalTrials.gov (www.clinicaltrials.gov/), and the Chinese Clinical Trial Registry for ongoing or unpublished trials to 1 March 2019.\nSelection criteria\nWe included randomised clinical trials, irrespective of publication status, language, and blinding, comparing Xiao Chai Hu Tang formula versus no intervention or placebo in people with chronic hepatitis B. We included participants of any sex and age, diagnosed with chronic hepatitis B according to guidelines or as defined by the trialists. We allowed co-interventions when the co-interventions were administered equally to all the intervention groups.\nData collection and analysis\nReview authors independently retrieved data from reports and after correspondence with investigators. Our primary outcomes were all-cause mortality, serious adverse events, and health-related quality of life. Our secondary outcomes were hepatitis B-related mortality, hepatitis B-related morbidity, and adverse events considered 'not to be serious'. We presented the meta-analysed results as risk ratios (RR) with 95% confidence intervals (CI). We assessed the risks of bias using risk of bias domains with predefined definitions. We used GRADE methodology to evaluate our certainty in the evidence.\nMain results\nWe included 10 randomised clinical trials with 934 participants, but only five trials with 490 participants provided data for analysis. All the trials compared Xiao Chai Hu Tang formula with no intervention. All trials appeared to have been conducted and published only in China. The included trials assessed heterogeneous forms of Xiao Chai Hu Tang formula, administered for three to eight months. One trial included participants with hepatitis B and comorbid tuberculosis, and one trial included participants with hepatitis B and liver cirrhosis. The remaining trials included participants with hepatitis B only. All the trials were at high risk of bias, and the certainty of evidence for all outcomes that provided data for analyses was very low. We downgraded the evidence by one or two levels because of outcome risk of bias, inconsistency or heterogeneity of results (opposite direction of effect), indirectness of evidence (use of surrogate outcomes instead of clinically relevant outcomes), imprecision of results (the CIs were wide), and publication bias (small sample size of the trials). 
Additionally, 47 trials lacked the necessary methodological information needed to ensure the inclusion of these trials in our review.\nNone of the included trials aimed to assess clinically relevant outcomes such as all-cause mortality, serious adverse events, health-related quality of life, hepatitis B-related mortality, or hepatitis B-related morbidity. The effects of Xiao Chai Hu Tang formula on the proportion of participants with adverse events considered 'not to be serious' is uncertain (RR 0.43, 95% CI 0.02 to 11.98; I2 = 69%; very low-certainty evidence). Only three trials with 222 participants reported the proportion of people with detectable hepatitis B virus DNA (HBV-DNA), but the evidence that Xiao Chai Hu Tang formula reduces the presence of HBV-DNA in the blood (a surrogate outcome) is uncertain (RR 0.62, 95% CI 0.45 to 0.85; I2 = 0%; very low-certainty evidence). Only two trials with 160 participants reported the proportion of people with detectable hepatitis B virus e-antigen (HBeAg; a surrogate outcome) (RR 0.72, 95% CI 0.50 to 1.02; I2 = 38%; very low-certainty evidence) and the evidence is uncertain. The evidence is also uncertain for separately reported adverse events considered 'not to be serious'.\nFunding: two of the 10 included trials received academic funding from government or hospital. None of the remaining eight trials reported information on funding.\nAuthors' conclusions\nThe clinical effects of Xiao Chai Hu Tang formula for chronic hepatitis B remain unclear. The included trials were small and of low methodological quality. Despite the wide use of Xiao Chai Hu Tang formula, we lack data on all-cause mortality, serious adverse events, health-related quality of life, hepatitis B-related mortality, and hepatitis B-related morbidity. The evidence in this systematic review comes from data obtained from a maximum three trials. We graded the certainty of evidence as very low for adverse events considered not to be serious and the surrogate outcomes HBeAg and HBV-DNA. We found a large number of trials which lacked clear description of their design and conduct, and hence, these trials are not included in the present review. As all identified trials were conducted in China, there might be a concern about the applicability of this review outside China. Large-sized, high-quality randomised sham-controlled trials with homogeneous groups of participants and transparent funding are lacking.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To assess the benefits and harms of Xiao Chai Hu Tang formula versus placebo or no intervention in people with chronic hepatitis B virus infection."
}
] |
query-laysum | 1028 | [
{
"role": "user",
"content": "Abstract: Background\nChronic spontaneous urticaria (CSU) is characterised by the development of crops of red, itchy, raised weals or hives with no identifiable external cause.\nObjectives\nTo assess the effects of H1-antihistamines for CSU.\nSearch methods\nWe searched the following databases up to June 2014: Cochrane Skin Group Specialised Register, CENTRAL (2014, Issue 5), MEDLINE (from 1946), EMBASE (from 1974) and PsycINFO (from 1806). We searched five trials registers and checked articles for references to relevant randomised controlled trials.\nSelection criteria\nWe included randomised controlled trials of H1-antihistamines for CSU. Interventions included single therapy or a combination of H1-antihistamines compared with no treatment (placebo) or another active pharmacological compound at any dose.\nData collection and analysis\nWe used standard methodological procedures as expected by The Cochrane Collaboration.\nOur primary outcome measures were proportion of participants with complete suppression of urticaria: 'good or excellent' response, 50% or greater improvement in quality of life measures, and adverse events. We present risk ratios (RR) with 95% confidence intervals (CIs).\nMain results\nWe identified 73 studies (9759 participants); 34 studies provided data for 23 comparisons. The duration of the intervention was up to two weeks (short-term) or longer than two weeks and up to three months (intermediate-term).\nCetirizine 10 mg once daily in the short term and in the intermediate term led to complete suppression of urticaria by more participants than was seen with placebo (RR 2.72, 95% CI 1.51 to 4.91). For this same outcome, comparison of desloratadine versus placebo in the intermediate term (5 mg) (RR 37.00, 95% CI 2.31 to 593.70) and in the short term (20 mg) (RR 15.97, 95% CI 1.04 to 245.04) favoured desloratadine, but no differences were seen between 5 mg and 10 mg for short-term treatment.\nLevocetirizine 20 mg per day (short-term) was more effective for complete suppression of urticaria compared with placebo (RR 20.87, 95% CI 1.37 to 317.60), and at 5 mg was effective in the intermediate term (RR 52.88, 95% CI 3.31 to 843.81) but not in the short term, nor was 10 mg effective in the short term.\nRupatadine at 10 mg and 20 mg in the intermediate term achieved a 'good or excellent response' compared with placebo (RR 1.35, 95% CI 1.03 to 1.77).\nLoratadine (10 mg) versus placebo (RR 1.86, 95% CI 0.91 to 3.79) and loratadine (10 mg) versus cetirizine (10 mg) (RR 1.05, 95% CI 0.76 to 1.43) over short-term and intermediate-term treatment showed no significant difference for 'good or excellent response' or for complete suppression of urticaria, respectively.\nLoratadine (10 mg) versus desloratadine (5 mg) (intermediate-term) showed no statistically significant difference for complete suppression of urticaria (RR 0.91, 95% CI 0.78 to 1.06) or for 'good or excellent response' (RR 1.04, 95% CI 0.64 to 1.71). 
For loratadine (10 mg) versus mizolastine (10 mg) (intermediate-term), no statistically significant difference was seen for complete suppression of urticaria (RR 0.86, 95% CI 0.64 to 1.16) or for 'good or excellent response' (RR 0.88, 95% CI 0.55 to 1.42).\nLoratadine (10 mg) versus emedastine (2 mg) (intermediate-term) showed no statistically significant difference for complete suppression (RR 1.04, 95% CI 0.78 to 1.39) or for 'good or excellent response' (RR 1.09, 95% CI 0.96 to 1.24); the quality of the evidence was moderate for this comparison.\nNo difference in short-term treatment was noted between loratadine (10 mg) and hydroxyzine (25 mg) in terms of complete suppression (RR 1.00, 95% CI 0.32 to 3.10).\nWhen desloratadine (5 to 20 mg) was compared with levocetirizine (5 to 20 mg), levocetirizine appeared to be the more effective (P value < 0.02).\nIn a comparison of fexofenadine versus cetirizine, more participants in the cetirizine group showed complete suppression of urticaria (P value < 0.001).\nAdverse events leading to withdrawals were not significantly different in the following comparisons: cetirizine versus placebo at 10 mg and 20 mg (RR 3.00, 95% CI 0.68 to 13.22); desloratadine 5 mg versus placebo (RR 1.46, 95% CI 0.42 to 5.10); loratadine 10 mg versus mizolastine 10 mg (RR 0.38, 95% CI 0.04 to 3.60); loratadine 10 mg versus emedastine 2 mg (RR 1.09, 95% CI 0.07 to 17.14); cetirizine 10 mg versus hydroxyzine 25 mg (RR 0.78, 95% CI 0.25 to 2.45); and hydroxyzine 25 mg versus placebo (RR 3.64, 95% CI 0.77 to 17.23), all intermediate term.\nNo difference was seen between loratadine 10 mg versus mizolastine 10 mg in the proportion of participants with at least 50% improvement in quality of life (RR 3.21, 95% CI 0.32 to 32.33).\nAuthors' conclusions\nAlthough the results of our review indicate that at standard doses of treatment, several antihistamines are effective when compared with placebo, all results were gathered from a few studies or, in some cases, from single-study estimates. The quality of the evidence was affected by the small number of studies in each comparison and the small sample size for many of the outcomes, prompting us to downgrade the quality of evidence for imprecision (unless stated for each comparison, the quality of the evidence was low).\nNo single H1-antihistamine stands out as most effective. Cetirizine at 10 mg once daily in the short term and in the intermediate term was found to be effective in completely suppressing urticaria. Evidence is limited for desloratadine given at 5 mg once daily in the intermediate term and at 20 mg in the short term. Levocetirizine at 5 mg in the intermediate but not short term was effective for complete suppression. Levocetirizine 20 mg was effective in the short term, but 10 mg was not. No difference in rates of withdrawal due to adverse events was noted between active and placebo groups. Evidence for improvement in quality of life was insufficient.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We investigated clinical trials in which one therapy was compared against another or against placebo (direct comparisons). We found that for general use, 10 mg once daily of cetirizine for short-term and intermediate-term duration was effective in completely suppressing urticaria, although not in all individuals. Some benefit may be associated with use of desloratadine at 5 mg for at least an intermediate term and at 20 mg in the short term. Levocetirizine at 5 mg was effective for complete suppression in the intermediate term but not in the short term. A higher dose of 20 mg was effective in the short term, but 10 mg was not.\nAdverse events, such as headache or dry mouth, are tolerable with most antihistamines. Evidence is less clear for improvement in quality of life (e.g. reduction in sleep disturbance from itching, less distress from the appearance of hives) as many studies did not address this.\nWe cannot say whether one antihistamine works better than all the rest, as we did not have head-to-head evidence for every possible treatment comparison."
}
] |
query-laysum | 1029 | [
{
"role": "user",
"content": "Abstract: Background\nHealthy sleep is an important component of childhood development. Changes in sleep architecture, including sleep stage composition, quantity, and quality from infancy to adolescence are a reflection of neurologic maturation. Hospital admission for acute illness introduces modifiable risk factors for sleep disruption that may negatively affect active brain development during a period of illness and recovery. Thus, it is important to examine non-pharmacologic interventions for sleep promotion in the pediatric inpatient setting.\nObjectives\nTo evaluate the effect of non-pharmacological sleep promotion interventions in hospitalized children and adolescents on sleep quality and sleep duration, child or parent satisfaction, cost-effectiveness, delirium incidence, length of mechanical ventilation, length of stay, and mortality.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, three other databases, and three trials registers to December 2021. We searched Google Scholar, and two websites, handsearched conference abstracts, and checked reference lists of included studies.\nSelection criteria\nRandomized controlled trials (RCTs) or quasi-RCTs, including cross-over trials, investigating the effects of any non-pharmacological sleep promotion intervention on the sleep quality or sleep duration (or both) of children aged 1 month to 18 years in the pediatric inpatient setting (intensive care unit [ICU] or general ward setting).\nData collection and analysis\nTwo review authors independently assessed trial eligibility, evaluated risk of bias, extracted and synthesized data, and used the GRADE approach to assess certainty of evidence. The primary outcomes were changes in both objective and subjective validated measures of sleep in children; secondary outcomes were child and parent satisfaction, cost-effectiveness ratios, delirium incidence or delirium-free days at time of hospital discharge, duration of mechanical ventilation, length of hospital stay, and mortality.\nMain results\nWe included 10 trials (528 participants; aged 3 to 22 years) in inpatient pediatric settings. Seven studies were conducted in the USA, two in Canada, and one in Brazil. Eight studies were funded by government, charity, or foundation grants. Two provided no information on funding.\nEight studies investigated behavioral interventions (massage, touch therapy, and bedtime stories); two investigated physical activity interventions. Duration and timing of interventions varied widely. All studies were at high risk of performance bias due to the nature of the intervention, as participants, parents, and staff could not be masked to group assignment.\nWe were unable to perform a quantitative synthesis due to substantial clinical heterogeneity.\nBehavioral interventions versus usual care\nFive studies (145 participants) provided low-certainty evidence of no clear difference between multicomponent relaxation interventions and usual care on objective sleep measures. Overall, evidence from single studies found no clear differences in daytime or nighttime sleep measures (33 participants); any sleep parameter (48 participants); or daytime or nighttime sleep or nighttime arousals (20 participants). One study (34 participants) reported no effect of massage on nighttime sleep, sleep efficiency (SE), wake after sleep onset (WASO), or total sleep time (TST) in adolescents with cancer. 
Evidence from a cross-over study in 10 children with burns suggested touch therapy may increase TST (391 minutes, interquartile range [IQR] 251 to 467 versus 331 minutes, IQR 268 to 373; P = 0.02); SE (76, IQR 53 to 90 versus 66, IQR 55 to 78; P = 0.04); and the number of rapid eye movement (REM) periods (4.5, IQR 2 to 5 versus 3.5, IQR 2 to 4; P = 0.03); but not WASO, sleep latency (SL), total duration of REM, or per cent of slow wave sleep.\nFour studies (232 participants) provided very low-certainty evidence on subjective measures of sleep. Evidence from single studies found that sleep efficiency may increase, and the percentage of nighttime wakefulness may decrease more over a five-day period following a massage than usual care (72 participants). One study (48 participants) reported an improvement in Children's Sleep Habits Questionnaire scores after discharge in children who received a multicomponent relaxation intervention compared to usual care. In another study, mean sleep duration per sleep episode was longer (23 minutes versus 15 minutes), and time to fall asleep was shorter (22 minutes versus 27 minutes) following a bedtime story versus no story (18 participants); and children listening to a parent-recorded story had longer SL than when a parent was present (mean 57.5 versus 43.5 minutes); both groups reported longer SL than groups who had a stranger-recorded story, and those who had no story and absent parents (94 participants; P < 0.001).\nIn one study (34 participants), 87% (13/15) of participants felt they slept better following massage, with most parents (92%; 11/12) reporting they wanted their child to receive a massage again. Another study (20 participants) reported that parents thought the music, touch, and reading components of the intervention were acceptable, feasible, and had positive effects on their children (very low-certainty evidence).\nPhysical activity interventions versus usual care\nOne study (29 participants) found that an enhanced physical activity intervention may result in little or no improvement in TST or SE compared to usual care (low-certainty evidence). Another study (139 participants), comparing play versus no play found inconsistent results on subjective measures of sleep across different ages (TST was 49% higher for the no play groups in 4- to 7-year olds, 10% higher in 7- to 11-year olds, and 22% higher in 11- to 14-year olds). This study also found inconsistent results between boys and girls (girls in the first two age groups in the play group slept more than the no play group).\nNo study evaluated child or parent satisfaction for behavioral interventions, or cost-effectiveness, delirium incidence or delirium-free days at hospital discharge, length of mechanical ventilation, length of hospital stay, or mortality for either behavioral or physical activity intervention.\nAuthors' conclusions\nThe included studies were heterogeneous, so we could not quantitatively synthesize the results. Our narrative summary found inconsistent, low to very low-certainty evidence. 
Therefore, we are unable to determine how non-pharmacologic sleep promotion interventions affect sleep quality or sleep duration compared with usual care or other interventions.\nThe evidence base should be strengthened through design and conduct of randomized trials, which use validated and highly reliable sleep assessment tools, including objective measures, such as polysomnography and actigraphy.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This evidence is current to December 2021."
}
] |
query-laysum | 1030 | [
{
"role": "user",
"content": "Abstract: Background\nSudden infant death syndrome (SIDS) has been most recently defined as the sudden unexpected death of an infant less than one year of age, with onset of the fatal episode apparently occurring during sleep, that remains unexplained after a thorough investigation, including the performance of a complete autopsy and a review of the circumstances of death and clinical history. Despite the success of several prevention campaigns, SIDS remains a leading cause of infant mortality. In 1994, a 'triple risk model' for SIDS was proposed that described SIDS as an event that results from the intersection of three factors: a vulnerable infant; a critical development period in homeostatic control (age related); and an exogenous stressor. The association between pacifier (dummy) use and reduced incidence of SIDS has been shown in epidemiological studies since the early 1990s. Pacifier use, given its low cost, might be a cost-effective intervention for SIDS prevention if it is confirmed effective in randomised controlled trials.\nObjectives\nTo determine whether the use of pacifiers during sleep versus no pacifier during sleep reduces the risk of SIDS.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL 2016, Issue 2), MEDLINE via PubMed, Embase, and CINAHL to 16 March 2016. We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nPublished and unpublished controlled trials using random and quasi-random allocations of infants born at term and at preterm (less than 37 weeks' gestation) or with low birth weight (< 2500 g). Infants must have been randomised by one month' postmenstrual age. We planned to include studies reported only by abstracts, and cluster and cross-over randomised trials.\nData collection and analysis\nTwo review authors independently reviewed studies from searches. We found no eligible studies.\nMain results\nWe identified no randomised controlled trials examining infant pacifiers for reduction in risk of SIDS.\nAuthors' conclusions\nWe found no randomised control trial evidence on which to support or refute the use of pacifiers for the prevention of SIDS.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Sudden unexpected death of an infant generally occurs during sleep from birth to one year of age but mainly occurs between one and four months of age. Despite the success of several prevention campaigns, sudden infant death syndrome remains a leading cause of infant mortality (death). A variety of factors have been identified as increasing the risk of sudden infant death syndrome such as male sex, preterm (before the due date) birth, overheating, exposure to cigarette smoke, and infants lying on their stomachs. Pacifier use has been proposed as a non-invasive intervention to reduce the risk of SIDS. This review was undertaken to examine whether infant pacifiers can reduce the risk of SIDS."
}
] |
query-laysum | 1031 | [
{
"role": "user",
"content": "Abstract: Background\nWorldwide, many countries have adopted colorectal cancer (CRC) screening programmes, often based on faecal occult blood tests (FOBTs). CRC screening aims to detect advanced neoplasia (AN), which is defined as CRC or advanced adenomas. FOBTs fall into two categories based on detection technique and the detected blood component: qualitative guaiac-based FOBTs (gFOBTs) and faecal immunochemical tests (FITs), which can be qualitative and quantitative. Screening with gFOBTs reduces CRC-related mortality.\nObjectives\nTo compare the diagnostic test accuracy of gFOBT and FIT screening for detecting advanced colorectal neoplasia in average-risk individuals.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, BIOSIS Citation Index, Science Citation Index Expanded, and Google Scholar. We searched the reference lists and PubMed-related articles of included studies to identify additional studies.\nSelection criteria\nWe included prospective and retrospective studies that provided the number of true positives, false positives, false negatives, and true negatives for gFOBTs, FITs, or both, with colonoscopy as reference standard. We excluded case-control studies. We included studies in which all participants underwent both index test and reference standard (\"reference standard: all\"), and studies in which only participants with a positive index test underwent the reference standard while participants with a negative test were followed for at least one year for development of interval carcinomas (\"reference standard: positive\"). The target population consisted of asymptomatic, average-risk individuals undergoing CRC screening. The target conditions were CRC and advanced neoplasia (advanced adenomas and CRC combined).\nData collection and analysis\nTwo review authors independently screened and selected studies for inclusion. In case of disagreement, a third review author made the final decision. We used the Rutter and Gatsonis hierarchical summary receiver operating characteristic model to explore differences between tests and identify potential sources of heterogeneity, and the bivariate hierarchical model to estimate sensitivity and specificity at common thresholds: 10 µg haemoglobin (Hb)/g faeces and 20 µg Hb/g faeces. We performed indirect comparisons of the accuracy of the two tests and direct comparisons when both index tests were evaluated in the same population.\nMain results\nWe ran the initial search on 25 June 2019, which yielded 63 studies for inclusion. We ran a top-up search on 14 September 2021, which yielded one potentially eligible study, currently awaiting classification.\nWe included a total of 33 \"reference standard: all\" published articles involving 104,640 participants. Six studies evaluated only gFOBTs, 23 studies evaluated only FITs, and four studies included both gFOBTs and FITs. The cut-off for positivity of FITs varied between 2.4 μg and 50 µg Hb/g faeces. For each Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 domain, we assessed risk of bias as high in less than 20% of studies. The summary curve showed that FITs had a higher discriminative ability than gFOBTs for AN (P < 0.001) and CRC (P = 0.004). For the detection of AN, the summary sensitivity of gFOBTs was 15% (95% confidence interval (CI) 12% to 20%), which was significantly lower than FITs at both 10 μg and 20 μg Hb/g cut-offs with summary sensitivities of 33% (95% CI 27% to 40%; P < 0.001) and 26% (95% CI 21% to 31%, P = 0.002), respectively. 
Results were simulated in a hypothetical cohort of 10,000 screening participants with 1% CRC prevalence and 10% AN prevalence. Out of 1000 participants with AN, gFOBTs missed 850, while FITs missed 670 (10 μg Hb/g cut-off) and 740 (20 μg Hb/g cut-off). No significant differences in summary specificity for AN detection were found between gFOBTs (94%; 95% CI 92% to 96%), and FITs at 10 μg Hb/g cut-off (93%; 95% CI 90% to 95%) and at 20 μg Hb/g cut-off (97%; 95% CI 95% to 98%). So, among 9000 participants without AN, 540 were offered (unnecessary) colonoscopy with gFOBTs compared to 630 (10 μg Hb/g) and 270 (20 μg Hb/g) with FITs. Similarly, for the detection of CRC, the summary sensitivity of gFOBTs, 39% (95% CI 25% to 55%), was significantly lower than FITs at 10 μg and 20 μg Hb/g cut-offs: 76% (95% CI 57% to 88%: P = 0.001) and 65% (95% CI 46% to 80%; P = 0.035), respectively. So, out of 100 participants with CRC, gFOBTs missed 61, and FITs missed 24 (10 μg Hb/g) and 35 (20 μg Hb/g). No significant differences in summary specificity for CRC were found between gFOBTs (94%; 95% CI 91% to 96%), and FITs at the 10 μg Hb/g cut-off (94%; 95% CI 87% to 97%) and 20 μg Hb/g cut-off (96%; 95% CI 91% to 98%). So, out of 9900 participants without CRC, 594 were offered (unnecessary) colonoscopy with gFOBTs versus 594 (10 μg Hb/g) and 396 (20 μg Hb/g) with FITs.\nIn five studies that compared FITs and gFOBTs in the same population, FITs showed a higher discriminative ability for AN than gFOBTs (P = 0.003).\nWe included a total of 30 \"reference standard: positive\" studies involving 3,664,934 participants. Of these, eight were gFOBT-only studies, 18 were FIT-only studies, and four studies combined both gFOBTs and FITs. The cut-off for positivity of FITs varied between 5 µg to 250 µg Hb/g faeces. For each QUADAS-2 domain, we assessed risk of bias as high in less than 20% of studies. The summary curve showed that FITs had a higher discriminative ability for detecting CRC than gFOBTs (P < 0.001). The summary sensitivity for CRC of gFOBTs, 59% (95% CI 55% to 64%), was significantly lower than FITs at the 10 μg Hb/g cut-off, 89% (95% CI 80% to 95%; P < 0.001) and the 20 μg Hb/g cut-off, 89% (95% CI 85% to 92%; P < 0.001). So, in the hypothetical cohort with 100 participants with CRC, gFOBTs missed 41, while FITs missed 11 (10 μg Hb/g) and 11 (20 μg Hb/g). The summary specificity of gFOBTs was 98% (95% CI 98% to 99%), which was higher than FITs at both 10 μg and 20 μg Hb/g cut-offs: 94% (95% CI 92% to 95%; P < 0.001) and 95% (95% CI 94% to 96%; P < 0.001), respectively. So, out of 9900 participants without CRC, 198 were offered (unnecessary) colonoscopy with gFOBTs compared to 594 (10 μg Hb/g) and 495 (20 μg Hb/g) with FITs. At a specificity of 90% and 95%, FITs had a higher sensitivity than gFOBTs.\nAuthors' conclusions\nFITs are superior to gFOBTs in detecting AN and CRC in average-risk individuals. Specificity of both tests was similar in \"reference standard: all\" studies, whereas specificity was significantly higher for gFOBTs than FITs in \"reference standard: positive\" studies. However, at pre-specified specificities, the sensitivity of FITs was significantly higher than gFOBTs.\n\nGiven the provided abstract, please respond to the following query: \"How reliable are the results of the studies in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The results of the studies are reliable, as the included studies mostly met the quality criteria we specified before commencing the review."
}
] |
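The entry above (query-laysum | 1031) reports hypothetical-cohort arithmetic: each test's summary sensitivity and specificity are applied to 10,000 screening participants with 10% advanced neoplasia (AN) prevalence and 1% CRC prevalence. The short Python sketch below reproduces that arithmetic for the AN figures; it is an illustrative addition rather than part of the source abstract, and the function name and structure are assumptions of this sketch.

```python
# Hypothetical-cohort arithmetic from the advanced-neoplasia (AN) example:
# 10,000 screening participants, 10% AN prevalence.

def missed_and_unnecessary(n, prevalence, sensitivity, specificity):
    """Return (cases missed, unnecessary colonoscopies) for one test."""
    with_condition = n * prevalence
    without_condition = n - with_condition
    missed = with_condition * (1 - sensitivity)            # false negatives
    unnecessary = without_condition * (1 - specificity)    # false positives
    return round(missed), round(unnecessary)

# gFOBT for AN: summary sensitivity 15%, specificity 94%
print(missed_and_unnecessary(10_000, 0.10, 0.15, 0.94))   # (850, 540)

# FIT for AN at the 10 ug Hb/g cut-off: sensitivity 33%, specificity 93%
print(missed_and_unnecessary(10_000, 0.10, 0.33, 0.93))   # (670, 630)

# FIT for AN at the 20 ug Hb/g cut-off: sensitivity 26%, specificity 97%
print(missed_and_unnecessary(10_000, 0.10, 0.26, 0.97))   # (740, 270)
```

Substituting the CRC prevalence (1%) and the CRC sensitivities and specificities quoted in the abstract gives the corresponding CRC figures (for example, 61 missed cancers and 594 unnecessary colonoscopies for gFOBTs).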
query-laysum | 1032 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis (CF) is an inherited progressive life-limiting disease characterised by the build-up of abnormally thick, sticky mucus affecting mostly the lungs, pancreas, and digestive system. Airway clearance techniques (ACTs), traditionally referred to as chest physiotherapy, are recommended as part of a complex treatment programme for people with CF. The aim of an ACTs is to enhance mucociliary clearance and remove viscous secretions from the airways within the lung to prevent distal airway obstruction. This reduces the infective burden and associated inflammatory effects on the airway epithelia.\nThere are a number of recognised ACTs, none of which have shown superiority in improving short-term outcomes related to mucus transport. This systematic review, which has been updated regularly since it was first published in 2000, considers the efficacy of ACTs compared to not performing any ACT in adults and children with CF. It is important to continue to review this evidence, particularly the long-term outcomes, given the recent introduction of highly effective modulator therapies and the improved health outcomes and potential changes to CF management associated with these drugs.\nObjectives\nTo determine the effectiveness and acceptability of airway clearance techniques compared to no airway clearance techniques or cough alone in people with cystic fibrosis.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register, which comprises references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings, to 17 October 2022. We searched ongoing trials registers (Clinicaltrials.gov and the WHO International Clinical Trials Registry Platform) to 7 November 2022.\nSelection criteria\nWe included randomised or quasi-randomised studies that compared airway clearance techniques (chest physiotherapy) with no airway clearance techniques or spontaneous cough alone in people with CF.\nData collection and analysis\nBoth review authors independently assessed study eligibility, extracted data, and assessed the risk of bias of the included studies. We used GRADE methodology to assess the certainty of the evidence.\nMain results\nWe included 11 cross-over studies (153 participants) and one parallel study (41 participants). There were differences between studies in how the interventions were delivered, with several intervention groups combining more than one ACT. One study used autogenic drainage; five used conventional chest physiotherapy; nine used positive expiratory pressure (PEP), with one study varying the water pressure between arms; three studies used oscillating PEP; two used exercise; and two used high-frequency chest wall oscillation (HFCWO). Of the 12 included studies, 10 were single-treatment studies, and two delivered the intervention over two consecutive days (once daily in one study, twice daily in the second). This substantial heterogeneity in the treatment interventions precluded pooling of data for meta-analysis. Blinding of participants, caregivers, and clinicians is impossible in airway clearance studies; we therefore judged all studies at unclear risk of performance bias. 
Lack of information in eight studies made assessment of risk of bias unclear for most other domains.\nWe rated the certainty of evidence as low or very low due to the short-term cross-over trial design, small numbers of participants, and uncertain risk of bias across most or all domains.\nSix studies (84 participants) reported no effect on pulmonary function variables following intervention; but one study (14 participants) reported an improvement in pulmonary function following the intervention in some of the treatment groups. Two studies reported lung clearance index: one (41 participants) found a variable response to treatment with HFCWO, whilst another (15 participants) found no effect on lung clearance index with PEP therapy (low-certainty evidence). Five studies (55 participants) reported that ACTs, including coughing, increased radioactive tracer clearance compared to control, while a further study (eight participants) reported no improvement in radioactive tracer clearance when comparing PEP to control, although coughing was discouraged during the PEP intervention. We rated the certainty of evidence on the effect of ACTs on radioactive tracer clearance as very low.\nFour studies (46 participants) investigated the weight of mucus cleared from the lungs and reported greater secretions during chest physiotherapy compared to a control. One study (18 participants) reported no differences in sputum weight (very low-certainty evidence).\nAuthors' conclusions\nThe evidence from this review shows that ACTs may have short-term effects on increasing mucus transport in people with CF. All included studies had short-term follow-up; consequently, we were unable to draw any conclusions on the long-term effects of ACTs compared to no ACTs in people with CF.\nThe evidence in this review represents the use of airway clearance techniques in a CF population before widespread use of cystic fibrosis transmembrane conductance regulator (CFTR) modulators. Further research is needed to determine the effectiveness and acceptability of airway clearance in those treated with highly effective CFTR modulators.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "What are the effects of using any airway clearance technique compared to not using an airway clearance technique for clearing excess mucus from the lungs of people with cystic fibrosis?"
}
] |
query-laysum | 1033 | [
{
"role": "user",
"content": "Abstract: Background\nThe laryngeal mask airway (LMA) is a safe and effective modality to maintain the airway for general anaesthesia during surgical procedures. The LMA is removed at the end of surgery and anaesthesia, when the patient maintains an adequate respiratory rate and depth. This removal of the LMA can be done either when the patient is deep under anaesthesia (early removal) or only after the patient has regained consciousness (late removal). It is not clear which of these techniques is superior.\nObjectives\nThe objective of this review was to compare the safety of LMA removal in the deep plane of anaesthesia (early removal) versus removal in the awake state (late removal) for participants undergoing general anaesthesia.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014, Issue 8); MEDLINE (1966 to August 2014); EMBASE (1980 to August 2014); LILACS (1982 to August 2014); CINAHL (WebSPIRS; 1984 to August 2014); and ISI Web of Science (1984 to August 2014). We searched for ongoing trials through various trial registration websites. In addition, we searched conference proceedings and reference lists of relevant articles.\nSelection criteria\nWe included randomized controlled trials (RCTs) on adults and children undergoing elective general anaesthesia using the LMA, that compared early removal of the LMA (defined as removal of the LMA in the deep plane of anaesthesia) versus late removal of the LMA (defined as removal of the LMA after the patient is awake).\nData collection and analysis\nTwo authors independently assessed trial quality and extracted data. We contacted study authors for additional information. We used a random-effects model to generate forest plots from the data.\nMain results\nWe identified a total of 9188 citations and included 15 RCTs conducted on 2242 participants in this review. All trials used the LMA Classic in American Society of Anesthesiologists (ASA) physical status I or II for patients undergoing elective general anaesthesia. Children were enrolled in 11 trials and adults in five trials. None of the trials were of high methodological quality. Eight of the 15 studies had adequate generation of random sequence, whereas only one trial had adequate concealment of random sequence. Three trials had blinded the outcome assessor. Thus, the majority of the studies appeared to have a high risk of bias in the study design.\nUsing the GRADE approach, we found low quality evidence that the risk of laryngospasm was similar with early removal of the LMA (3.3%) versus late removal (2.7%): risk ratio (RR) 1.23, 95% confidence interval (CI) 0.74 to 2.03; 11 trials, 1615 participants. The quality of evidence was very low that the risk of coughing was less after early removal (13.9%) than late removal (19.4%): RR 0.52, 95% CI 0.29 to 0.94; 11 trials, 1430 participants. The quality of evidence for the risk of desaturation was also very low; there was no difference between early removal (7.9%) and late removal (10.1%): RR 0.68, 95% CI 0.4 to 1.16; 13 trials, 2037 participants. We found low quality evidence that the risk of airway obstruction was higher with early removal (15.6%) compared to late removal of the LMA (4.6%): RR 2.69, 95% CI 1.32 to 5.5; eight trials, 1313 participants.\nAuthors' conclusions\nThis systematic review suggests that current best evidence comparing early versus late removal of the LMA in participants undergoing general anaesthesia does not demonstrate superiority of either intervention. 
However, the quality of evidence available is either low or very low. There is a paucity of well designed RCTs and a need for large scale RCTs to demonstrate whether early removal or late removal of the LMA is better after general anaesthesia.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The LMA is an airway device used to keep the airway open during general anaesthesia in adults and children. The LMA is removed at the end of the surgical procedure either while the person remains anaesthetized (referred to as early removal) or after the person is fully awake (referred to as late removal). At present it is unclear which of these approaches (early removal versus late removal) is better in terms of the safety of the patient."
}
] |
query-laysum | 1034 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the Cochrane Review previously published in 2018.\nEpilepsy is a common neurological disorder characterised by recurrent seizures. Most people with epilepsy have a good prognosis and their seizures are well controlled by a single antiepileptic drug, but up to 30% develop drug-resistant epilepsy, especially people with focal seizures. In this review, we summarised the evidence from randomised controlled trials (RCTs) of gabapentin, when used as an add-on treatment for drug-resistant focal epilepsy.\nObjectives\nTo evaluate the efficacy and tolerability of gabapentin when used as an add-on treatment for people with drug-resistant focal epilepsy.\nSearch methods\nFor the latest update, we searched the Cochrane Register of Studies (CRS Web) and MEDLINE (Ovid) on 11 August 2020. CRS Web includes randomised or quasi-randomised, controlled trials from PubMed, Embase, ClinicalTrials.gov, the World Health Organization International Clinical Trials Registry Platform (ICTRP), the Cochrane Central Register of Controlled Trials (CENTRAL), and the Specialised Registers of Cochrane Review Groups including Epilepsy. We imposed no language restrictions.\nSelection criteria\nRandomised, placebo-controlled, double-blind, add-on trials of gabapentin in people with drug-resistant focal epilepsy. We also included trials using an active drug control group or comparing different doses of gabapentin.\nData collection and analysis\nTwo review authors independently selected trials for inclusion and extracted the relevant data. We assessed the following outcomes: seizure frequency, seizure freedom, treatment withdrawal (any reason) and adverse effects. Primary analyses were intention-to-treat. We also undertook sensitivity best-case and worst-case analyses. We estimated summary risk ratios (RR) for each outcome and evaluated dose-response in regression models.\nMain results\nWe identified no new studies for this update, therefore, the results and conclusions are unchanged.\nIn the previous update of this review, we combined data from six trials in meta-analyses of 1206 randomised participants. The overall risk ratio (RR) for reduction in seizure frequency of 50% or more compared to placebo was 1.89 (95% confidence interval (CI) 1.40 to 2.55; 6 studies, 1206 participants; moderate-certainty evidence). Dose regression analysis (for trials in adults) showed increasing efficacy with increasing dose, with 25.3% (95% CI 19.3 to 32.3) of people responding to gabapentin 1800 mg compared to 9.7% on placebo, a 15.5% increase in response rate (95% CI 8.5 to 22.5). The RR for treatment withdrawal compared to placebo was 1.05 (95% CI 0.74 to 1.49; 6 trials, 1206 participants; moderate-certainty evidence). Adverse effects were significantly associated with gabapentin compared to placebo. RRs were as follows: ataxia 2.01 (99% CI 0.98 to 4.11; 3 studies, 787 participants; low-certainty evidence), dizziness 2.43 (99% CI 1.44 to 4.12; 6 studies, 1206 participants; moderate-certainty evidence), fatigue 1.95 (99% CI 0.99 to 3.82; 5 studies, 1161 participants; low-certainty evidence) and somnolence 1.93 (99% CI 1.22 to 3.06; 6 studies, 1206 participants; moderate-certainty evidence). There was no evidence of a difference for the adverse effects of headache (RR 0.79, 99% CI 0.46 to 1.35; 6 studies, 1206 participants; moderate-certainty evidence) or nausea (RR 0.95, 99% CI 0.52 to 1.73; 4 trials, 1034 participants; moderate-certainty evidence). 
Overall, the studies were at low to unclear risk of bias, as information was not available for every risk of bias domain. We judged the overall certainty of the evidence (using the GRADE approach) as low to moderate due to potential attrition bias resulting from missing outcome data and imprecise results with wide CIs.\nAuthors' conclusions\nGabapentin has efficacy as an add-on treatment in people with drug-resistant focal epilepsy, and seems to be fairly well-tolerated. However, the trials reviewed were of relatively short duration and provide no evidence for the long-term efficacy of gabapentin beyond a three-month period. The results cannot be extrapolated to monotherapy or to people with other epilepsy types. Further trials are needed to assess the long-term effects of gabapentin, and to compare gabapentin with other add-on drugs.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The results showed that gabapentin effectively reduced seizures when used as an additional treatment. Compared to a placebo, gabapentin was almost twice as likely to reduce seizures by 50% or more. The most common side effects associated with gabapentin were ataxia (poor co-ordination and unsteady gait), dizziness, fatigue and drowsiness."
}
] |
query-laysum | 1035 | [
{
"role": "user",
"content": "Abstract: Background\nSmith-Lemli-Opitz syndrome (SLOS) is a multiple congenital malformations syndrome caused by defective cholesterol biosynthesis. Affected individuals show cholesterol deficiency and accumulation of various precursor molecules, mainly 7-dehydrocholesterol and 8-dehydrocholesterol. There is currently no cure for SLOS, with cholesterol supplementation being primarily a biochemical therapy of limited evidence. However, several anecdotal reports and preclinical studies have highlighted statins as a potential therapy for SLOS.\nObjectives\nTo evaluate the effects of statins, either alone or in combination with other non-statin therapies (e.g. cholesterol, bile acid, or vitamin co-supplementation), compared to cholesterol supplementation alone or in combination with other non-statin therapies (e.g. bile acid or vitamin supplementation) on several important outcomes including overall survival, neurobehavioral features, and adverse effects in individuals with SLOS.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, five other databases and three trials registers on 15 February 2022, together with reference checking, citation searching and contact with study authors to identify additional studies.\nSelection criteria\nRandomized controlled trials (RCTs) and quasi-RCTs with parallel or cross-over designs, and non-randomized studies of interventions (NRSIs) including non-randomized trials, cohort studies, and controlled before-and-after studies, were eligible for inclusion in this review if they met our prespecified inclusion criteria, i.e. involved human participants with biochemically or genetically diagnosed SLOS receiving statin therapy or cholesterol supplementation, or both.\nData collection and analysis\nTwo authors screened titles and abstracts and subsequently full-texts for all potentially-relevant references. Both authors independently extracted relevant data from included studies and assessed the risks of bias. We analyzed the data extracted from the included NRSIs and cohort studies separately from the data extracted from the single included RCT. We used a random-effects model to account for the inherent heterogeneity and methodological variation between these different study designs. We used GRADE to assess the certainty of evidence.\nMain results\nWe included six studies (61 participants with SLOS); one RCT (N = 18), three prospective NRSIs (N = 20), and two retrospective NRSIs (N = 22). Five studies included only children, and two limited their participant inclusion by disease severity. Overall, there were nearly twice as many males as females. All six studies compared add-on statin therapy to cholesterol supplementation alone. However, the dosages, formulations, and durations of treatment were highly variable across studies.\nWe judged the RCT as having a high risk of bias due to missing data and selective reporting. All included NRSIs had a serious or critical overall risk of bias assessed by the Risk Of Bias In Non-randomized Studies of Interventions tool (ROBINS-I).\nNone of the included studies evaluated survival or reported quality of life (QoL). Only the included RCT formally assessed changes in the neurobehavioral manifestations of SLOS, and we are uncertain whether statin therapy improves this outcome (very low-certainty evidence). We are also uncertain whether the adverse events reported in the RCT were statin-related (very low-certainty evidence). 
In contrast, the adverse events reported in the NRSIs seem to be possibly due to statin therapy (risk ratio 13.00, 95% confidence interval 1.85 to 91.49; P = 0.01; low-certainty evidence), with only one of the NRSIs retrospectively mentioning changes in the irritability of two of their participants. We are uncertain whether statins affect growth based on the RCT or NRSI results (very low-certainty evidence). The RCT showed that statins may make little or no difference to plasma biomarker levels (low-certainty evidence), while we are uncertain of their effects on such parameters in the NRSIs (very low-certainty evidence).\nAuthors' conclusions\nCurrently, there is no evidence on the potential effects of statin therapy in people with SLOS regarding survival or QoL, and very limited evidence on the effects on neurobehavioral manifestations. Likewise, current evidence is insufficient and of very low certainty regarding the effects of statins on growth parameters in children with SLOS and plasma or cerebrospinal fluid (CSF) levels of various disease biomarkers. Despite these limitations, current evidence seemingly suggests that statins may increase the risk of adverse reactions in individuals with SLOS receiving statins compared to those who are not. Given the insufficient evidence on potential benefits of statins in individuals with SLOS, and their potential for causing adverse reactions, anyone considering this therapy should take these findings into consideration. Future studies should address the highlighted gaps in evidence on the use of statins in individuals with SLOS by collecting prospective data on survival and performing serial standardized assessments of neurobehavioral features, QoL, anthropometric measures, and plasma and CSF biomarker levels after statin introduction. Future studies should also attempt to use consistent dosages, formulations and durations of cholesterol and statin therapy.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found no evidence for the effects of combined statin and cholesterol supplementation, compared to cholesterol supplementation alone, on survival or quality of life in people with SLOS. We are also not sure if combined statin and cholesterol therapy has positive effects on neurobehavioral manifestations or growth in individuals with SLOS compared with cholesterol therapy alone, as we have low confidence in the evidence. We think people with SLOS receiving statin therapy in addition to cholesterol supplementation are more likely to experience adverse events that are usually associated with using statins, compared to those who are only receiving cholesterol supplementation. Finally, we are uncertain about the effects of statin therapy on the levels of various biomarkers usually measured in the blood of people with SLOS."
}
] |
query-laysum | 1036 | [
{
"role": "user",
"content": "Abstract: Background\nBronchiolitis is one of the most frequent causes of respiratory failure in infants; some infants will require intensive care and mechanical ventilation. There is lack of evidence regarding effective treatment for bronchiolitis other than supportive care. Abnormalities of surfactant quantity or quality (or both) have been observed in severe cases of bronchiolitis. Exogenous surfactant administration appears to favourably change the haemodynamics of the lungs and may be a potentially promising therapy for severe bronchiolitis. This is an update of a review published in Issue 9, 2012. We did not identify any new studies for inclusion, and our conclusions remain unchanged.\nObjectives\nTo evaluate the efficacy of exogenous surfactant administration (i.e. intratracheal administration of surfactant of any type (whether animal-derived or synthetic), at any dose and at any time after start of ventilation) compared to placebo, no intervention or standard care in reducing mortality and the duration of ventilation in infants and children with bronchiolitis requiring mechanical ventilation.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Studies (CENTRAL; 2015, Issue 5) which contains the Cochrane Acute Respiratory Infections Group's Specialised Register; MEDLINE (1948 to June week 3, 2015); EMBASE (1974 to June 2015); CINAHL (1982 to June 2015); LILACS (1985 to June 2015); and Web of Science (1985 to June 2015).\nSelection criteria\nWe considered prospective, randomised controlled trials (RCTs) and quasi-RCTs evaluating the effect of exogenous surfactant in infants and children with bronchiolitis requiring mechanical ventilation.\nData collection and analysis\nTwo review authors selected studies independently. We extracted the data using a predefined proforma, independently analysed the data, and performed meta-analyses.\nMain results\nWe included three small RCTs enrolling 79 participants. Two trials did not use a placebo in the control arms and the third trial used air placebo. Two included studies reported no mortality. We judged all three of the included studies to be at low risk or unclear risk across all risk of bias categories; we did not judge any of the studies to be at high risk of bias in any category. Our pooled analysis of the three trials revealed that duration of mechanical ventilation was not significantly different between the groups (mean difference (MD) -63.04, 95% confidence interval (CI) -130.43 to 4.35 hours) but duration of intensive care unit (ICU) stay was less in the surfactant group compared to the control group: MD -3.31, 95% CI -6.38 to -0.25 days. After excluding one trial which produced significant heterogeneity, the duration of mechanical ventilation and duration of ICU stay were significantly lower in the surfactant group compared to the control group: MD -28.99, 95% CI -40.10 to -17.87 hours; and MD -1.81, 95% CI -2.42 to -1.19 days, respectively. Use of surfactant had favourable effects on oxygenation and CO2 elimination. No adverse effects and no complications were observed in any of the three included studies. The level of evidence for duration of mechanical ventilation, duration of intensive care unit stay, oxygenation parameters, and carbon dioxide parameters was of moderate quality.\nAuthors' conclusions\nUse of surfactant had favourable effects on duration of mechanical ventilation, duration of ICU stay, oxygenation, and CO2 elimination. 
However, the studies are few and small (n = 79) so available evidence is insufficient to establish the effectiveness of surfactant therapy for bronchiolitis in critically ill infants who require mechanical ventilation. There is a need for larger trials with adequate power and a cost-effectiveness analysis to evaluate the effectiveness of exogenous surfactant therapy for infants with bronchiolitis who require intensive care management.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The level of evidence for duration of mechanical ventilation, duration of intensive care unit stay, oxygenation parameters, and carbon dioxide parameters was of moderate quality. The limited number of studies with small numbers of participants was the reason for moderate quality, and are limitations of this review. There is a need for larger trials to establish any benefits of surfactant for bronchiolitis in critically ill infants and children."
}
] |
query-laysum | 1037 | [
{
"role": "user",
"content": "Abstract: Background\nThis an update of a Cochrane Review.\nParaquat is a widely used herbicide, but is also a lethal poison. In some low- and middle-income countries (LMICs) paraquat is commonly available and inexpensive, making poisoning prevention difficult. Most of the people poisoned by paraquat have taken it as a means of self-poisoning.\nStandard treatment for paraquat poisoning prevents further absorption and reduces the load of paraquat in the blood through haemoperfusion or haemodialysis. The effectiveness of standard treatments is extremely limited.\nThe immune system plays an important role in exacerbating paraquat-induced lung fibrosis. Immunosuppressive treatment using glucocorticoid and cyclophosphamide in combination has been developed and studied as an intervention for paraquat poisoning.\nObjectives\nTo assess the effects of glucocorticoid with cyclophosphamide for moderate to severe oral paraquat poisoning.\nSearch methods\nThe most recent searches were run in September 2020. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Injuries Trials Register), Ovid MEDLINE(R), Ovid MEDLINE In-Process & Other Non-Indexed Citations, Ovid MEDLINE Daily and Ovid OLDMEDLINE, Embase Classic + Embase (Ovid), ISI WOS (SCI-EXPANDED, SSCI, CPCI-S, and CPSI-SSH), and trials registries. We also searched the following three resources: China National Knowledge Infrastructure database (CNKI 数据库); Wanfang Data (万方数据库); and VIP (维普数据库) on 12 November 2020. We examined the reference lists of included studies and review papers.\nSelection criteria\nWe included randomised controlled trials (RCTs). For this update, in accordance with Cochrane Injuries' Group policy (2015), we included only prospectively registered RCTs for trials published after 2010. We included trials which assessed the effects of glucocorticoid with cyclophosphamide delivered in combination. Eligible comparators were standard care (with or without a placebo), or any other therapy in addition to standard care. Outcomes of interest included mortality and infections.\nData collection and analysis\nWe calculated the mortality risk ratio (RR) and 95% confidence interval (CI). Where possible, we summarised data for all-cause mortality at relevant time periods (from hospital discharge to three months after discharge) in meta-analysis, using a fixed-effect model. We conducted sensitivity analyses based on factors including whether participants were assessed at baseline for plasma paraquat levels. We also reported data on infections within one week after initiation of treatment.\nMain results\nWe included four trials with a total of 463 participants. The included studies were conducted in Taiwan (Republic of China), Iran, and Sri Lanka. Most participants were male. The mean age of participants was 28 years.\nWe judged two of the four included studies, including the largest and most recently conducted study (n = 299), to be at low risk of bias for key domains including sequence generation. We assessed one study to be at high risk of selection bias and another at unclear risk, since allocation concealment was either not mentioned in the trial report or explicitly not undertaken. We assessed three of the four studies to be at unclear risk of selective reporting, as no protocols could be identified. 
An important source of heterogeneity amongst the included studies was the method of assessment of participants' baseline severity using analysis of plasma levels (two studies employed this method, whilst the other two did not).\nNo studies assessed the outcome of mortality at 30 days following ingestion of paraquat.\nLow-certainty evidence from two studies indicates that glucocorticoids with cyclophosphamide in addition to standard care may slightly reduce the risk of death in hospital compared to standard care alone ((RR 0.82, 95% CI 0.68 to 0.99; participants = 322); results come from sensitivity analysis excluding studies not assessing plasma at baseline). However, we have limited confidence in this finding as heterogeneity was high (I2 = 77%) and studies varied in terms of size and comparators. A single large study provided data showing that there may be little or no effect of treatment at three months post discharge from hospital (RR 0.98, 95% CI 0.85 to 1.13; 1 study, 293 participants; low-certainty evidence); however, analysis of long-term results amongst participants whose injuries arose from self-poisoning must be interpreted with caution.\nWe remain uncertain of the effect of glucocorticoids with cyclophosphamide on infection within one week after initiation of the treatment; this outcome was assessed by two small studies only (31 participants, very low-certainty evidence) that considered leukopenia as a proxy or risk factor for infection. Neither study reported infections in any participants.\nAuthors' conclusions\nLow-certainty evidence suggests that glucocorticoids with cyclophosphamide in addition to standard care may slightly reduce mortality in hospitalised people with oral paraquat poisoning. However, we have limited confidence in this finding because of substantial heterogeneity and concerns about imprecision. Glucocorticoids with cyclophosphamide in addition to standard care may have little or no effect on mortality at three months after hospital discharge. We are uncertain whether glucocorticoid with cyclophosphamide puts patients at an increased risk of infection due to the limited evidence available for this outcome. Future research should be prospectively registered and CONSORT-compliant. Investigators should attempt to ensure an adequate sample size, screen participants for inclusion rigorously, and seek long-term follow-up of participants. Investigators may wish to research the effects of glucocorticoid in combination with other treatments.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review updates our previous review on this subject. The evidence is up to date to September 2020."
}
] |
query-laysum | 1038 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis (CF) is an inherited condition causing disease most noticeably in the lungs, digestive tract and pancreas. People with CF often have malnutrition and growth delay. Adequate nutritional supplementation does not improve growth optimally and hence an anabolic agent, recombinant human growth hormone (rhGH), has been proposed as a potential intervention. This is an update of a previously published review.\nObjectives\nTo evaluate the effectiveness and safety of rhGH therapy in improving lung function, quality of life and clinical status of children and young adults with CF.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Trials Register comprising references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. Date of latest search: 12 January 2021.\nWe also searched ongoing trials registers:\nclinicaltrials.gov from the United States - date of latest search 19 Jun 2021;\nWHO International Clinical Trials Registry Platform (ICTRP) - date of latest search 05 March 2018 (not available in 2021).\nWe conducted a search of relevant endocrine journals and proceedings of the Endocrinology Society meetings using Web of Science, Scopus and Proceedings First. Date of latest search: 21 Jun 2021.\nSelection criteria\nRandomised and quasi-randomised controlled trials of all preparations of rhGH compared to either no treatment, or placebo, or each other at any dose (high-dose and low-dose) or route and for any duration, in children or young adults (aged up to 25 years) diagnosed with CF (by sweat test or genetic testing).\nData collection and analysis\nTwo authors independently screened papers, extracted trial details and assessed their risk of bias. We assessed the quality of the evidence using the GRADE system.\nMain results\nWe included eight trials (291 participants, aged between five and 23 years) in the current version of the review. Seven trials compared standard-dose rhGH (approximately 0.3 mg/kg/week) to no treatment and one three-arm trial (63 participants) compared placebo, standard-dose rhGH (0.3 mg/kg/week) and high-dose rhGH (0.5 mg/kg/week). Six trials lasted for one year and two trials for six months. We found that rhGH treatment may improve some of the pulmonary function outcomes, but there was no difference between standard and high-dose levels (low-certainty evidence, limited by inconsistency across the trials, small number of participants and short duration of therapy). The trials show evidence of improvement in the anthropometric parameters (height, weight and lean body mass) with rhGH therapy, again no differences between dose levels. We found improvement in height for all comparisons (very low- to low-certainty evidence), but improvements in weight and lean body mass were only reported for standard-dose rhGH versus no treatment (very low-certainty evidence). There is some evidence indicating a change in the level of fasting blood glucose with rhGH therapy, however, it did not cross the clinical threshold for diagnosis of diabetes in the trials of short duration (low-certainty evidence). There is low- to very low-certainty evidence for improvement of pulmonary exacerbations with no further significant adverse effects, but this is limited by the short duration of trials and the small number of participants. 
One small trial provided inconsistent evidence on improvement in quality of life (very low-certainty evidence). There is limited evidence from three trials in improvements in exercise capacity (low-certainty evidence). None of the trials have systematically compared the expense of therapy on overall healthcare costs.\nAuthors' conclusions\nWhen compared with no treatment, rhGH therapy is effective in improving the intermediate outcomes in height, weight and lean body mass. Some measures of pulmonary function showed moderate improvement, but no consistent benefit was seen across all trials. The significant change in blood glucose levels, although not causing diabetes, emphasizes the need for careful monitoring of this adverse effect with therapy in a population predisposed to CF-related diabetes. No significant changes in quality of life, clinical status or side-effects were observed in this review due to the small number of participants. Long-term, well-designed randomised controlled trials of rhGH in individuals with CF are required prior to routine clinical use of rhGH in CF.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We did not have enough information to decide if overall the trials were biased in a way that might affect the results. All the measured outcomes were clearly reported in the trials, but the trials were small and did not have enough participants to show a difference that may not have been due to chance. We also had concerns that outcomes that were based on personal judgment, such as quality of life scores, might be affected because those taking part in seven of the trials were able to tell which group they were in."
}
] |
query-laysum | 1039 | [
{
"role": "user",
"content": "Abstract: Background\nPulmonary embolism (PE) can occur when a thrombus (blood clot) travels through the veins and lodges in the arteries of the lungs, producing an obstruction. People who are thought to be at risk include those with cancer, people who have had a recent surgical procedure or have experienced long periods of immobilisation and women who are pregnant. The clinical presentation can vary, but unexplained respiratory symptoms such as difficulty breathing, chest pain and an increased respiratory rate are common.\nD-dimers are fragments of protein released into the circulation when a blood clot breaks down as a result of normal body processes or with use of prescribed fibrinolytic medication. The D-dimer test is a laboratory assay currently used to rule out the presence of high D-dimer plasma levels and, by association, venous thromboembolism (VTE). D-dimer tests are rapid, simple and inexpensive and can prevent the high costs associated with expensive diagnostic tests.\nObjectives\nTo investigate the ability of the D-dimer test to rule out a diagnosis of acute PE in patients treated in hospital outpatient and accident and emergency (A&E) settings who have had a pre-test probability (PTP) of PE determined according to a clinical prediction rule (CPR), by estimating the accuracy of the test according to estimates of sensitivity and specificity. The review focuses on those patients who are not already established on anticoagulation at the time of study recruitment.\nSearch methods\nWe searched 13 databases from conception until December 2013. We cross-checked the reference lists of relevant studies.\nSelection criteria\nTwo review authors independently applied exclusion criteria to full papers and resolved disagreements by discussion.\nWe included cross-sectional studies of D-dimer in which ventilation/perfusion (V/Q) scintigraphy, computerised tomography pulmonary angiography (CTPA), selective pulmonary angiography and magnetic resonance pulmonary angiography (MRPA) were used as the reference standard.\n• Participants: Adults who were managed in hospital outpatient and A&E settings and were suspected of acute PE were eligible for inclusion in the review if they had received a pre-test probability score based on a CPR.\n• Index tests: quantitative, semi quantitative and qualitative D-dimer tests.\n• Target condition: acute symptomatic PE.\n• Reference standards: We included studies that used pulmonary angiography, V/Q scintigraphy, CTPA and MRPA as reference standard tests.\nData collection and analysis\nTwo review authors independently extracted data and assessed quality using Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2). We resolved disagreements by discussion. Review authors extracted patient-level data when available to populate 2 × 2 contingency tables (true-positives (TPs), true-negatives (TNs), false-positives (FPs) and false-negatives (FNs)).\nMain results\nWe included four studies in the review (n = 1585 patients). None of the studies were at high risk of bias in any of the QUADAS-2 domains, but some uncertainty surrounded the validity of studies in some domains for which the risk of bias was uncertain. D-dimer assays demonstrated high sensitivity in all four studies, but with high levels of false-positive results, especially among those over the age of 65 years. 
Estimates of sensitivity ranged from 80% to 100%, and estimates of specificity from 23% to 63%.\nAuthors' conclusions\nA negative D-dimer test is valuable in ruling out PE in patients who present to the A&E setting with a low PTP. Evidence from one study suggests that this test may have less utility in older populations, but no empirical evidence was available to support an increase in the diagnostic threshold of interpretation of D-dimer results for those over the age of 65 years.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Four studies met our criteria, and data from 1585 patients were available. We found evidence that negative (disease absent) D-dimer tests are very good at ruling out PE and identifying people without PE, but high numbers of false-positive test results suggest that people with a raised D-dimer may not in fact have a PE; therefore, a positive result needs to be followed by imaging. In one study, false-positives were more common among people older than 65 years of age."
}
] |
query-laysum | 1040 | [
{
"role": "user",
"content": "Abstract: Background\nPsoriasis is a chronic skin disease that affects approximately two per cent of the general population. Plaque psoriasis is the most common form: it usually appears as raised, red patches of inflamed skin, covered with silvery white scales. The patches often occur in a symmetrical pattern. Guttate psoriasis is a particular form of psoriasis with widespread, small erythematosquamous lesions. Streptococcal infection is suspected to be a triggering factor for the onset of guttate psoriasis, and flare-up of chronic plaque psoriasis. The previous Cochrane Review on this topic was published in 2000; it required an update because antistreptococcal treatment continues to be used to treat psoriasis, especially for the acute form of guttate psoriasis.\nObjectives\nTo assess the effects of antistreptococcal interventions for guttate and chronic plaque psoriasis.\nSearch methods\nWe searched Cochrane Skin Specialised Register, Cochrane Register of Studies Online, CENTRAL, MEDLINE, Embase, LILACS, and five trials registers (January 2019). We checked the reference lists of included and excluded studies and searched conference proceedings from the American Academy of Dermatology, Society for Investigative Dermatology, and European Academy of Dermatology and Venereology.\nSelection criteria\nWe considered randomised controlled trials (RCTs) assessing antistreptococcal interventions (tonsillectomy or systemic antibiotic treatment) in people with clinically diagnosed acute guttate and chronic plaque psoriasis compared with placebo, no intervention, or each other.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Primary outcome measures were: 1) time-to-resolution; achieving clear or almost clear skin (Physician Global Assessment (PGA) 0 or 1 or Psoriasis Area and Severity Index (PASI) 90 or 100); 2) proportion of participants with adverse effects and severe adverse effects. Secondary outcomes were: 1) proportion of participants achieving clear or almost clear skin; 2) proportion of participants achieving PASI 75 or PGA 1 to 2; 3) risk of having at least one relapse at long-term follow-up. Short-term assessment was defined as within eight weeks of the start of treatment; long-term was at least one year after the start of treatment.\nMain results\nWe included five trials (162 randomised participants); three were conducted in a hospital dermatology department. One study declared funding by a pharmaceutical company. Participants' ages ranged from 12 to 77 years; only two participants were younger than 15 years. Mean PASI score at baseline varied from 5.7 (i.e. mild) to 23 (i.e. severe) in four studies. Twenty-three of 162 participants had streptococcus-positive throat swab culture. We did not perform a meta-analysis due to heterogeneity of participants' characteristics and interventions.\nNone of the trials measured our efficacy primary outcome, time-to-resolution, or the secondary outcome, risk of having at least one relapse at long-term follow-up.\nWe rated the quality of the results as very low-quality evidence, due to high risk of bias (absence of blinding of participants and caregivers, and high risk of outcome reporting bias) and imprecision (single study data with a low number of events). 
Hence, we are very uncertain about the results presented.\nGuttate psoriasis\nOne three-armed trial (N = 43) assessed penicillin (50,000 international units (IU)/kg/day in three doses) versus erythromycin (250 mg four times per day) versus no treatment (treatment for 14 days, with six-week follow-up from start of treatment). Adverse events and the proportion of participants achieving clear or almost clear skin were not measured.\nOne trial (N = 20) assessed penicillin (1.6 MU (million units) intramuscularly once a day) versus no treatment (six weeks of treatment, with eight-week follow-up from start of treatment). At six-week (short-term) follow-up, no adverse events were observed in either group, and there was no statistically significant difference between the two groups in the proportion of participants with clear or almost clear skin (risk ratio (RR) 2.00, 95% confidence interval (CI) 0.68 to 5.85).\nOne trial (N = 20) assessed rifampicin (300 mg twice daily) versus placebo (14-day treatment duration; six-week follow-up from start of treatment); none of the review outcomes were measured.\nThese trials did not measure the proportion of participants achieving PASI 75 or PGA 1 to 2.\nChronic plaque psoriasis\nOne trial (N = 50) assessed long-term azithromycin treatment (500 mg daily dose) versus vitamin C. Adverse events were reported in the azithromycin group (10 out of 30 had nausea and mild abdominal upset), but not in the vitamin C group. The proportion of participants who achieved clear or almost clear skin was not measured. In the azithromycin group, 18/30 versus 0/20 participants in the vitamin C group reached PASI 75 at the end of 48 weeks of treatment (RR 25.06, 95% CI 1.60 to 393.59).\nOne trial (N = 29) assessed tonsillectomy versus no treatment, with 24-month follow-up after surgery. One participant in the tonsillectomy group had minor bleeding. At eight-week follow-up, 1/15 in the tonsillectomy group, and 0/14 in the no treatment group achieved PASI 90; and 3/15 participants in the tonsillectomy group, and 0/14 in the no treatment group achieved PASI 75 (RR 6.56, 95% CI 0.37 to 116.7).\nAuthors' conclusions\nWe found only five trials (N = 162), which assessed the effects of five comparisons (systemic antibiotic treatment (penicillin, azithromycin) or tonsillectomy). Two comparisons (erythromycin compared to no treatment, and rifampicin compared to placebo) did not measure any of the outcomes of interest. There was very low-quality evidence for the outcomes that were measured, Therefore, we are uncertain of both the efficacy and safety of antistreptococcal interventions for guttate and chronic plaque psoriasis.\nThe included trials were at unclear or high risk of bias and involved only a small number of unrepresentative participants, with limited measurement of our outcomes of interest. The studies did not allow investigation into the influence of Streptococcal infection, and a key intervention (amoxicillin) was not assessed.\nFurther trials assessing the efficacy and tolerance of penicillin V or amoxicillin are needed in children and young adults with guttate psoriasis.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "These results are backed by very low-quality evidence, so we are not certain of their accuracy. Each result is based on only one study.\nNo studies measured our main outcome of interest, the time taken for the skin to be clear or almost clear of lesions, or the risk of relapsing at least once during long-term follow-up.\nNo side effects were seen when penicillin was compared with no treatment in people with guttate psoriasis. Side effects were not measured for the comparisons of rifampicin versus placebo, or erythromycin versus no treatment.\nIn participants with chronic plaque psoriasis, one trial assessed azithromycin versus vitamin C, and 10 participants in the azithromycin group complained of nausea or mild stomach upset. A trial of tonsillectomy versus no treatment reported one case of minor bleeding in the tonsillectomy group.\nTwo studies in participants with chronic plaque psoriasis measured the number of participants achieving a 75% reduction on the Psoriasis Area and Severity Index (PASI 75). In one, 18/30 participants in the azithromycin group reached PASI 75 versus none in the vitamin C group. In the other, 3/15 in the tonsillectomy group reached PASI 75 versus none in the no treatment group. The guttate psoriasis trials did not assess this outcome.\nWe are uncertain whether the number of participants with guttate psoriasis achieving clear or almost clear skin differs between those given penicillin and those receiving no treatment. Only one participant with chronic plaque psoriasis achieved almost clear skin in the tonsillectomy group compared to none in the no treatment group. The other three trials did not measure this outcome."
}
] |
query-laysum | 1041 | [
{
"role": "user",
"content": "Abstract: Background\nIn epidemics of highly infectious diseases, such as Ebola, severe acute respiratory syndrome (SARS), or coronavirus (COVID-19), healthcare workers (HCW) are at much greater risk of infection than the general population, due to their contact with patients' contaminated body fluids. Personal protective equipment (PPE) can reduce the risk by covering exposed body parts. It is unclear which type of PPE protects best, what is the best way to put PPE on (i.e. donning) or to remove PPE (i.e. doffing), and how to train HCWs to use PPE as instructed.\nObjectives\nTo evaluate which type of full-body PPE and which method of donning or doffing PPE have the least risk of contamination or infection for HCW, and which training methods increase compliance with PPE protocols.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and CINAHL to 20 March 2020.\nSelection criteria\nWe included all controlled studies that evaluated the effect of full-body PPE used by HCW exposed to highly infectious diseases, on the risk of infection, contamination, or noncompliance with protocols. We also included studies that compared the effect of various ways of donning or doffing PPE, and the effects of training on the same outcomes.\nData collection and analysis\nTwo review authors independently selected studies, extracted data and assessed the risk of bias in included trials. We conducted random-effects meta-analyses were appropriate.\nMain results\nEarlier versions of this review were published in 2016 and 2019. In this update, we included 24 studies with 2278 participants, of which 14 were randomised controlled trials (RCT), one was a quasi-RCT and nine had a non-randomised design.\nEight studies compared types of PPE. Six studies evaluated adapted PPE. Eight studies compared donning and doffing processes and three studies evaluated types of training. Eighteen studies used simulated exposure with fluorescent markers or harmless microbes. In simulation studies, median contamination rates were 25% for the intervention and 67% for the control groups.\nEvidence for all outcomes is of very low certainty unless otherwise stated because it is based on one or two studies, the indirectness of the evidence in simulation studies and because of risk of bias.\nTypes of PPE\nThe use of a powered, air-purifying respirator with coverall may protect against the risk of contamination better than a N95 mask and gown (risk ratio (RR) 0.27, 95% confidence interval (CI) 0.17 to 0.43) but was more difficult to don (non-compliance: RR 7.5, 95% CI 1.81 to 31.1). In one RCT (59 participants) coveralls were more difficult to doff than isolation gowns (very low-certainty evidence). Gowns may protect better against contamination than aprons (small patches: mean difference (MD) −10.28, 95% CI −14.77 to −5.79). PPE made of more breathable material may lead to a similar number of spots on the trunk (MD 1.60, 95% CI −0.15 to 3.35) compared to more water-repellent material but may have greater user satisfaction (MD −0.46, 95% CI −0.84 to −0.08, scale of 1 to 5). 
According to three studies that tested more recently introduced full-body PPE ensembles, there may be no difference in contamination.\nModified PPE versus standard PPE\nThe following modifications to PPE design may lead to less contamination compared to standard PPE: sealed gown and glove combination (RR 0.27, 95% CI 0.09 to 0.78), a better fitting gown around the neck, wrists and hands (RR 0.08, 95% CI 0.01 to 0.55), a better cover of the gown-wrist interface (RR 0.45, 95% CI 0.26 to 0.78, low-certainty evidence), added tabs to grab to facilitate doffing of masks (RR 0.33, 95% CI 0.14 to 0.80) or gloves (RR 0.22, 95% CI 0.15 to 0.31).\nDonning and doffing\nUsing Centers for Disease Control and Prevention (CDC) recommendations for doffing may lead to less contamination compared to no guidance (small patches: MD −5.44, 95% CI −7.43 to −3.45). One-step removal of gloves and gown may lead to less bacterial contamination (RR 0.20, 95% CI 0.05 to 0.77) but not to less fluorescent contamination (RR 0.98, 95% CI 0.75 to 1.28) than separate removal. Double-gloving may lead to less viral or bacterial contamination compared to single gloving (RR 0.34, 95% CI 0.17 to 0.66) but not to less fluorescent contamination (RR 0.98, 95% CI 0.75 to 1.28). Additional spoken instruction may lead to fewer errors in doffing (MD −0.9, 95% CI −1.4 to −0.4) and to fewer contamination spots (MD −5, 95% CI −8.08 to −1.92). Extra sanitation of gloves before doffing with quaternary ammonium or bleach may decrease contamination, but extra sanitation with alcohol-based hand rub may not.\nTraining\nThe use of additional computer simulation may lead to fewer errors in doffing (MD −1.2, 95% CI −1.6 to −0.7). A video lecture on donning PPE may lead to better skills scores (MD 30.70, 95% CI 20.14 to 41.26) than a traditional lecture. Face-to-face instruction may reduce noncompliance with doffing guidance more (odds ratio 0.45, 95% CI 0.21 to 0.98) than providing folders or videos only.\nAuthors' conclusions\nWe found low- to very low-certainty evidence that covering more parts of the body leads to better protection but usually comes at the cost of more difficult donning or doffing and less user comfort. More breathable types of PPE may lead to similar contamination but may have greater user satisfaction. Modifications to PPE design, such as tabs to grab, may decrease the risk of contamination. For donning and doffing procedures, following CDC doffing guidance, a one-step glove and gown removal, double-gloving, spoken instructions during doffing, and using glove disinfection may reduce contamination and increase compliance. Face-to-face training in PPE use may reduce errors more than folder-based training.\nWe still need RCTs of training with long-term follow-up. We need simulation studies with more participants to find out which combinations of PPE and which doffing procedure protects best. Consensus on simulation of exposure and assessment of outcome is urgently needed. We also need more real-life evidence. Therefore, the use of PPE by HCW exposed to highly infectious diseases should be registered and the HCW should be prospectively followed for their risk of infection.\n\nGiven the provided abstract, please respond to the following query: \"What do we still need to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There were no studies that investigated goggles or face shields. We are unclear about the best way to remove PPE after use and the best type of training in the long term.\nHospitals need to organise more studies, and researchers need to agree on the best way to simulate exposure to a virus.\nIn future, simulation studies need to have at least 60 participants each, and use exposure to a harmless virus to assess which type and combination of PPE is most protective.\nIt would be helpful if hospitals could register and record the type of PPE used by their workers to provide urgently needed, real-life information."
}
] |
query-laysum | 1042 | [
{
"role": "user",
"content": "Abstract: Background\nEndophthalmitis refers to severe infection within the eye that involves the aqueous humor or vitreous humor, or both, and that threatens vision. Most cases of endophthalmitis are exogenous (i.e. due to inoculation of organisms from an outside source), and most exogenous endophthalmitis is acute and occurs after an intraocular procedure. The mainstay of treatment is emergent administration of broad-spectrum intravitreous antibiotics. Due to their anti-inflammatory effects, steroids in conjunction with antibiotics have been proposed as being beneficial in endophthalmitis management.\nObjectives\nTo assess the effects of antibiotics combined with steroids versus antibiotics alone for the treatment of acute endophthalmitis following intraocular surgery or intravitreous injection.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2021, Issue 8), MEDLINE Ovid (1946 to August 2021), Embase Ovid (1980 to August 2021), LILACS (Latin American and Caribbean Health Sciences Literature database) (1982 to August 2021), the ISRCTN registry; searched August 2021, ClinicalTrials.gov; searched August 2021, and the WHO International Clinical Trials Registry Platform; searched August 2021. We did not use any date or language restrictions in the electronic searches for trials.\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing the effectiveness of adjunctive steroids with antibiotics alone in the management of acute, clinically diagnosed endophthalmitis following intraocular surgery or intravitreous injection. We excluded trials with participants with endogenous endophthalmitis unless outcomes were reported by source of infection. We imposed no restrictions on the method or order of administration, dose, frequency, or duration of antibiotics and steroids.\nData collection and analysis\nWe used standard Cochrane methodology, and graded the certainty of the body of evidence for six outcomes using the GRADE classification.\nMain results\nWe included four RCTs with a total of 264 eyes of 264 participants in this review update. The studies were conducted in South Africa, India, and the Netherlands. All studies used intravitreous dexamethasone for adjunctive steroid therapy and a combination of two intravitreous antibiotics that provided gram-positive and gram-negative coverage for the antibiotic therapy. We judged two trials to be at overall low risk of bias, and the other two studies to be at overall unclear risk of bias due to lack of reporting of study methods. Only one study was registered in a clinical trial register.\nWhile none of the included studies reported the primary outcome of complete resolution of endophthalmitis as defined in our protocol, one study reported combined anatomical and functional success (i.e. proportion of participants with intraocular pressure of at least 5 mmHg and visual acuity of at least 6/120). Very low certainty evidence suggested no difference in combined success when comparing adjunctive steroid to antibiotics alone (risk ratio (RR) 1.08, 95% confidence interval (CI) 0.80 to 1.45; 32 participants). 
Low certainty evidence from two studies suggested that adjunctive dexamethasone may result in having a good visual outcome (Snellen visual acuity 6/6 to 6/18) at 3 months compared with antibiotics alone (RR 1.95, 95% CI 1.05 to 3.60; 60 participants); however, the evidence was less conclusive at 12 months (RR 1.12, 95% CI 0.92 to 1.37; 2 studies; 195 participants; low certainty evidence). Investigators of one study reported improvement in visual acuity, but we could not estimate the effect of adjunctive steroid therapy because the study investigators did not provide any estimates of precision. Only one study examined intraocular pressure (IOP). The evidence suggests that adjunctive dexamethasone may reduce IOP slightly after 12 months of interventions (mean difference −1.90, 95% CI −3.78 to 0.07; 1 study; 167 participants; low certainty evidence). Three studies reported adverse events (retinal detachment, hypotony, proliferative vitreoretinopathy, seclusion of pupil, floaters, and pucker). The total numbers of adverse events were 14 out of 111 (12.6%) for those who received dexamethasone versus 12 out of 116 (10.3%) for those who did not. We could only perform a pooled analysis for the occurrence of retinal detachment: any difference between the two treatment groups was uncertain (RR 1.41, 95% CI 0.53 to 3.74; 227 participants; low certainty evidence). No study reported cost-related outcomes.\nAuthors' conclusions\nThe currently available evidence on the effectiveness of adjunctive steroid therapy versus antibiotics alone in the management of acute endophthalmitis after intraocular surgery is inadequate. We found no studies that had enrolled cases of acute endophthalmitis following intravitreous injection. A combined analysis of two studies suggests that use of adjunctive steroids may provide a higher chance of having a good visual outcome at three months than not using adjunctive steroids. However, considering that most of the confidence intervals crossed the null, and that this review was limited in scope and applicability to clinical practice, it is not possible to conclude whether the use of adjunctive steroids is effective at this time. Any future trials should examine whether adjunctive steroids may be useful in certain clinical settings such as type of causative organism or etiology. These studies should include outcomes that take patients' symptoms and clinical examination into account; report outcomes in a uniform and consistent manner; and follow up at short- and long-term intervals.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies published up to 17 August 2021."
}
] |
query-laysum | 1043 | [
{
"role": "user",
"content": "Abstract: Background\nImproving mobility outcomes after hip fracture is key to recovery. Possible strategies include gait training, exercise and muscle stimulation. This is an update of a Cochrane Review last published in 2011.\nObjectives\nTo evaluate the effects (benefits and harms) of interventions aimed at improving mobility and physical functioning after hip fracture surgery in adults.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, the Cochrane Central Register of Controlled Trials, MEDLINE, Embase, CINAHL, trial registers and reference lists, to March 2021.\nSelection criteria\nAll randomised or quasi-randomised trials assessing mobility strategies after hip fracture surgery. Eligible strategies aimed to improve mobility and included care programmes, exercise (gait, balance and functional training, resistance/strength training, endurance, flexibility, three-dimensional (3D) exercise and general physical activity) or muscle stimulation. Intervention was compared with usual care (in-hospital) or with usual care, no intervention, sham exercise or social visit (post-hospital).\nData collection and analysis\nMembers of the review author team independently selected trials for inclusion, assessed risk of bias and extracted data. We used standard methodological procedures expected by Cochrane. We used the assessment time point closest to four months for in-hospital studies, and the time point closest to the end of the intervention for post-hospital studies. Critical outcomes were mobility, walking speed, functioning, health-related quality of life, mortality, adverse effects and return to living at pre-fracture residence.\nMain results\nWe included 40 randomised controlled trials (RCTs) with 4059 participants from 17 countries. On average, participants were 80 years old and 80% were women. The median number of study participants was 81 and all trials had unclear or high risk of bias for one or more domains. Most trials excluded people with cognitive impairment (70%), immobility and/or medical conditions affecting mobility (72%).\nIn-hospital setting, mobility strategy versus control\nEighteen trials (1433 participants) compared mobility strategies with control (usual care) in hospitals. Overall, such strategies may lead to a moderate, clinically-meaningful increase in mobility (standardised mean difference (SMD) 0.53, 95% confidence interval (CI) 0.10 to 0.96; 7 studies, 507 participants; low-certainty evidence) and a small, clinically meaningful improvement in walking speed (CI crosses zero so does not rule out a lack of effect (SMD 0.16, 95% CI -0.05 to 0.37; 6 studies, 360 participants; moderate-certainty evidence). Mobility strategies may make little or no difference to short-term (risk ratio (RR) 1.06, 95% CI 0.48 to 2.30; 6 studies, 489 participants; low-certainty evidence) or long-term mortality (RR 1.22, 95% CI 0.48 to 3.12; 2 studies, 133 participants; low-certainty evidence), adverse events measured by hospital re-admission (RR 0.70, 95% CI 0.44 to 1.11; 4 studies, 322 participants; low-certainty evidence), or return to pre-fracture residence (RR 1.07, 95% CI 0.73 to 1.56; 2 studies, 240 participants; low-certainty evidence). 
We are uncertain whether mobility strategies improve functioning or health-related quality of life as the certainty of evidence was very low.\nGait, balance and functional training probably causes a moderate improvement in mobility (SMD 0.57, 95% CI 0.07 to 1.06; 6 studies, 463 participants; moderate-certainty evidence). There was little or no difference in effects on mobility for resistance training. No studies of other types of exercise or electrical stimulation reported mobility outcomes.\nPost-hospital setting, mobility strategy versus control\nTwenty-two trials (2626 participants) compared mobility strategies with control (usual care, no intervention, sham exercise or social visit) in the post-hospital setting. Mobility strategies lead to a small, clinically meaningful increase in mobility (SMD 0.32, 95% CI 0.11 to 0.54; 7 studies, 761 participants; high-certainty evidence) and a small, clinically meaningful improvement in walking speed compared to control (SMD 0.16, 95% CI 0.04 to 0.29; 14 studies, 1067 participants; high-certainty evidence). Mobility strategies lead to a small, non-clinically meaningful increase in functioning (SMD 0.23, 95% CI 0.10 to 0.36; 9 studies, 936 participants; high-certainty evidence), and probably lead to a slight increase in quality of life that may not be clinically meaningful (SMD 0.14, 95% CI -0.00 to 0.29; 10 studies, 785 participants; moderate-certainty evidence). Mobility strategies probably make little or no difference to short-term mortality (RR 1.01, 95% CI 0.49 to 2.06; 8 studies, 737 participants; moderate-certainty evidence). Mobility strategies may make little or no difference to long-term mortality (RR 0.73, 95% CI 0.39 to 1.37; 4 studies, 588 participants; low-certainty evidence) or adverse events measured by hospital re-admission (95% CI includes a large reduction and large increase, RR 0.86, 95% CI 0.52 to 1.42; 2 studies, 206 participants; low-certainty evidence).\nTraining involving gait, balance and functional exercise leads to a small, clinically meaningful increase in mobility (SMD 0.20, 95% CI 0.05 to 0.36; 5 studies, 621 participants; high-certainty evidence), while training classified as being primarily resistance or strength exercise may lead to a clinically meaningful increase in mobility measured using distance walked in six minutes (mean difference (MD) 55.65, 95% CI 28.58 to 82.72; 3 studies, 198 participants; low-certainty evidence). Training involving multiple intervention components probably leads to a substantial, clinically meaningful increase in mobility (SMD 0.94, 95% CI 0.53 to 1.34; 2 studies, 104 participants; moderate-certainty evidence). We are uncertain of the effect of aerobic training on mobility (very low-certainty evidence). No studies of other types of exercise or electrical stimulation reported mobility outcomes.\nAuthors' conclusions\nInterventions targeting improvement in mobility after hip fracture may cause clinically meaningful improvement in mobility and walking speed in hospital and post-hospital settings, compared with conventional care. Interventions that include training of gait, balance and functional tasks are particularly effective. There was little or no between-group difference in the number of adverse events reported. 
Future trials should include long-term follow-up and economic outcomes, determine the relative impact of different types of exercise and establish effectiveness in emerging economies.\n\nGiven the provided abstract, please respond to the following query: \"What can people do to improve their mobility after a hip fracture?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "A key aim of care after surgery for hip fracture is to get people safely back on their feet, moving and walking again. Initially, people may be asked to rest in bed and restrict weight bearing. Then various strategies to improve mobility, including walking retraining, exercise programmes and electrical stimulation, are used during hospital stay and often after discharge from hospital."
}
] |
query-laysum | 1044 | [
{
"role": "user",
"content": "Abstract: Background\nDry eye disease (DED), arising from various etiologic factors, leads to tear film instability, ocular surface damage, and neurosensory changes. DED causes symptoms such as ocular dryness, burning, itching, pain, and visual impairment. Given their well-established anti-inflammatory effects, topical steroid preparations have been widely used as a short-term treatment option for DED. Because of potential risks of ocular hypertension, cataracts, and infections associated with the long-term use of topical steroids, published trials comparing the efficacy and safety of topical steroids (versus placebo) have mostly been of short duration (three to eight weeks).\nObjectives\nTo evaluate the effectiveness and safety of topical corticosteroids compared with no treatment, placebo, other steroidal or non-steroidal therapies, or a combination of therapies for DED.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, which contains the Cochrane Eyes and Vision Trials Register; 2021, Issue 8); Ovid MEDLINE; Ovid Embase; Latin American and Caribbean Health Sciences database (LILACS); ClinicalTrials.gov; and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), without restriction on language or year of publication. The date of the last search was 20 August 2021.\nSelection criteria\nWe included randomized controlled trials (RCTs) in which topical corticosteroids, alone or in combination with tobramycin, were compared with no treatment, artificial tears (AT), vehicles, AT plus tobramycin, or cyclosporine A (CsA).\nData collection and analysis\nWe applied standard Cochrane methodology.\nMain results\nWe identified 22 RCTs conducted in the USA, Italy, Spain, China, South Korea, and India. These RCTs reported outcome data from a total of 4169 participants with DED.\nStudy characteristics and risk of bias\nAll trials recruited adults aged 18 years or older, except one trial that enrolled children and adolescents aged between 3 and 14 years. Half of these trials involved predominantly female participants (median 79%, interquartile range [IQR] 76% to 80%). On average, each trial enrolled 86 participants (IQR 40 to 158). The treatment duration of topical steroids ranged between one week and three months; trial duration lasted between one week and six months. Eight trials were sponsored exclusively by industry, and four trials were co-sponsored by industry and institutional or governmental funds. We assessed the risk of bias of both subjective and objective outcomes using RoB 2, finding nearly half of the trials to be at high risk of bias associated with selective outcome reporting.\nFindings\nOf the 22 trials, 16 evaluated effects of topical steroids, alone or in combination with tobramycin, as compared with lubricants (AT, vehicle), AT plus tobramycin, or no treatment. Corticosteroids probably have a small to moderate effect on improving patient-reported symptoms by 0.29 standardized mean difference (SMD) (95% confidence interval [CI] 0.16 to 0.42) as compared with lubricants (moderate certainty evidence). Topical steroids also likely have a small to moderate effect on lowering corneal staining scores by 0.4 SMDs (95% CI 0.18 to 0.62) (moderate certainty evidence). 
However, steroids may increase tear film break-up time (TBUT) slightly (mean difference [MD] 0.70 s, 95% CI 0.06 to 1.34; low certainty evidence) but not tear osmolarity (MD 1.60 mOsm/kg, 95% CI −10.47 to 13.67; very low certainty evidence).\nSix trials examined topical steroids, either alone or in combination with CsA, against CsA alone. Low certainty evidence indicates that steroid-based interventions may have a small to moderate effect on improving participants' symptoms (SMD −0.33, 95% CI −0.51 to −0.15), but little to no effect on corneal staining scores (SMD 0.05, 95% CI −0.25 to 0.35) as compared with CsA. The effect of topical steroids compared to CsA alone on TBUT (MD 0.37 s, 95% CI −0.13 to 0.87) or tear osmolarity (MD 5.80 mOsm/kg, 95% CI −0.94 to 12.54; loteprednol etabonate alone) is uncertain because the certainty of the evidence is low or very low. None of the included trials reported on quality of life scores.\nAdverse effects\nThe evidence for adverse ocular effects of topical corticosteroids is very uncertain. Topical corticosteroids may increase participants' risk of intraocular pressure (IOP) elevation (risk ratio [RR] 5.96, 95% CI 1.30 to 27.38) as compared with lubricants. However, when compared with CsA, steroids alone or combined with CsA may decrease or increase IOP elevation (RR 1.45, 95% CI 0.25 to 8.33). It is also uncertain whether topical steroids may increase risk of cataract formation when compared with lubricants (RR 0.34, 95% CI 0.01 to 8.22), given the short-term use and study duration (four weeks or less) to observe longer-term adverse effects.\nAuthors' conclusions\nOverall, the evidence for the specified review outcomes was of moderate to very low certainty, mostly due to high risk of bias associated with selective results reporting. For dry eye patients whose symptoms require anti-inflammatory control, topical corticosteroids probably provide small to moderate degrees of symptom relief beyond lubricants, and may provide small to moderate degrees of symptom relief beyond CsA. However, the current evidence is less certain about the effects of steroids on improved tear film quality or quantity. The available evidence is also very uncertain regarding the adverse effects of topical corticosteroids on IOP elevation or cataract formation or progression. Future trials should generate high certainty evidence to inform physicians and patients of the optimal treatment strategies with topical corticosteroids in terms of regimen (types, formulations, dosages), duration, and its time-dependent adverse profile.\n\nGiven the provided abstract, please respond to the following query: \"What we found\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 22 clinical trials that enrolled a total of 4169 participants with dry eye. Most trials involved adults with a mean age between 50 and 67 years, except for one trial that exclusively involved children aged 3 to 14 years. Treatment duration ranged between 7 days and 3 months. When compared with lubricants, such as artificial tears, or with cyclosporine A, corticosteroids eye drops were probably effective in improving patient-reported symptoms and clinical tests, such as corneal staining. Clinicians may use corneal staining as a test for cornea damage. However, corticosteroids eye drops may result in little to no difference in tear quality or quantity. At the same time, it is uncertain whether steroid use may increase or decrease the chance of increased eye pressure, new cataract formation, or worsening of an existing cataract."
}
] |
query-laysum | 1045 | [
{
"role": "user",
"content": "Abstract: Background\nReduced vitamin A concentration increases the risk of blindness in children infected with the measles virus. Promoting vitamin A supplementation in children with measles contributes to the control of blindness in children, which is a high priority within the World Health Organization (WHO) VISION 2020 The Right to Sight Program.\nObjectives\nTo assess the efficacy of vitamin A in preventing blindness in children with measles without prior clinical features of vitamin A deficiency.\nSearch methods\nWe searched CENTRAL 2015, Issue 11, MEDLINE (1950 to December week 3, 2015), Embase (1974 to December 2015) and LILACS (1985 to December 2015).\nSelection criteria\nRandomised controlled trials (RCTs) assessing the efficacy of vitamin A in preventing blindness in well-nourished children diagnosed with measles but with no prior clinical features of vitamin A deficiency.\nData collection and analysis\nFor the original review, two review authors independently assessed studies for eligibility and extracted data on reported outcomes. We contacted trial authors of the included studies for additional information on unpublished data. We included two RCTs which were clinically heterogenous. We presented the continuous outcomes reported as the mean difference (MD) with 95% confidence interval (CI) and dichotomous outcomes as risk ratio (RR) with 95% CI. Due to marked clinical heterogeneity we considered it inappropriate to perform a meta-analysis.\nMain results\nFor the first publication of this review, two RCTs involving 260 children with measles which compared vitamin A with placebo met the inclusion criteria. Neither study reported blindness or other ocular morbidities as end points. One trial of moderate quality suggested evidence of a significant increase in serum retinol levels in the vitamin A group one week after two doses of vitamin A (MD 9.45 µg/dL, 95% CI 2.19 to 16.71; 17 participants, moderate-quality evidence), but not six weeks after three doses of vitamin A (MD 2.56 µg/dL, 95% CI -5.28 to 10.40; 39 participants, moderate-quality evidence). There was no significant difference in weight gain six weeks (MD 0.39 kg, -0.04 to 0.82; 48 participants, moderate-quality evidence) and six months (MD 0.52 kg, 95% CI -0.08 to 1.12; 36 participants, moderate-quality evidence) after three doses of vitamin A.\nThe second trial found no significant difference in serum retinol levels two weeks after a single dose of vitamin A (MD 2.67 µg/dL, 95% CI -0.29 to 5.63; 155 participants, moderate-quality evidence). Percentage of undernutrition between the two groups did not differ significantly at one week (RR 0.93, 95% CI 0.56 to 1.54, 145 participants) and two weeks (RR 0.82, 95% CI 0.52 to 1.29, 147 participants) after a single dose of vitamin A. No adverse event was reported in either study. We did not find any new RCTS for this second update.\nAuthors' conclusions\nWe did not find any trials assessing whether or not vitamin A supplementation in children with measles prevents blindness, as neither study reported blindness or other ocular morbidities as end points.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Annually 500,000 children become blind worldwide; 75% of them live in low-income countries. The major causes of blindness in children vary widely from region to region and are related to the standard of living of the community. Scarring of the eyes from measles, vitamin A deficiency, use of harmful traditional eye remedies and eye infection of the newborn, are the major causes of blindness in low-income countries. Vitamin A is an important nutrient in the body and is required for the normal functioning of the eye. Its deficiency results in poor vision.\nMeasles infection in children has been associated with vitamin A deficiency and blindness. The control of blindness in children is considered a high priority within the World Health Organization's VISION 2020 The Right to Sight Program. Studies have reported the beneficial effect of vitamin A in reducing disease burden and rate of death in children with measles. This review examined vitamin A use in preventing blindness in children infected with measles without features of vitamin A deficiency."
}
] |
query-laysum | 1046 | [
{
"role": "user",
"content": "Abstract: Background\nThe consequences of influenza in children and adults are mainly absenteeism from school and work. However, the risk of complications is greatest in children and people over 65 years of age. This is an update of a review published in 2011. Future updates of this review will be made only when new trials or vaccines become available. Observational data included in previous versions of the review have been retained for historical reasons but have not been updated because of their lack of influence on the review conclusions.\nObjectives\nTo assess the effects (efficacy, effectiveness, and harm) of vaccines against influenza in healthy children.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2016, Issue 12), which includes the Cochrane Acute Respiratory Infections Group Specialised Register, MEDLINE (1966 to 31 December 2016), Embase (1974 to 31 December 2016), WHO International Clinical Trials Registry Platform (ICTRP; 1 July 2017), and ClinicalTrials.gov (1 July 2017).\nSelection criteria\nRandomised controlled trials comparing influenza vaccines with placebo or no intervention in naturally occurring influenza in healthy children under 16 years. Previous versions of this review included 19 cohort and 11 case-control studies. We are no longer updating the searches for these study designs but have retained the observational studies for historical purposes.\nData collection and analysis\nReview authors independently assessed risk of bias and extracted data. We used GRADE to rate the certainty of evidence for the key outcomes of influenza, influenza-like illness (ILI), complications (hospitalisation, ear infection), and adverse events. Due to variation in control group risks for influenza and ILI, absolute effects are reported as the median control group risk, and numbers needed to vaccinate (NNVs) are reported accordingly. For other outcomes aggregate control group risks are used.\nMain results\nWe included 41 clinical trials (> 200,000 children). Most of the studies were conducted in children over the age of two and compared live attenuated or inactivated vaccines with placebo or no vaccine. Studies were conducted over single influenza seasons in the USA, Western Europe, Russia, and Bangladesh between 1984 and 2013. Restricting analyses to studies at low risk of bias showed that influenza and otitis media were the only outcomes where the impact of bias was negligible. Variability in study design and reporting impeded meta-analysis of harms outcomes.\nLive attenuated vaccines\nCompared with placebo or do nothing, live attenuated influenza vaccines probably reduce the risk of influenza infection in children aged 3 to 16 years from 18% to 4% (risk ratio (RR) 0.22, 95% confidence interval (CI) 0.11 to 0.41; 7718 children; moderate-certainty evidence), and they may reduce ILI by a smaller degree, from 17% to 12% (RR 0.69, 95% CI 0.60 to 0.80; 124,606 children; low-certainty evidence). Seven children would need to be vaccinated to prevent one case of influenza, and 20 children would need to be vaccinated to prevent one child experiencing an ILI. Acute otitis media is probably similar following vaccine or placebo during seasonal influenza, but this result comes from a single study with particularly high rates of acute otitis media (RR 0.98, 95% CI 0.95 to 1.01; moderate-certainty evidence). 
There was insufficient information available to determine the effect of vaccines on school absenteeism due to very low-certainty evidence from one study. Vaccinating children may lead to fewer parents taking time off work, although the CI includes no effect (RR 0.69, 95% CI 0.46 to 1.03; low-certainty evidence). Data on the most serious consequences of influenza complications leading to hospitalisation were not available. Data from four studies measuring fever following vaccination varied considerably, from 0.16% to 15% in children who had live vaccines, while in the placebo groups the proportions ranged from 0.71% to 22% (very low-certainty evidence). Data on nausea were not reported.\nInactivated vaccines\nCompared with placebo or no vaccination, inactivated vaccines reduce the risk of influenza in children aged 2 to 16 years from 30% to 11% (RR 0.36, 95% CI 0.28 to 0.48; 1628 children; high-certainty evidence), and they probably reduce ILI from 28% to 20% (RR 0.72, 95% CI 0.65 to 0.79; 19,044 children; moderate-certainty evidence). Five children would need to be vaccinated to prevent one case of influenza, and 12 children would need to be vaccinated to avoid one case of ILI. The risk of otitis media is probably similar between vaccinated children and unvaccinated children (31% versus 27%), although the CI does not exclude a meaningful increase in otitis media following vaccination (RR 1.15, 95% CI 0.95 to 1.40; 884 participants; moderate-certainty evidence). There was insufficient information available to determine the effect of vaccines on school absenteeism due to very low-certainty evidence from one study. We identified no data on parental working time lost, hospitalisation, fever, or nausea.\nWe found limited evidence on secondary cases, requirement for treatment of lower respiratory tract disease, and drug prescriptions. One brand of monovalent pandemic vaccine was associated with a sudden loss of muscle tone triggered by the experience of an intense emotion (cataplexy) and a sleep disorder (narcolepsy) in children. Evidence of serious harms (such as febrile fits) was sparse.\nAuthors' conclusions\nIn children aged between 3 and 16 years, live influenza vaccines probably reduce influenza (moderate-certainty evidence) and may reduce ILI (low-certainty evidence) over a single influenza season. In this population inactivated vaccines also reduce influenza (high-certainty evidence) and may reduce ILI (low-certainty evidence). For both vaccine types, the absolute reduction in influenza and ILI varied considerably across the study populations, making it difficult to predict how these findings translate to different settings. We found very few randomised controlled trials in children under two years of age. Adverse event data were not well described in the available studies. Standardised approaches to the definition, ascertainment, and reporting of adverse events are needed. Identification of all global cases of potential harms is beyond the scope of this review.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Live attenuated and inactivated vaccines can reduce the proportion of children who have influenza and ILI. Variation in the results of studies means that we are uncertain about the effects of these vaccines across different seasons."
}
] |
query-laysum | 1047 | [
{
"role": "user",
"content": "Abstract: Background\nDespite the existence of effective interventions and best-practice guideline recommendations for childcare services to implement evidence-based policies, practices and programmes to promote child healthy eating, physical activity and prevent unhealthy weight gain, many services fail to do so.\nObjectives\nThe primary aim of the review was to examine the effectiveness of strategies aimed at improving the implementation of policies, practices or programmes by childcare services that promote child healthy eating, physical activity and/or obesity prevention.\n\nThe secondary aims of the review were to:\n1. Examine the cost or cost-effectiveness of such strategies;\n2. Examine any adverse effects of such strategies on childcare services, service staff or children;\n3. Examine the effect of such strategies on child diet, physical activity or weight status.\n4. Describe the acceptability, adoption, penetration, sustainability and appropriateness of such implementation strategies.\nSearch methods\nWe searched the following electronic databases on February 22 2019: Cochrane Central Register of Controlled trials (CENTRAL), MEDLINE, MEDLINE In Process, Embase, PsycINFO, ERIC, CINAHL and SCOPUS for relevant studies. We searched reference lists of included studies, handsearched two international implementation science journals, the World Health Organization International Clinical Trials Registry Platform (www.who.int/ictrp/) and ClinicalTrials.gov (www.clinicaltrials.gov).\nSelection criteria\nWe included any study (randomised or nonrandomised) with a parallel control group that compared any strategy to improve the implementation of a healthy eating, physical activity or obesity prevention policy, practice or programme by staff of centre-based childcare services to no intervention, 'usual' practice or an alternative strategy. Centre-based childcare services included preschools, nurseries, long daycare services and kindergartens catering for children prior to compulsory schooling (typically up to the age of five to six years).\nData collection and analysis\nTwo review authors independently screened study titles and abstracts, extracted study data and assessed risk of bias; we resolved discrepancies via consensus. We performed meta-analysis using a random-effects model where studies with suitable data and homogeneity were identified; otherwise, findings were described narratively.\nMain results\nTwenty-one studies, including 16 randomised and five nonrandomised, were included in the review. The studies sought to improve the implementation of policies, practices or programmes targeting healthy eating (six studies), physical activity (three studies) or both healthy eating and physical activity (12 studies). Studies were conducted in the United States (n = 12), Australia (n = 8) and Ireland (n = 1). Collectively, the 21 studies included a total of 1945 childcare services examining a range of implementation strategies including educational materials, educational meetings, audit and feedback, opinion leaders, small incentives or grants, educational outreach visits or academic detailing, reminders and tailored interventions. Most studies (n = 19) examined implementation strategies versus usual practice or minimal support control, and two compared alternative implementation strategies. 
For implementation outcomes, six studies (one RCT) were judged to be at high risk of bias overall.\nThe review findings suggest that implementation strategies probably improve the implementation of policies, practices or programmes that promote child healthy eating, physical activity and/or obesity prevention in childcare services. Of the 19 studies that compared a strategy to usual practice or minimal support control, 11 studies (nine RCTs) used score-based measures of implementation (e.g. childcare service nutrition environment score). Nine of these studies were included in pooled analysis, which found an improvement in implementation outcomes (SMD 0.49; 95% CI 0.19 to 0.79; participants = 495; moderate-certainty evidence). Ten studies (seven RCTs) used dichotomous measures of implementation (e.g. proportion of childcare services implementing a policy or specific practice), with seven of these included in pooled analysis (OR 1.83; 95% CI 0.81 to 4.11; participants = 391; low-certainty evidence).\nFindings suggest that such interventions probably lead to little or no difference in child physical activity (four RCTs; moderate-certainty evidence) or weight status (three RCTs; moderate-certainty evidence), and may lead to little or no difference in child diet (two RCTs; low-certainty evidence). None of the studies reported the cost or cost-effectiveness of the intervention. Three studies assessed the adverse effects of the intervention on childcare service staff, children and parents, with all studies suggesting they have little to no difference in adverse effects (e.g. child injury) between groups (three RCTs; low-certainty evidence). Inconsistent quality of the evidence was identified across review outcomes and study designs, ranging from very low to moderate.\nThe primary limitation of the review was the lack of conventional terminology in implementation science, which may have resulted in potentially relevant studies failing to be identified based on the search terms used.\nAuthors' conclusions\nCurrent research suggests that implementation strategies probably improve the implementation of policies, practices or programmes by childcare services, and may have little or no effect on measures of adverse effects. However such strategies appear to have little to no impact on measures of child diet, physical activity or weight status.\n\nGiven the provided abstract, please respond to the following query: \"The review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review aimed to look at the effects of strategies to improve the implementation (or correct undertaking) of policies, practices or programmes by childcare services that promote children's healthy eating, physical activity and/or obesity prevention. We wanted to determine the cost or cost-effectiveness of providing implementation support, whether support strategies were associated with any adverse effects, and whether there was an impact on child nutrition, physical activity or weight status. We also looked at the implementation strategy acceptability, adoption, penetration, sustainability and appropriateness.BackgroundA number of childcare service-based interventions have been found to be effective in improving child diet, increasing child physical activity and preventing excessive weight gain. Despite the existence of such evidence and best-practice guideline recommendations for childcare services toimplementthese policies, practices or programmes, many childcare services fail to do so. Without proper implementation, children will not benefit from these child health-directed policies, practices or programmes.Study characteristicsWe identified 21 studies, 19 of which examined implementation strategies versus usual practice or minimal support control, and two that compared different types of implementation strategies. The studies sought to improve the implementation of policies, practices or programmes targeting healthy eating (six studies), physical activity (three studies) or both healthy eating and physical activity (twelve studies). Collectively, the 21 included studies included a total of 1945 childcare services and examined a range of implementation strategies including educational materials, educational meetings, audit and feedback, opinion leaders, small incentives or grants, educational outreach visits or academic detailing, reminders and tailored interventions. The strategies tested were only a small number of those that could be applied to improve implementation in this setting.Search dateThe evidence is current to February 2019.Key results\nFindings suggest that implementation support strategies can improve the implementation of physical activity policies, programmes or practices by childcare services or their staff (moderate-certainty evidence), and do not appear to increase the risk of child injury (low-certainty evidence). However, such approaches do not appear to have an impact on the diet, physical activity or weight status of children (low to moderate-certainty evidence). None of the included studies reported information regarding implementation strategy costs or measures of cost-effectiveness. The lack of consistent terminology in this area of research may have meant some relevant studies were not picked up in our search."
}
] |
query-laysum | 1048 | [
{
"role": "user",
"content": "Abstract: Background\nThymic carcinoma or advanced thymoma is a rare cancer of the thymus gland that tends to be aggressive and infiltrate neighbouring organs, making total resection very difficult. Induction or adjuvant chemotherapy, or both, are often used in a multimodality approach to treat people affected by this condition, but the effectiveness of chemotherapy for thymic carcinoma or advanced thymoma remains uncertain.\nObjectives\nTo assess the role of chemotherapy in adults with thymic carcinoma or advanced thymoma.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL 2012, Issue 7), MEDLINE (accessed via Ovid from 1966 to July 2012), EMBASE (accessed via Ovid, from 1980 to July 2012), Latin American and Caribbean Literature on Health Sciences (LILACS), the Chinese Biological Medicine Database (CBM, 1978 to July 2012), China National Knowledge Infrastructure (CNKI, 1980 to July 2012) and the Chinese scientific periodical database VIP Information (VIP, 1989 to July 2012). There was no language restriction in searching for studies.\nSelection criteria\nWe planned to include randomised controlled trials (RCTs) of trials using chemotherapy (either single-agent or combination chemotherapy plus surgery, radiotherapy or not) for thymic carcinoma and/or advanced thymoma. We planned to include all adults (aged 18 years and over) diagnosed with thymic carcinoma and/or with Masaoka stage III or IV thymic tumours. The intended primary outcomes were overall survival (OS) and progression-free survival (PFS).\nData collection and analysis\nTwo review authors independently evaluated the search results according to the inclusion and exclusion criteria. There were no studies identified for inclusion and therefore no data extraction was completed.\nMain results\nNo RCTs were eligible for inclusion in this review. We report details of excluded prospective studies in an additional table and try to provide some useful evidence regarding current practice.\nAuthors' conclusions\nThere were no RCTs eligible for inclusion in this review. In current practice the most common regimen for adult patients with thymic carcinoma or advanced thymoma is cisplatin-based chemotherapy. Considering the condition is rare, it is suggested that an international group is set up to organise and evaluate prospective collection of data from cohorts of patients to inform current clinical practice.\n\nGiven the provided abstract, please respond to the following query: \"Main findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We did not identify any suitable clinical trials for inclusion in this review from our search (up to July 2012). We have reported details of prospective studies that are not suitable for inclusion in the review, which provide some useful evidence about current clinical practice."
}
] |
query-laysum | 1049 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the original review published in the Cochrane Database of Systematic Reviews Issue 10, 2015.\nInvasive aspergillosis (IA) is the most common life-threatening opportunistic invasive mould infection in immunocompromised people. Early diagnosis of IA and prompt administration of appropriate antifungal treatment are critical to the survival of people with IA. Antifungal drugs can be given as prophylaxis or empirical therapy, instigated on the basis of a diagnostic strategy (the pre-emptive approach) or for treating established disease. Consequently, there is an urgent need for research into both new diagnostic tools and drug treatment strategies. Increasingly, newer methods such as polymerase chain reaction (PCR) to detect fungal nucleic acids are being investigated.\nObjectives\nTo provide an overall summary of the diagnostic accuracy of PCR-based tests on blood specimens for the diagnosis of IA in immunocompromised people.\nSearch methods\nWe searched MEDLINE (1946 to June 2015) and Embase (1980 to June 2015). We also searched LILACS, DARE, Health Technology Assessment, Web of Science and Scopus to June 2015. We checked the reference lists of all the studies identified by the above methods and contacted relevant authors and researchers in the field. For this review update we updated electronic searches of the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 3) in the Cochrane Library; MEDLINE via Ovid (June 2015 to March week 2 2018); and Embase via Ovid (June 2015 to 2018 week 12).\nSelection criteria\nWe included studies that: i) compared the results of blood PCR tests with the reference standard published by the European Organisation for Research and Treatment of Cancer/Mycoses Study Group (EORTC/MSG); ii) reported data on false-positive, true-positive, false-negative and true-negative results of the diagnostic tests under investigation separately; and iii) evaluated the test(s) prospectively in cohorts of people from a relevant clinical population, defined as a group of individuals at high risk for invasive aspergillosis. Case-control and retrospective studies were excluded from the analysis.\nData collection and analysis\nAuthors independently assessed quality and extracted data. For PCR assays, we evaluated the requirement for either one or two consecutive samples to be positive for diagnostic accuracy. We investigated heterogeneity by subgroup analyses. We plotted estimates of sensitivity and specificity from each study in receiver operating characteristics (ROC) space and constructed forest plots for visual examination of variation in test accuracy. We performed meta-analyses using the bivariate model to produce summary estimates of sensitivity and specificity.\nMain results\nWe included 29 primary studies (18 from the original review and 11 from this update), corresponding to 34 data sets, published between 2000 and 2018 in the meta-analyses, with a mean prevalence of proven or probable IA of 16.3 (median prevalence 11.1% , range 2.5% to 57.1%). Most patients had received chemotherapy for haematological malignancy or had undergone hematopoietic stem cell transplantation. Several PCR techniques were used among the included studies. The sensitivity and specificity of PCR for the diagnosis of IA varied according to the interpretative criteria used to define a test as positive. 
The summary estimates of sensitivity and specificity were 79.2% (95% confidence interval (CI) 71.0 to 85.5) and 79.6% (95% CI 69.9 to 86.6) for a single positive test result, and 59.6% (95% CI 40.7 to 76.0) and 95.1% (95% CI 87.0 to 98.2) for two consecutive positive test results.\nAuthors' conclusions\nPCR shows moderate diagnostic accuracy when used as screening tests for IA in high-risk patient groups. Importantly the sensitivity of the test confers a high negative predictive value (NPV) such that a negative test allows the diagnosis to be excluded. Consecutive positives show good specificity in diagnosis of IA and could be used to trigger radiological and other investigations or for pre-emptive therapy in the absence of specific radiological signs when the clinical suspicion of infection is high. When a single PCR positive test is used as the diagnostic criterion for IA in a population of 100 people with a disease prevalence of 16.3% (overall mean prevalence), three people with IA would be missed (sensitivity 79.2%, 20.8% false negatives), and 17 people would be unnecessarily treated or referred for further tests (specificity of 79.6%, 20.4% false positives). If we use the two positive test requirement in a population with the same disease prevalence, it would mean that nine IA people would be missed (sensitivity 59.6%, 40.4% false negatives) and four people would be unnecessarily treated or referred for further tests (specificity of 95.1%, 4.9% false positives). Like galactomannan, PCR has good NPV for excluding disease, but the low prevalence of disease limits the ability to rule in a diagnosis. As these biomarkers detect different markers of disease, combining them is likely to prove more useful.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "IA is a fungal disease caused by the widespread mouldAspergillus,withAspergillus fumigatusbeing the most common species.Most people breathe inAspergillusspores every day without becoming ill. However people with weakened immune systems or lung diseases are at a higher risk of developing respiratory problems of the lungs and sinuses due toAspergillus,ranging from allergic complications to IA, which is the most common life-threatening, invasive fungal infection of people whose immune systems are compromised. Without antifungal treatment, most people with IA will die as a direct result of IA, so early diagnosis and prompt administration of appropriate antifungal treatment are both critical to the survival of these people. The ideal specimen for diagnosing IA would be lung tissue but obtaining this carries a significant risk to the patient so there is a clear need for new, non-invasive methods such as PCR to demonstrate the fungus’s presence in blood by detecting its nucleic acids."
}
] |
query-laysum | 1050 | [
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease is one of the most common genetic disorders. Sickle cell crises in which irregular and dehydrated cells contribute to blocking of blood vessels are characterised by episodes of pain. Treatment is mainly supportive and symptomatic. In vitro studies with piracetam indicate that it has the potential for inhibition and a reversal of the process of sickling of erythrocytes. This is an update of a previously published Cochrane review.\nObjectives\nTo assess the effectiveness of piracetam for reducing the incidence of painful sickle cell disease crises.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register which comprises of references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings.\nLast search of the Group's Haemoglobinopathies Trials Register: 21 September 2015.\nSelection criteria\nRandomised controlled trials comparing orally administered piracetam to placebo or standard care in people, of all ages and both sexes, with sickle cell disease.\nData collection and analysis\nTwo authors independently assessed trial quality and extracted data. Trial authors were contacted for additional information. Adverse effects data were collected from the trials.\nMain results\nThree trials involving 169 participants were included in the review. A limited amount of data addressing some of the primary and some of the secondary outcomes were provided, but data were incomplete and based on un-validated assumptions used in the evaluation of outcomes. One trial reported a reduction in the number of pain crises and their severity with active intervention than placebo but presented no data to confirm these results. A second trial presented a monthly global pain score based on the number of sickle cell crises and severity of pain but included no separate data for these primary outcomes. Although there was no significant difference between the piracetam and placebo periods for the number of days of hospitalisation (P = 0.87) in one trial, inconsistencies in the criteria necessary for hospitalisation during sickle crises did not permit accurate conclusions to be drawn. Two of the trials reported participant satisfaction with piracetam but provided no details as to how this satisfaction had been assessed. There were no reports of toxicity or adverse effects with piracetam other than one participant who experienced dizziness.\nAuthors' conclusions\nThe small number of included trials and their poor methodological quality provided insufficient reliable evidence to support the routine use of this medication for preventing the incidence of painful sickle cell disease crises.\nWe will continue to run searches to identify any potentially relevant trials; however, we do not plan to update other sections of the review until new trials are published.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to: 21 September 2015."
}
] |
query-laysum | 1051 | [
{
"role": "user",
"content": "Abstract: Background\nAge-related cataract accounts for more than 40% of cases of blindness in the world with the majority of people who are blind from cataract living in lower income countries. With the increased number of people with cataract, it is important to review the evidence on the effectiveness of day care cataract surgery.\nObjectives\nTo provide authoritative, reliable evidence regarding the safety, feasibility, effectiveness and cost-effectiveness of day case cataract extraction by comparing clinical outcomes, cost-effectiveness, patient satisfaction or a combination of these in cataract operations performed in day care versus in-patient units.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2015, Issue 7), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to August 2015), EMBASE (January 1980 to August 2015), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to August 2015), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 17 August 2015.\nSelection criteria\nWe included randomised controlled trials comparing day care and in-patient surgery for age-related cataract. The primary outcome was the achievement of a satisfactory visual acuity six weeks after the operation.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. We contacted study authors for additional information. We collected adverse effects information from the trials.\nMain results\nWe included two trials. One study was conducted in the USA in 1981 (250 people randomised and completed trial) and one study conducted in Spain in 2001 (1034 randomised, 935 completed trial). Both trials used extracapsular cataract extraction techniques that are not commonly used in higher income countries now. Most of the data in this review came from the larger trial, which we judged to be at low risk of bias.\nThe mean change in visual acuity (in Snellen lines) of the operated eye four months postoperatively was similar in people given day care surgery (mean 4.1 lines standard deviation (SD) 2.3, 464 participants) compared to people treated as in-patients (mean 4.1 lines, SD 2.2, 471 participants) (P value = 0.74). No data were available from either study on intra-operative complications.\nWound leakage, intraocular pressure (IOP) and corneal oedema were reported in the first day postoperatively and at four months after surgery. There was an increased risk of high IOP in the day care group in the first day after surgery (risk ratio (RR) 3.33, 95% confidence intervals (CI) 1.21 to 9.16, 935 participants) but not at four months (RR 0.61, 95% CI 0.14 to 2.55, 935 participants). The findings for the other outcomes were inconclusive with wide CIs. There were two cases of endophthalmitis observed at four months in the day care group and none in the in-patient group. 
The smaller study stated that there were no infections or severe hyphaemas.\nIn a subset of participants evaluated for quality of life (VF14 questionnaire) similar change in quality of life before and four months after surgery was observed (mean change in VF14 score: day care group 25.2, SD 21.2, 150 participants; in-patient group: 23.5, SD 25.7, 155 participants; P value = 0.30). Subjective assessment of patient satisfaction in the smaller study suggested that participants preferred to recuperate at home, were more comfortable in their familiar surroundings and enjoyed the family support that they received at home. Costs were 20% more for the in-patient group and this was attributed to higher costs for overnight stay.\nAuthors' conclusions\nThis review provides evidence that there is cost saving with day care cataract surgery compared to in-patient cataract surgery. Although effects on visual acuity and quality of life appeared similar, the evidence with respect to postoperative complications was inconclusive because the effect estimates were imprecise. Given the wide-spread adoption of day care cataract surgery, future research in cataract clinical pathways should focus on evidence provided by high quality clinical databases (registers), which would enable clinicians and healthcare planners to agree clinical and social indications for in-patient care and so make better use of resources.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The lens in the eyes can become cloudy with age (called cataracts), leading to blurry vision or total vision loss. Cataracts can be surgically removed by breaking up the lens and removing the pieces with a needle (a process called phacoemulsification), followed by the placing of an artificial lens to restore vision. This method of surgery is quick and together with a shorter recovery period has made the possibility of day surgery a reality. We wanted to find out whether operation in a day care unit was as effective and safe as staying overnight after an operation to replace the lens and regain better vision. Furthermore, we were interested to know what the side effects, risks and complications were of the two different approaches. We took into account improvement in quality of life and differences in costs."
}
] |
query-laysum | 1052 | [
{
"role": "user",
"content": "Abstract: Background\nInvasive urodynamic tests are used to investigate men with lower urinary tract symptoms (LUTS) and voiding dysfunction to determine a definitive objective diagnosis. The aim is to help clinicians select the treatment that is most likely to be successful. These investigations are invasive and time-consuming.\nObjectives\nTo determine whether performing invasive urodynamic investigation, as opposed to other methods of diagnosis such as non-invasive urodynamics or clinical history and examination alone, reduces the number of men with continuing symptoms of voiding dysfunction. This goal will be achieved by critically appraising and summarising current evidence from randomised controlled trials related to clinical outcomes and cost-effectiveness. This review is not intended to consider whether urodynamic tests are reliable for making clinical diagnoses, nor whether one type of urodynamic test is better than another for this purpose.\nThe following comparisons were made.\n• Urodynamics versus clinical management.\n• One type of urodynamics versus another.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2014, issue 10), MEDLINE (1 January 1946 to Week 4 October 2014), MEDLINE In-Process and other non-indexed citations (covering 27 November 2014; all searched on 28 November 2014), EMBASE Classic and EMBASE (1 January 2010 to Week 47 2014, searched on 28 November 2014), ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) (searched on 1 December 2014 and 3 December 2014, respectively), as well as the reference lists of relevant articles.\nSelection criteria\nRandomised and quasi-randomised trials comparing clinical outcomes in men who were and were not investigated with the use of invasive urodynamics, or comparing one type of urodynamics against another, were included. Trials were excluded if they did not report clinical outcomes.\nData collection and analysis\nThree review authors independently assessed trial quality and extracted data.\nMain results\nWe included two trials, but data were available for only 339 men in one trial, of whom 188 underwent invasive urodynamic studies. We found evidence of risk of bias, such as lack of outcome information for 24 men in one arm of the trial.\nStatistically significant evidence suggests that the tests did change clinical decision making. Men in the invasive urodynamics arm were more likely to have their management changed than men in the control arm (proportion with change in management 24/188 (13%) vs 0/151 (0%), risk ratio (RR) 39.41, 95% confidence interval (CI) 2.42 to 642.74). However, the quality of the evidence was low.\nLow-quality evidence indicates that men in the invasive urodynamics group were less likely to undergo surgery as treatment for voiding LUTS (164/188 (87%) vs 151/151 (100%), RR 0.87, 95% CI 0.83 to 0.92).\nInvestigators observed no difference in urine flow rates before and after surgery for LUTS (mean percentage increase in urine flow rate, 140% in invasive urodynamic group vs 149% in immediate surgery group, P value = 0.13). 
Similarly, they found no differences between groups with regards to International Prostate Symptom Score (IPSS) (mean percentage decrease in IPSS score, 58% in invasive urodynamics group vs 59% in immediate surgery group, P value = 0.22).\nNo evidence was available to demonstrate whether differences in management equated to improved health outcomes, such as relief of symptoms of voiding dysfunction or improved quality of life.\nNo evidence from randomised trials revealed the adverse effects associated with invasive urodynamic studies.\nAuthors' conclusions\nAlthough invasive urodynamic testing did change clinical decision making, we found no evidence to demonstrate whether this led to reduced symptoms of voiding dysfunction after treatment. Larger definitive trials of better quality are needed, in which men are randomly allocated to management based on invasive urodynamic findings or to management based on findings obtained by other diagnostic means. This research will show whether performance of invasive urodynamics results in reduced symptoms of voiding dysfunction after treatment.\n\nGiven the provided abstract, please respond to the following query: \"Main findings of this review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found two trials, which included around 350 men, although information was available for only 339 men in one trial. Evidence was not sufficient to show whether invasive urodynamic tests led to better patient outcomes. Some evidence suggests that these tests did alter management decisions, resulting in fewer men undergoing surgery. No evidence indicates whether this change in management led to fewer symptoms in men after treatment, and it is not known whether patients reported a better quality of life."
}
] |
query-laysum | 1053 | [
{
"role": "user",
"content": "Abstract: Background\nHelminth infections, such as soil-transmitted helminths, schistosomiasis, onchocerciasis, and lymphatic filariasis, are prevalent in many countries where human immunodeficiency virus (HIV) infection is also common. There is some evidence from observational studies that HIV and helminth co-infection may be associated with higher viral load and lower CD4+ cell counts. Treatment of helminth infections with antihelminthics (deworming drugs) may have benefits for people living with HIV beyond simply clearance of worm infections.\nThis is an update of a Cochrane Review published in 2009 and we have expanded it to include outcomes of anaemia and adverse events.\nObjectives\nTo evaluate the effects of deworming drugs (antihelminthic therapy) on markers of HIV disease progression, anaemia, and adverse events in children and adults.\nSearch methods\nIn this review update, we searched online for published and unpublished studies in the Cochrane Library, MEDLINE, EMBASE, CENTRAL, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICRTP), ClinicalTrials.gov, and the WHO Global Health Library up to 29 September 2015. We also searched databases listing conference abstracts, scanned reference lists of articles, and contacted the authors of included studies.\nSelection criteria\nWe searched for randomized controlled trials (RCTs) that compared antihelminthic drugs with placebo or no intervention in HIV-positive people.\nData collection and analysis\nTwo review authors independently extracted data and assessed trials for eligibility and risk of bias. The primary outcomes were changes in HIV viral load and CD4+ cell count, and secondary outcomes were anaemia, iron deficiency, adverse events, and mortality events. We compared the effects of deworming using mean differences, risk ratios (RR), and 95% confidence intervals (CIs). We assessed the quality of evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nEight trials met the inclusion criteria of this review, enrolling a total of 1612 participants. Three trials evaluated the effect of providing antihelminthics to all adults with HIV without knowledge of their helminth infection status, and five trials evaluated the effects of providing deworming drugs to HIV-positive individuals with confirmed helminth infections. 
Seven trials were conducted in sub-Saharan Africa and one in Thailand.\nAntihelminthics for people with unknown helminth infection status\nProviding antihelminthics (albendazole and praziquantel together or separately) to HIV-positive adults with unknown helminth infection status may have a small suppressive effect on mean viral load at six weeks but the 95% CI includes the possibility of no effect (difference in mean change −0.14 log10 viral RNA/mL, 95% CI −0.35 to 0.07, P = 0.19; one trial, 166 participants, low quality evidence).\nRepeated dosing with deworming drugs over two years (albendazole every three months plus annual praziquantel) probably has little or no effect on mean viral load (difference in mean change 0.01 log10 viral RNA, 95% CI −0.03 to 0.05; one trial, 917 participants, moderate quality evidence), and little or no effect on mean CD4+ count (difference in mean change 2.60 CD4+ cells/µL, 95% CI −10.15 to 15.35; P = 0.7; one trial, 917 participants, low quality evidence).\nAntihelminthics for people with confirmed helminth infections\nTreating confirmed helminth infections in HIV-positive adults may have a small suppressive effect on mean viral load at six to 12 weeks following deworming (difference in mean change −0.13 log10 viral RNA, 95% CI −0.26 to −0.00; P = 0.04; four trials, 445 participants, low quality evidence). However, this finding is strongly influenced by a single study of praziquantel treatment for schistosomiasis. There may also be a small favourable effect on mean CD4+ cell count at 12 weeks after deworming in HIV-positive populations with confirmed helminth infections (difference in mean change 37.86 CD4+ cells/µL, 95% CI 7.36 to 68.35; P = 0.01; three trials, 358 participants, low quality evidence).\nAdverse events and mortality\nThere is no indication that antihelminthic drugs impart additional risks in HIV-positive populations. However, adverse events were not well reported (very low quality evidence) and trials were underpowered to evaluate effects on mortality (low quality evidence).\nAuthors' conclusions\nThere is low quality evidence that treating confirmed helminth infections in HIV-positive adults may have small, short-term favourable effects on markers of HIV disease progression. Further studies are required to confirm this finding. Current evidence suggests that deworming with antihelminthics is not harmful, and this is reassuring for the routine treatment of confirmed or suspected helminth infections in people living with HIV in co-endemic areas.\nFurther long-term studies are required to make confident conclusions regarding the impact of presumptively deworming all HIV-positive individuals irrespective of helminth infection status, as the only long-term trial to date did not demonstrate an effect.\n\nGiven the provided abstract, please respond to the following query: \"What are deworming drugs and why might they delay HIV disease progression\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Deworming drugs are used to treat a variety of human helminth infections, such as soil-transmitted helminths, schistosomiasis, onchocerciasis, and lymphatic filariasis. In areas where these infections are common, the World Health Organization currently recommends that targeted populations are routinely treated every six to 12 months without prior confirmation of an individual's infection status. The use of empiric therapy, or treating all at-risk populations presumptively, is preferred to test-and-treat strategies because deworming drugs are inexpensive and well tolerated. Additionally, a strategy of testing before treatment is considered less cost-effective given that available diagnostic tests are relatively expensive and can exhibit poor sensitivity.\nHelminth infections are known to affect the human immune system. In people with HIV, some studies have suggested that helminth infections may reduce the number of CD4+ cells (which are a critical part of the immune response to HIV) and compromise a person's ability to control HIV viral replication. Thus, treatment of helminth infections could have important benefits for people living with HIV beyond the benefits observed in the general population as a result of deworming."
}
] |
query-laysum | 1054 | [
{
"role": "user",
"content": "Abstract: Background\nUterine fibroids occur in up to 40% of women aged over 35 years. Some are asymptomatic, but up to 50% cause symptoms that warrant therapy. Symptoms include anaemia caused by heavy menstrual bleeding, pelvic pain, dysmenorrhoea, infertility and low quality of life. Surgery is the first choice of treatment. In recent years, medical therapies have been used before surgery to improve intraoperative and postoperative outcomes. However, such therapies tend to be expensive.\nFibroid growth is stimulated by oestrogen. Gonadotropin-hormone releasing analogues (GnRHa) induce a state of hypo-oestrogenism that shrinks fibroids , but has unacceptable side effects if used long-term. Other potential hormonal treatments, include progestins and selective progesterone-receptor modulators (SPRMs).\nThis is an update of a Cochrane Review published in 2000 and 2001; the scope has been broadened to include all preoperative medical treatments.\nObjectives\nTo assess the effectiveness and safety of medical treatments prior to surgery for uterine fibroids.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group specialised register, CENTRAL, MEDLINE, Embase, PsycINFO and CINAHL in June 2017. We also searched trials registers (ClinicalTrials.com; WHO ICTRP), theses and dissertations and the grey literature, handsearched reference lists of retrieved articles and contacted pharmaceutical companies for additional trials.\nSelection criteria\nWe included randomised comparisons of medical therapy versus placebo, no treatment, or other medical therapy before surgery, myomectomy, hysterectomy or endometrial resection, for uterine fibroids.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe included a total of 38 RCTs (3623 women); 19 studies compared GnRHa to no pretreatment (n = 19), placebo (n = 8), other medical pretreatments (progestin, SPRMs, selective oestrogen receptor modulators (SERMs), dopamine agonists, oestrogen receptor antagonists) (n = 7), and four compared SPRMs with placebo. Most results provided low-quality evidence due to limitations in study design (poor reporting of randomisation procedures, lack of blinding), imprecision and inconsistency.\nGnRHa versus no treatment or placebo\nGnRHa treatments were associated with reductions in both uterine (MD -175 mL, 95% CI -219.0 to -131.7; 13 studies; 858 participants; I² = 67%; low-quality evidence) and fibroid volume (heterogeneous studies, MD 5.7 mL to 155.4 mL), and increased preoperative haemoglobin (MD 0.88 g/dL, 95% CI 0.7 to 1.1; 10 studies; 834 participants; I² = 0%; moderate-quality evidence), at the expense of a greater likelihood of adverse events, particularly hot flushes (OR 7.68, 95% CI 4.6 to 13.0; 6 studies; 877 participants; I² = 46%; moderate-quality evidence).\nDuration of hysterectomy surgery was reduced among women who received GnRHa treatment (-9.59 minutes, 95% CI 15.9 to -3.28; 6 studies; 617 participants; I² = 57%; low-quality evidence) and there was less blood loss (heterogeneous studies, MD 25 mL to 148 mL), fewer blood transfusions (OR 0.54, 95% CI 0.3 to 1.0; 6 studies; 601 participants; I² = 0%; moderate-quality evidence), and fewer postoperative complications (OR 0.54, 95% CI 0.3 to 0.9; 7 studies; 772 participants; I² = 28%; low-quality evidence).\nGnRHa appeared to reduce intraoperative blood loss during myomectomy (MD 22 mL to 157 mL). 
There was no clear evidence of a difference among groups for other primary outcomes after myomectomy: duration of surgery (studies too heterogeneous for pooling), blood transfusions (OR 0.85, 95% CI 0.3 to 2.8; 4 studies; 121 participants; I² = 0%; low-quality evidence) or postoperative complications (OR 1.07, 95% CI 0.43 to 2.64; I² = 0%; 5 studies; 190 participants; low-quality evidence). No suitable data were available for analysis of preoperative bleeding.\nGnRHa versus other medical therapies\nGnRHa was associated with a greater reduction in uterine volume (-47% with GnRHa compared to -20% and -22% with 5 mg and 10 mg ulipristal acetate) but was more likely to cause hot flushes (OR 12.3, 95% CI 4.04 to 37.48; 5 studies; 183 participants; I² = 61%; low-quality evidence) compared with ulipristal acetate. There was no clear evidence of a difference in bleeding reduction (ulipristal acetate 5 mg: OR 0.71, 95% CI 0.3 to 1.7; 1 study; 199 participants; moderate-quality evidence; ulipristal acetate 10 mg: OR 0.39, 95% CI 0.1 to 1.1; 1 study; 203 participants; moderate-quality evidence) or haemoglobin levels (MD -0.2, 95% CI -0.6 to 0.2; 188 participants; moderate-quality evidence).\nThere was no clear evidence of a difference in fibroid volume between GnRHa and cabergoline (MD 12.71 mL, 95% CI -5.9 to 31.3; 2 studies; 110 participants; I² = 0%; low-quality evidence).\nThe included studies did not report usable data for any other primary outcomes.\nSPRMs versus placebo\nSPRMs (mifepristone, CDB-2914, ulipristal acetate and asoprisnil) were associated with greater reductions in uterine or fibroid volume than placebo (studies too heterogeneous to pool) and increased preoperative haemoglobin levels (MD 0.93 g/dL, 0.5 to 1.4; 2 studies; 173 participants; I² = 0%; high-quality evidence). Ulipristal acetate and asoprisnil were also associated with greater reductions in bleeding before surgery (ulipristal acetate 5 mg: OR 41.41, 95% CI 15.3 to 112.4; 1 study; 143 participants; low-quality evidence; ulipristal acetate 10 mg: OR 78.83, 95% CI 24.0 to 258.7; 1 study; 146 participants; low-quality evidence; asoprisnil: MD -166.9 mL; 95% CI -277.6 to -56.2; 1 study; 22 participants; low-quality evidence). There was no evidence of differences in preoperative complications. No other primary outcomes were measured.\nAuthors' conclusions\nA rationale for the use of preoperative medical therapy before surgery for fibroids is to make surgery easier. There is clear evidence that preoperative GnRHa reduces uterine and fibroid volume, and increases preoperative haemoglobin levels, although GnRHa increases the incidence of hot flushes. During hysterectomy, blood loss, operation time and complication rates were also reduced. Evidence suggests that ulipristal acetate may offer similar advantages (reduced fibroid volume and fibroid-related bleeding and increased haemoglobin levels) although replication of these studies is advised before firm conclusions can be made. Future research should focus on cost-effectiveness and distinguish between groups of women with fibroids who would most benefit.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "GnRHa increased haemoglobin levels before surgery and decreased uterine and fibroid size, compared with no treatment or placebo. Blood loss, need for blood transfusion, operation time during hysterectomy and postoperative complications were reduced. However, women were more likely to experience hot flushes during treatment. An SPRM drug (ulipristal acetate) had similar benefits, particularly reduced bleeding. Future research should focus on cost-effectiveness and distinguish between groups of women with fibroids who would most benefit."
}
] |
query-laysum | 1055 | [
{
"role": "user",
"content": "Abstract: Background\nCrohn's disease (CD) is an inflammatory bowel disease that causes inflammation and stricture, of any part of the mucosa and the gut wall. It forms skip lesions, sparing the areas in between the affected parts of the gastrointestinal tract. Crohn's disease could have one of three complications; fistula, intestinal obstruction due to stricture, or gastrointestinal inflammation presenting as severe diarrhoea.\nStem cell therapy (SCT) is an innovative treatment that has been recently used in CD. The exact role of SCT in CD is still unclear. Stem cells modify the immunity of the patients or act as a “reset tool” for the immune system as in the case of systemically-injected stem cells, or regenerate the affected area of necrotic and inflammatory tissue as in the case of local injection into the lesion. Stem cells are a wide variety of cells including pluripotent stem cells or differentiated stem cells. The hazards range from rejection to symptomatic manifestations as fever or increase infection.\nObjectives\nThe objective of this Cochrane systematic review is to assess the effects of stem cell transplantation compared to standard of care alone or with placebo on efficacy and safety outcomes in patients with refractory CD.\nSearch methods\nWe searched MEDLINE, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), and clinical trial registries (Clinicaltrials.gov, World Health Organization-International Clinical Trials Registry Platform WHO ICTRP) from inception to 19 March 2021, without any language, publication year, or publication status restrictions. In addition, we searched references of included studies and review articles for further references. An update of the published studies was done during the writing of the review.\nSelection criteria\nWe included only randomised controlled trials (RCTs) that assessed the effectiveness and safety of SCT in refractory CD versus standard care alone (control) or with placebo.\nData collection and analysis\nTwo review authors (SEN and SFA) independently screened the studies retrieved from the search results for inclusion, extracted data and assessed the risk of bias. Any disagreement was resolved through a consensus between the authors. We used standard methodological procedures expected by Cochrane.\nMain results\nWe conducted our search on 19 March 2021 and identified 639 records. We added two records by a manual search of the published reviews on the topic to a total of 641 records. The Covidence program removed 125 duplicates making a total of 516 reports. Two review authors (SEN and SFA) screened titles and abstracts and excluded 451 records with the remaining 65 for full-text records screened independently by the two authors; only 18 studies were considered for inclusion.\nWe included seven RCTs with a total of 442 participants for the meta-analysis. The intervention group included 234 patients, and the control group included 208 patients. Nine trials are ongoing and, two abstracts are awaiting classification.\nAll patients in the control and intervention groups received the standard therapy for CD. Only three studies used blinding methods for the control group in the form of a placebo, with one study of the three stated that the blinding method was inefficient. The patients and personnel were aware of the intervention in the rest of the four studies as they were open-label trials. 
However, the effect of unblinding was balanced by the low risk of detection bias in five of the included studies.\nThe evidence is uncertain about the effect of SCT on achieving clinical remission as compared to control/placebo (risk ratio (RR) 1.88, 95% confidence interval (CI) 0.80 to 4.41; 3 studies; low-certainty evidence).\nThe evidence is very uncertain about the effect of SCT on achieving Crohn’s Disease Activity Index (CDAI) <150 at 24 weeks compared to control (RR 1.02, 95% CI 0.67 to 1.56; 4 studies; very low-certainty evidence).\nSCT is likely to achieve fistula closure as compared to the control/placebo both at short-term (RR 1.48, 95% CI 1.12 to 1.96; low-certainty evidence) and long-term (RR 1.42, 95% CI 1.09 to 1.87; 4 studies; low-certainty evidence) follow-up.\nThe evidence is very uncertain about whether SCT changes the number of total adverse events as compared to the control/placebo (RR 0.99, 95% CI 0.88 to 1.13; 4 studies; very low-certainty evidence). However, SCT is likely to increase the number of serious adverse events as compared to the control/placebo (RR 1.22, 95% CI 0.88 to 1.67; 7 studies; low-certainty evidence).\nThe evidence is very uncertain about whether SCT decreases withdrawals due to adverse events as compared to the control/placebo (RR 0.78, 95% CI 0.32 to 1.89; 3 studies; very low-certainty evidence).\nFunding by pharmaceutical companies was found in three studies, with one including more than 50% of our studied population.\nAuthors' conclusions\nSCT shows an uncertain effect on clinical remission, with low-certainty evidence. SCT shows an uncertain effect on achieving a CDAI score <150 after 24 weeks of treatment, with very low-certainty evidence. SCT shows beneficial effects on fistula closure during short- and long-term follow-up, with low-certainty evidence for both outcomes. There was no change in the total number of adverse events with SCT as compared to control, with very low-certainty evidence, while there was a moderate increase in the number of serious adverse events in the SCT group as compared to the control, with low-certainty evidence. Withdrawal due to adverse events was slightly higher in the control group, with very low-certainty evidence.\nAll the participants were refractory to standard medical treatment, but the number of participants was small, which may limit the generalizability of the results. Further research is needed for validation. More objective outcomes are needed in the assessment of stem cell effectiveness in the treatment of Crohn's disease, especially the intestinal CD subtype, with standardization of the dose, methods of stem cell preparation, route of administration, and inclusion criteria of the studies to achieve clear results.\n\nGiven the provided abstract, please respond to the following query: \"What is Crohn's disease?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It is an autoimmune (fighting the patient's own body) disease-causing inflammation and stenosis of the bowel, or causing a bowel opening in the skin (fistula). CD usually follows an \"on and off\" pattern, it also affects the mortality, morbidity, and quality of life of the patients. Standard treatment of CD includes drugs that suppress the immunity of the patient including anti-inflammatory, immunosuppressive, and biological drugs. However, one-third of patients do not respond to medical or surgical treatment."
}
] |
query-laysum | 1056 | [
{
"role": "user",
"content": "Abstract: Background\nNon-specific low back pain (LBP) is a common condition. It is reported to be a major health and socioeconomic problem associated with work absenteeism, disability and high costs for patients and society. Exercise is a modestly effective treatment for chronic LBP. However, current evidence suggests that no single form of exercise is superior to another. Among the most commonly used exercise interventions is motor control exercise (MCE). MCE intervention focuses on the activation of the deep trunk muscles and targets the restoration of control and co-ordination of these muscles, progressing to more complex and functional tasks integrating the activation of deep and global trunk muscles. While there are previous systematic reviews of the effectiveness of MCE, recently published trials justify an updated systematic review.\nObjectives\nTo evaluate the effectiveness of MCE in patients with chronic non-specific LBP.\nSearch methods\nWe conducted electronic searches in CENTRAL, MEDLINE, EMBASE, five other databases and two trials registers from their inception up to April 2015. We also performed citation tracking and searched the reference lists of reviews and eligible trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) that examined the effectiveness of MCE in patients with chronic non-specific LBP. We included trials comparing MCE with no treatment, another treatment or that added MCE as a supplement to other interventions. Primary outcomes were pain intensity and disability. We considered function, quality of life, return to work or recurrence as secondary outcomes. All outcomes must have been measured with a valid and reliable instrument.\nData collection and analysis\nTwo independent review authors screened the search results, assessed risk of bias and extracted the data. A third independent review author resolved any disagreement. We assessed risk of bias using the Cochrane Back and Neck (CBN) Review Group expanded 12-item criteria. We extracted mean scores, standard deviations and sample sizes from the included trials, and if this information was not provided we calculated or estimated them using methods recommended in the Cochrane Handbook. We also contacted the authors of the trials for any missing or unclear information. We considered the following time points: short-term (less than three months after randomisation); intermediate (at least three months but less than 12 months after randomisation); and long-term (12 months or more after randomisation) follow-up. We assessed heterogeneity by visual inspection of the forest plots, and by calculating the Chi2 test and the I2 statistic. We combined results in a meta-analysis expressed as mean difference (MD) and 95% confidence interval (CI). We assessed the overall quality of the evidence using the GRADE approach.\nMain results\nWe included 29 trials (n = 2431) in this review. The study sample sizes ranged from 20 to 323 participants. We considered a total of 76.6% of the included trials to have a low risk of bias, representing 86% of all participants. There is low to high quality evidence that MCE is not clinically more effective than other exercises for all follow-up periods and outcomes tested. When compared with minimal intervention, there is low to moderate quality evidence that MCE is effective for improving pain at short, intermediate and long-term follow-up with medium effect sizes (long-term, MD –12.97; 95% CI –18.51 to –7.42). 
There was also a clinically important difference for the outcomes function and global impression of recovery compared with minimal intervention. There is moderate to high quality evidence that there is no clinically important difference between MCE and manual therapy for all follow-up periods and outcomes tested. Finally, there is very low to low quality evidence that MCE is clinically more effective than exercise and electrophysical agents (EPA) for pain, disability, global impression of recovery and quality of life with medium to large effect sizes (pain at short term, MD –30.18; 95% CI –35.32 to –25.05). Minor or no adverse events were reported in the included trials.\nAuthors' conclusions\nThere is very low to moderate quality evidence that MCE has a clinically important effect compared with a minimal intervention for chronic low back pain. There is very low to low quality evidence that MCE has a clinically important effect compared with exercise plus EPA. There is moderate to high quality evidence that MCE provides similar outcomes to manual therapies and low to moderate quality evidence that it provides similar outcomes to other forms of exercises. Given the evidence that MCE is not superior to other forms of exercise, the choice of exercise for chronic LBP should probably depend on patient or therapist preferences, therapist training, costs and safety.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "MCE probably provides better improvements in pain, function and global impression of recovery than minimal intervention at all follow-up periods. MCE may provide slightly better improvements than exercise and electrophysical agents for pain, disability, global impression of recovery and the physical component of quality of life in the short and intermediate term. There is probably little or no difference between MCE and manual therapy for all outcomes and follow-up periods. Little or no difference is observed between MCE and other forms of exercise. Given the minimal evidence that MCE is superior to other forms of exercise, the choice of exercise for chronic LBP should probably depend on patient or therapist preferences, therapist training, costs and safety."
}
] |
query-laysum | 1057 | [
{
"role": "user",
"content": "Abstract: Background\nA detailed summary and meta-analysis of the dose-related effect of pravastatin on lipids is not available.\nObjectives\nPrimary objective\nTo assess the pharmacology of pravastatin by characterizing the dose-related effect and variability of the effect of pravastatin on the surrogate marker: low-density lipoprotein (LDL cholesterol). The effect of pravastatin on morbidity and mortality is not the objective of this systematic review.\nSecondary objectives\n• To assess the dose-related effect and variability of effect of pravastatin on the following surrogate markers: total cholesterol; high-density lipoprotein (HDL cholesterol); and triglycerides. \n• To assess the effect of pravastatin on withdrawals due to adverse effects.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched the following databases for randomized controlled trials (RCTs) up to September 2021: CENTRAL (2021, Issue 8), Ovid MEDLINE, Ovid Embase, Bireme LILACS, the WHO International Clinical Trials Registry Platform, and ClinicalTrials.gov. We also contacted authors of relevant papers regarding further published and unpublished work. The searches had no language restrictions.\nSelection criteria\nRandomized placebo-controlled trials evaluating the dose response of different fixed doses of pravastatin on blood lipids over a duration of three to 12 weeks in participants of any age with and without evidence of cardiovascular disease.\nData collection and analysis\nTwo review authors independently assessed eligibility criteria for studies to be included, and extracted data. We entered lipid data from placebo-controlled trials into Review Manager 5 as continuous data and withdrawal due to adverse effects (WDAEs) data as dichotomous data. We searched for WDAEs information from all trials. We assessed all trials using Cochrane's risk of bias tool under the categories of sequence generation, allocation concealment, blinding, incomplete outcome data, selective reporting, and other potential biases.\nMain results\nSixty-four RCTs evaluated the dose-related efficacy of pravastatin in 9771 participants. The participants were of any age, with and without evidence of cardiovascular disease, and pravastatin effects were studied within a treatment period of three to 12 weeks. Log dose-response data over the doses of 5 mg to 160 mg revealed strong linear dose-related effects on blood total cholesterol and LDL cholesterol, and a weak linear dose-related effect on blood triglycerides. There was no dose-related effect of pravastatin on blood HDL cholesterol. Pravastatin 10 mg/day to 80 mg/day reduced LDL cholesterol by 21.7% to 31.9%, total cholesterol by 16.1% to 23.3%,and triglycerides by 5.8% to 20.0%. The certainty of evidence for these effects was judged to be moderate to high. For every two-fold dose increase there was a 3.4% (95% confidence interval (CI) 2.2 to 4.6) decrease in blood LDL cholesterol. This represented a dose-response slope that was less than the other studied statins: atorvastatin, rosuvastatin, fluvastatin, pitavastatin and cerivastatin. From other systematic reviews we conducted on statins for its effect to reduce LDL cholesterol, pravastatin is similar to fluvastatin, but has a decreased effect compared to atorvastatin, rosuvastatin, pitavastatin and cerivastatin. The effect of pravastatin compared to placebo on WADES has a risk ratio (RR) of 0.81 (95% CI 0.63 to 1.03). 
The certainty of evidence was judged to be very low.\nAuthors' conclusions\nPravastatin lowers blood total cholesterol, LDL cholesterol and triglyceride in a dose-dependent linear fashion. This review did not provide a good estimate of the incidence of harms associated with pravastatin because of the lack of reporting of adverse effects in 48.4% of the randomized placebo-controlled trials.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "How do different doses of pravastatin affect fats in our blood?"
}
] |
query-laysum | 1058 | [
{
"role": "user",
"content": "Abstract: Background\nStrontium ranelate is a new treatment for osteoporosis therefore, its benefits and harms need to be known.\nObjectives\nTo determine the efficacy and safety of strontium ranelate for the treatment and prevention of postmenopausal osteoporosis.\nSearch methods\nWe searched MEDLINE (1996-March 2005), EMBASE (1996-week 9 2005), the Cochrane Library (1996-Issue 1 2005), reference lists of relevant articles and conference proceedings from the last two years. Additional data was sought from authors.\nSelection criteria\nWe included randomized controlled trials (RCTs) of at least one year duration comparing strontium ranelate to placebo and reporting fracture incidence, bone mineral density (BMD) or adverse events in postmenopausal women. Treatment population was defined as women with prevalent vertebral fractures and/or lumbar spine BMD T-score < -2.5 SD.\nData collection and analysis\nTwo reviewers independently determined study eligibility, assessed study validity, graded the evidence and extracted relevant data. Disagreements were resolved by consensus. RCTs were grouped by dose and treatment duration. Where possible, meta-analysis was conducted using the random effects model.\nMain results\nFour trials met the inclusion criteria. Three had losses to follow-up > 20% and only one provided an adequate description of allocation concealment. Three included a treatment population (0.5 to 2 g/day of strontium ranelate) and one a prevention population (0.125, 0.5 and 1 g/day). A 37% reduction in vertebral fractures (RR 0.63, 95% CI 0.56, 0.71), and a 14% reduction in non-vertebral fractures with the upper boundary of the confidence interval approaching one (RR 0.86, 95% CI 0.75, 0.98), were demonstrated over three years with 2 g of strontium ranelate daily in a treatment population. An increase in BMD was shown at all sites after two to three years of treatment in both populations. Lower doses of strontium ranelate were superior to placebo and the highest dose demonstrated the greatest reduction in vertebral fractures and increase in BMD. An increased risk of diarrhea with 2 g of strontium ranelate daily was found; however, adverse events did not affect the risk of discontinuing treatment nor did it increase the risk of serious side effects, gastritis or death. Additional data suggests that the risk of vascular and nervous system side-effects is increased with taking 2 g of strontium ranelate daily over three to four years.\nAuthors' conclusions\nThere is silver level evidence (www.cochranemsk.org) to support the efficacy of strontium ranelate for the reduction of fractures (vertebral and to a lesser extent, non-vertebral) in postmenopausal osteoporotic women and an increase in BMD in postmenopausal women with/without osteoporosis. Diarrhea may occur, however, adverse events leading to study withdrawal were not significantly increased. Potential vascular and neurological side-effects need to be further explored.\n\nGiven the provided abstract, please respond to the following query: \"What are the results of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Women in the studies took 2 g of strontium ranelate or a placebo (fake tablets or powder). After 2 to 3 years, the number of fractures that occurred and bone mineral density was measured. Bone mineral density is a lab test to measure how dense or strong bones are in the hip, spine or neck. The higher the bone density the better."
}
] |
query-laysum | 1059 | [
{
"role": "user",
"content": "Abstract: Background\nVirtual reality simulation uses computer-generated imagery to present a simulated training environment for learners. This review seeks to examine whether there is evidence to support the introduction of virtual reality surgical simulation into ear, nose and throat surgical training programmes.\nObjectives\n1. To assess whether surgeons undertaking virtual reality simulation-based training achieve surgical ('patient') outcomes that are at least as good as, or better than, those achieved through conventional training methods.\n2. To assess whether there is evidence from either the operating theatre, or from controlled (simulation centre-based) environments, that virtual reality-based surgical training leads to surgical skills that are comparable to, or better than, those achieved through conventional training.\nSearch methods\nThe Cochrane Ear, Nose and Throat Disorders Group (CENTDG) Trials Search Co-ordinator searched the CENTDG Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 6); PubMed; EMBASE; ERIC; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 27 July 2015.\nSelection criteria\nWe included all randomised controlled trials and controlled trials comparing virtual reality training and any other method of training in ear, nose or throat surgery.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration. We evaluated both technical and non-technical aspects of skill competency.\nMain results\nWe included nine studies involving 210 participants. Out of these, four studies (involving 61 residents) assessed technical skills in the operating theatre (primary outcomes). Five studies (comprising 149 residents and medical students) assessed technical skills in controlled environments (secondary outcomes). The majority of the trials were at high risk of bias. We assessed the GRADE quality of evidence for most outcomes across studies as 'low'.\nOperating theatre environment (primary outcomes)\nIn the operating theatre, there were no studies that examined two of three primary outcomes: real world patient outcomes and acquisition of non-technical skills. The third primary outcome (technical skills in the operating theatre) was evaluated in two studies comparing virtual reality endoscopic sinus surgery training with conventional training. In one study, psychomotor skill (which relates to operative technique or the physical co-ordination associated with instrument handling) was assessed on a 10-point scale. A second study evaluated the procedural outcome of time-on-task. The virtual reality group performance was significantly better, with a better psychomotor score (mean difference (MD) 3.20, 95% CI 2.05 to 4.34; 10-point scale) and a shorter time taken to complete the operation (MD -5.50 minutes, 95% CI -9.97 to -1.03).\nControlled training environments (secondary outcomes)\nIn a controlled environment five studies evaluated the technical skills of surgical trainees (one study) and medical students (three studies). One study was excluded from the analysis.\nSurgical trainees: One study (80 participants) evaluated the technical performance of surgical trainees during temporal bone surgery, where the outcome was the quality of the final dissection. 
There was no difference in the end-product scores between virtual reality and cadaveric temporal bone training.\nMedical students: Two other studies (40 participants) evaluated technical skills achieved by medical students in the temporal bone laboratory. Learners' knowledge of the flow of the operative procedure (procedural score) was better after virtual reality than conventional training (SMD 1.11, 95% CI 0.44 to 1.79). There was also a significant difference in end-product score between the virtual reality and conventional training groups (SMD 2.60, 95% CI 1.71 to 3.49). One study (17 participants) revealed that medical students acquired anatomical knowledge (on a scale of 0 to 10) better during virtual reality than during conventional training (MD 4.3, 95% CI 2.05 to 6.55). No studies in a controlled training environment assessed non-technical skills.\nAuthors' conclusions\nThere is limited evidence to support the inclusion of virtual reality surgical simulation into surgical training programmes, on the basis that it can allow trainees to develop technical skills that are at least as good as those achieved through conventional training. Further investigations are required to determine whether virtual reality training is associated with better real world outcomes for patients and the development of non-technical skills. Virtual reality simulation may be considered as an additional learning tool for medical students.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We assessed the quality of the evidence in this review for most outcomes as 'low' (using the GRADE system). The key reasons for this were issues related to study design. The evidence in this review is up to date to 27 July 2015."
}
] |
query-laysum | 1060 | [
{
"role": "user",
"content": "Abstract: Background\nOsteoarthritis, a common joint disorder, is one of the leading causes of disability. Chondroitin has emerged as a new treatment. Previous meta-analyses have shown contradictory results on the efficacy of chondroitin. This, in addition to the publication of more trials, necessitates a systematic review.\nObjectives\nTo evaluate the benefit and harm of oral chondroitin for treating osteoarthritis compared with placebo or a comparator oral medication including, but not limited to, nonsteroidal anti-inflammatory drugs (NSAIDs), analgesics, opioids, and glucosamine or other \"herbal\" medications.\nSearch methods\nWe searched seven databases up to November 2013, including the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE, CINAHL, EMBASE, Science Citation Index (Web of Science) and Current Controlled Trials. We searched the US Food and Drug Administration (FDA) and European Medicines Agency (EMEA) websites for adverse effects. Trial registers were not searched.\nSelection criteria\nAll randomized or quasi-randomized clinical trials lasting longer than two weeks, studying adults with osteoarthritis in any joint, and comparing chondroitin with placebo, an active control such as NSAIDs, or other \"herbal\" supplements such as glucosamine.\nData collection and analysis\nTwo review authors independently performed all title assessments, data extractions, and risk of bias assessments.\nMain results\nForty-three randomized controlled trials including 4,962 participants treated with chondroitin and 4,148 participants given placebo or another control were included. The majority of trials were in knee OA, with few in hip and hand OA. Trial duration varied from 1 month to 3 years. Participants treated with chondroitin achieved statistically significantly and clinically meaningful better pain scores (0-100) in studies less than 6 months than those given placebo with an absolute risk difference of 10% lower (95% confidence interval (CI), 15% to 6% lower; number needed to treat (NNT) = 5 (95% CI, 3 to 8; n = 8 trials) (level of evidence, low; risk of bias, high); but there was high heterogeneity between the trials (T2 = 0.07; I2 = 70%, which was not easily explained by differences in risk of bias or study sample size). In studies longer than 6 months, the absolute risk difference for pain was 9% lower (95% CI 18% lower to 0%); n = 6 trials; T2 = 0.18; I2 = 83% ), again with low level of evidence.\nFor the Western Ontario and McMaster Universities Osteoarthritis Index Minimal Clinically Important Improvement (WOMAC MCII Pain subscale) outcome, a reduction in knee pain by 20% was achieved by 53/100 in the chondroitin group versus 47/100 in the placebo group, an absolute risk difference of 6% (95% CI 1% to 11%), (RR 1.12, 95% CI 1.01 to 1.24; T2 = 0.00; I2 = 0%) (n = 2 trials, 1253 participants; level of evidence, high; risk of bias, low).\nDifferences in Lequesne's index (composite of pain,function and disability) statistically significantly favoured chondroitin as compared with placebo in studies under six months, with an absolute risk difference of 8% lower (95% CI 12% to 5% lower; T2= 0.78; n = 7 trials) (level of evidence, moderate; risk of bias, unclear), also clinically meaningful. Loss of minimum joint space width in the chondroitin group was statistically significantly less than in the placebo group, with a relative risk difference of 4.7% less (95% CI 1.6% to 7.8% less; n = 2 trials) (level of evidence, high; risk of bias, low). 
Chondroitin was associated with statistically significantly lower odds of serious adverse events compared with placebo with Peto odds ratio of 0.40 (95% CI 0.19 to 0.82; n = 6 trials) (level of evidence, moderate). Chondroitin did not result in statistically significant numbers of adverse events or withdrawals due to adverse events compared with placebo or another drug. Adverse events were reported in a limited fashion, with some studies providing data and others not.\nComparisons of chondroitin taken alone or in combination with glucosamine or another supplement showed a statistically significant reduction in pain (0-100) when compared with placebo or an active control, with an absolute risk difference of 10% lower (95% CI 14% to 5% lower); NNT = 4 (95% CI 3 to 6); T2 = 0.33; I2 = 91%; n = 17 trials) (level of evidence, low). For physical function, chondroitin in combination with glucosamine or another supplement showed no statistically significant difference from placebo or an active control, with an absolute risk difference of 1% lower (95% CI 6% lower to 3% higher with T2 = 0.04; n = 5 trials) (level of evidence, moderate). Differences in Lequesne's index statistically significantly favoured chondroitin as compared with placebo, with an absolute risk difference of 8% lower (95% CI, 12% to 4% lower; T2 = 0.12; n = 10 trials) (level of evidence, moderate). Chondroitin in combination with glucosamine did not result in statistically significant differences in the numbers of adverse events, withdrawals due to adverse events, or in the numbers of serious adverse events compared with placebo or with an active control.\nThe beneficial effects of chondroitin in pain and Lequesne's index persisted when evidence was limited to studies with adequate blinding or studies that used appropriate intention to treat (ITT) analyses. These beneficial effects were uncertain when we limited data to studies with appropriate allocation concealment or a large study sample (> 200) or to studies without pharmaceutical funding.\nAuthors' conclusions\nA review of randomized trials of mostly low quality reveals that chondroitin (alone or in combination with glucosamine) was better than placebo in improving pain in participants with osteoarthritis in short-term studies. The benefit was small to moderate with an 8 point greater improvement in pain (range 0 to 100) and a 2 point greater improvement in Lequesne's index (range 0 to 24), both likely clinically meaningful. These differences persisted in some sensitivity analyses and not others. Chondroitin had a lower risk of serious adverse events compared with control. More high-quality studies are needed to explore the role of chondroitin in the treatment of osteoarthritis. The combination of some efficacy and low risk associated with chondroitin may explain its popularity among patients as an over-the-counter supplement.\n\nGiven the provided abstract, please respond to the following query: \"Pain level after 6 months (lower score is better)\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- People who took chondroitin scored 10 points lower on 0 to 100 pain scale than those who took a placebo (10% absolute difference).\n- People who took chondroitin rated their pain at 18 on a 0 to 100 scale.\n- People who took placebo rated their pain at 28 on a 0 to 100 scale.\nIn studies longer than 6 months, we are uncertain whether pain is reduced more by chondroitin than placebo."
}
] |
query-laysum | 1061 | [
{
"role": "user",
"content": "Abstract: Background\nThe choice of surgical approach for the management of subaxial cervical spine facet dislocations is a controversial subject amongst spine surgeons. Reasons for this include differences in the technical familiarity and experience of surgeons with the different surgical approaches, and variable interpretation of image studies regarding the existence of a traumatic intervertebral disc herniation and of the neurological status of the patient. Moreover, since the approaches are dissimilar, important variations are likely in neurological, radiographical and clinical outcomes.\nObjectives\nTo compare the effects (benefits and harms) of the different surgical approaches used for treating adults with acute cervical spine facet dislocation.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register (9 May 2014), The Cochrane Central Register of Controlled Trials (The Cochrane Library, 2014 Issue 4), MEDLINE (1946 to April Week 5 2014), MEDLINE In-Process & Other Non-Indexed Citations (8 May 2013), EMBASE (1980 to 2014 Week 18), Latin American and Caribbean Health Sciences (9 May 2014), trial registries, conference proceedings and reference lists of articles to May 2014.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials that compared surgical approaches for the management of adults with acute cervical spine facet dislocations with and without spinal cord injury.\nData collection and analysis\nTwo review authors independently selected studies, assessed risk of bias and extracted data.\nMain results\nWe included one randomised and one quasi-randomised controlled trial involving a total of 94 participants and reporting results for a maximum of 84 participants. One trial included patients with spinal cord injuries and the other included patients without spinal cord injuries. Both trials compared anterior versus posterior surgical approaches. Both trials were at high risk of bias, including selection bias (one trial), performance bias (both trials) and attrition bias (one trial). Data were pooled for one outcome only: non-union. Reflecting also the imprecision of the results, the evidence was deemed to be of very low quality for all outcomes; which means that our level of uncertainty about the estimates is high.\nNeither trial found differences between the two approaches in neurological recovery or status, as shown in one study by small clinically insignificant differences in NASS (Northern American Spine Society) neurological scores (0 to 100: optimal score) at one year of follow-up: anterior mean score: 85.23 versus posterior mean score: 83.86; mean difference (MD) 1.37 favouring anterior approach, 95% confidence interval (CI) -9.76 to 12.50; 33 participants; 1 study). The same trial found no relevant between-approach differences at one year in patient-reported quality of life measured using the 36-item Short Form Survey physical (MD -0.08, 95% CI -7.26 to 7.10) and mental component scores (MD 2.88, 95% CI -3.32 to 9.08). Neither trial found evidence of significant differences in long-term pain, or non-union (2/38 versus 2/46; risk ratio (RR) 1.18, 95% CI 0.04 to 34.91). One trial found better sagittal and more 'normal' alignment after the anterior approach (MD -10.31 degrees favouring anterior approach, 95% CI -14.95 degrees to -5.67 degrees), while the other trial reported no significant differences in cervical alignment. 
There was insufficient evidence to indicate between-group differences in medical adverse events, rates of instrumentation failure and infection. One trial found that several participants had voice and swallowing disorders after anterior approach surgery (11/20) versus none (0/22) in the posterior approach group (RR 25.19, 95% CI 1.58 to 401.58); all had recovered by three months.\nAuthors' conclusions\nVery low quality evidence from two trials indicated little difference in long-term neurological status, pain or patient-reported quality of life between anterior and posterior surgical approaches to the management of individuals with subaxial cervical spine facet dislocations. Sagittal alignment may be better achieved with the anterior approach. There was insufficient evidence available to indicate between-group differences in medical adverse events, rates of instrumentation failure and infection. The disorders of the voice and swallowing that occurred exclusively in the anterior approach group all resolved by three months. We are very uncertain about this evidence and thus we cannot say whether one approach is better than the other. There was no evidence available for other approaches. Further higher quality multicentre randomised trials are warranted.\n\nGiven the provided abstract, please respond to the following query: \"Summary of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Neither study found differences between the two approaches in neurological status and pain at one year. One study also found no differences between the two approaches in patient-reported quality of life. Although one study found that the anterior approach resulted in more normal curvature of the neck, the other study reported finding no difference between the two approaches with regard to the alignment of the neck vertebrae. The evidence was insufficient to indicate differences between the two approaches in medical adverse events, rates of instrumentation failure and infection. Although over half (11) of 20 people in the anterior approach group in one study had voice and swallowing disorders, these all resolved by three months."
}
] |
query-laysum | 1062 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of a Cochrane Review first published in 2001, and last updated in 2013. This review is one in a series of Cochrane Reviews investigating pair-wise monotherapy comparisons.\nEpilepsy is a common neurological condition in which abnormal electrical discharges from the brain cause recurrent unprovoked seizures. It is believed that with effective drug treatment, up to 70% of individuals with active epilepsy have the potential to become seizure-free and go into long-term remission shortly after starting drug therapy with a single antiepileptic drug in monotherapy.\nWorldwide, particularly in the developing world, phenytoin and phenobarbitone are commonly used antiepileptic drugs, primarily because they are inexpensive. The aim of this review is to summarise data from existing trials comparing phenytoin and phenobarbitone.\nObjectives\nTo review the time to treatment failure, remission and first seizure with phenobarbitone compared with phenytoin when used as monotherapy in people with focal onset seizures (simple or complex focal and secondarily generalised), or generalised onset tonic-clonic seizures (with or without other generalised seizure types).\nSearch methods\nFor the latest update, we searched the following databases on 21 August 2018: the Cochrane Register of Studies (CRS Web), which includes Cochrane Epilepsy's Specialized Register and CENTRAL; MEDLINE; the US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov); and the World Health Organization International Clinical Trials Registry Platform (ICTRP). We handsearched relevant journals and contacted pharmaceutical companies, original trial investigators, and experts in the field.\nSelection criteria\nRandomised controlled trials comparing monotherapy with either phenobarbitone or phenytoin in children or adults with focal onset seizures or generalised onset tonic-clonic seizures.\nData collection and analysis\nThis was an individual participant data (IPD), review. Our primary outcome was time to treatment failure. Our secondary outcomes were time to first seizure post-randomisation, time to six-month remission and time to 12-month remission. We used Cox proportional hazards regression models to obtain trial-specific estimates of hazard ratios (HRs), with 95% confidence intervals (CIs), using the generic inverse variance method to obtain the overall pooled HR and 95% CI.\nMain results\nIndividual participant data were obtained for five studies, which recruited a total of 635 participants, representing 80% of 798 individuals from all seven identified eligible trials. 
For remission outcomes, an HR of less than 1 indicates an advantage for phenytoin and for first seizure and treatment failure outcomes an HR of less than 1 indicates an advantage for phenobarbitone.\nResults for the primary outcome of the review were: time to treatment failure for any reason related to treatment (pooled HR adjusted for seizure type for 499 participants: 1.61, 95% CI 1.22 to 2.12, low-certainty evidence), time to treatment failure due to adverse events (pooled HR adjusted for seizure type for 499 participants: 1.99, 95% CI 1.37 to 2.87, low-certainty evidence), time to treatment failure due to lack of efficacy (pooled HR adjusted for seizure type for 499 participants: 1.87, 95% CI 1.32 to 2.66, moderate-certainty evidence), showing a statistically significant advantage for phenytoin compared to phenobarbitone.\nFor our secondary outcomes, we did not find any statistically significant differences between phenytoin and phenobarbitone: time to first seizure post-randomisation (pooled HR adjusted for seizure type for 624 participants: 0.85, 95% CI 0.69 to 1.06, moderate-certainty evidence), time to 12-month remission (pooled HR adjusted for seizure type for 588 participants: 0.90, 95% CI 0.69 to 1.19, moderate-certainty evidence), and time to six-month remission pooled HR adjusted for seizure type for 588 participants: 0.91, 95% CI 0.71 to 1.15, moderate-certainty evidence).\nFor individuals with focal onset seizures (73% of individuals contributing to analysis), numerical results were similar and conclusions the same as for analyses of all individuals and for individuals with generalised onset seizures (27% of individuals contributing to analysis), results were imprecise and no clear differences between the drugs were observed.\nSeveral confounding factors, most notably the differences in design of the trials with respect to blinding, were likely to have impacted on the results of the primary outcome 'time to treatment failure', and in turn, the treatment failure rates may have impacted on the secondary efficacy outcomes of time to first seizure and time to 12-month and six-month remission.\nAuthors' conclusions\nLow-certainty evidence from this review suggests that phenytoin may be a more effective drug than phenobarbitone in terms of treatment retention (treatment failures due to lack of efficacy or adverse events or both). Moderate-certainty evidence from this review also indicates no differences between the drugs in terms of time to seizure recurrence and seizure remission.\nHowever, the trials contributing to the analyses had methodological inadequacies and methodological design differences that may have impacted upon the results of this review. Therefore, we do not suggest that results of this review alone should form the basis of a treatment choice for a patient with newly onset seizures. We recommend that future trials should be designed to the highest quality possible with consideration of masking, choice of population, classification of seizure type, duration of follow-up, choice of outcomes and analysis, and presentation of results.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Some of the trials contributing data to the review had methodological problems, which may have introduced bias and inconsistent results into this review. Also, we believe that the difference in study design with regards to whether the treatment was masked from the patients and clinicians (e.g. with a placebo tablet) had an impact on the rates of withdrawal from the study treatments, which also is likely to have impacted on the outcomes related to seizure control\nThese problems may have affected the results of this review and we judged the quality of the evidence provided by this review to be moderate to low quality. We do not suggest using the results of this review alone for making a choice between phenytoin or phenobarbitone for the treatment of epilepsy. Future trials comparing these drugs or any other antiepileptic drugs should be designed using high-quality methods to ensure results are also of high quality."
}
] |
query-laysum | 1063 | [
{
"role": "user",
"content": "Abstract: Background\nGlaucoma affects more than 70 million people worldwide, with about 10% being bilaterally blind, making it the leading cause of irreversible blindness globally. In patients with advanced glaucoma or those who have failed medical treatment without achieving adequate intraocular pressure (IOP) control, trabeculectomy (glaucoma filtration surgery where an ostium is created into the anterior chamber from underneath a partial thickness scleral flap to allow for aqueous flow out of the anterior chamber intointo the subconjunctival space forming a filtering bleb) and aqueous shunt surgery for more complex and refractory cases remain the mainstay therapies. Proliferation of fibrous tissue around an implanted aqueous shunt may block the diffusion of aqueous humour. Mitomycin C (MMC) is one of two commonly used adjunct antifibrotic agents used during aqueous shunt surgery to prevent proliferation of fibrous tissue. However, the effectiveness and safety of the use of intraoperative MMC during aqueous shunt surgery has not been established.\nObjectives\nTo evaluate the effectiveness and safety of MMC versus no MMC used during aqueous shunt surgery for reducing IOP in primary and secondary glaucoma.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2018, Issue 2); Ovid MEDLINE; Embase.com; PubMed; Latin American and Caribbean Health Sciences Literature Database (LILACS); ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any date or language restrictions in the electronic search for trials. We last searched the electronic databases on 13 February 2018.\nSelection criteria\nWe included randomized controlled trials (RCTs) in which one group of participants received MMC during aqueous shunt surgery and another group did not. We did not exclude studies based on outcomes.\nData collection and analysis\nTwo review authors independently reviewed titles and abstracts from the literature searches. We obtained full-text reports of potentially relevant studies and assessed them for inclusion. Two review authors independently extracted data related to study characteristics, risk of bias, and outcomes. We used standard methodological procedures expected by Cochrane.\nMain results\nWe included five RCTs, with a total of 333 eyes with glaucoma randomized, and identified two ongoing trials. All included trials examined the effect of MMC versus no MMC when used during aqueous shunt surgery for glaucoma. The trials included participants with different types of uncontrolled glaucoma. One study was conducted in China, one in Saudi Arabia, two in the USA, and one study was a multicenter study conducted in Brazil, Canada, Scotland, and USA. We assessed all trials as having overall unclear risk of bias due to incomplete reporting of study methods and outcomes; two of the five trials were reported only as conference abstracts.\nNone of the included trials reported mean decrease from baseline in IOP; however, all five trials reported mean IOP at 12 months post-surgery. At 12 months, the effect of MMC on mean IOP compared with no MMC was unclear based on a meta-analysis of trials (mean difference -0.12 mmHg, 95% CI -2.16 to 2.41; low-certainty evidence). 
Two trials did not report sufficient information to include in meta-analysis, but reported that mean IOP was lower in the MMC group compared with the no MMC group at 12 months.\nNone of the included trials reported mean change from baseline in visual acuity; however, one trial reported lower mean LogMAR values (better vision) in the MMC group than in the no MMC group at 12 months post-surgery. None of the included studies reported the proportion of participants with stable best-corrected visual acuity. Three trials reported that loss of vision was not significantly different between groups (no data available for meta-analysis).\nNone of the included studies reported the proportion of participants with a postoperative hypertensive phase, which is defined as IOP > 21 mmHg within 3 months after surgery. Two trials reported adverse events (choroidal effusion, corneal edema, flat anterior chamber, and retinal detachment); however, due to small numbers of events and sample sizes, no clear difference between MMC and placebo groups was observed.\nAuthors' conclusions\nWe found insufficient evidence in this review to suggest MMC provides any postoperative benefit for glaucoma patients who undergo aqueous shunt surgery. Data across all five included trials were sparse and the reporting of study methods required to assess bias was inadequate. Future RCTs of this intervention should report methods in sufficient detail to permit assessment of potential bias and estimate target sample sizes based on clinically meaningful effect sizes.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included five studies with a total of 333 eyes with glaucoma. All five trials reported few details about how they were designed and carried out and few outcomes regarding IOP. Thus, we do not know whether MMC was advantageous compared with placebo. We found no clear benefit or harm for MMC versus no MMC during aqueous shunt surgery."
}
] |
query-laysum | 1064 | [
{
"role": "user",
"content": "Abstract: Background\nTranexamic acid reduces haemorrhage through its antifibrinolytic effects. In a previous version of the present review, we found that tranexamic acid may reduce mortality. This review includes updated searches and new trials.\nObjectives\nTo assess the effects of tranexamic acid versus no intervention, placebo or other antiulcer drugs for upper gastrointestinal bleeding.\nSearch methods\nWe updated the review by performing electronic database searches (Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Science Citation Index) and manual searches in July 2014.\nSelection criteria\nRandomised controlled trials, irrespective of language or publication status.\nData collection and analysis\nWe used the standard methodological procedures of the The Cochrane Collaboration. All-cause mortality, bleeding and adverse events were the primary outcome measures. We performed fixed-effect and random-effects model meta-analyses and presented results as risk ratios (RRs) with 95% confidence intervals (CIs) and used I² as a measure of between-trial heterogeneity. We analysed tranexamic acid versus placebo or no intervention and tranexamic acid versus antiulcer drugs separately. To analyse sources of heterogeneity and robustness of the overall results, we performed subgroup, sensitivity and sequential analyses.\nMain results\nWe included eight randomised controlled trials on tranexamic acid for upper gastrointestinal bleeding. Additionally, we identified one large ongoing pragmatic randomised controlled trial from which data are not yet available. Control groups were randomly assigned to placebo (seven trials) or no intervention (one trial). Two trials also included a control group randomly assigned to antiulcer drugs (lansoprazole or cimetidine). The included studies were published from 1973 to 2011. The number of participants randomly assigned ranged from 47 to 216 (median 204). All trials reported mortality. In total, 42 of 851 participants randomly assigned to tranexamic acid and 71 of 850 in the control group died (RR 0.60, 95% CI 0.42 to 0.87; P value 0.007; I² = 0%). The analysis was not confirmed when all participants in the intervention group with missing outcome data were included as treatment failures, or when the analysis was limited to trials with low risk of attrition bias. Rebleeding was diagnosed for 117 of 826 participants in the tranexamic acid group and for 146 of 825 participants in the control group (RR 0.80, 95% CI 0.64 to 1.00; P value 0.07; I² = 49%). We were able to evaluate the risk of serious adverse events on the basis of only four trials. Our analyses showed 'no evidence of a difference between tranexamic acid and control interventions regarding the risk of thromboembolic events.' Tranexamic acid appeared to reduce the risk of surgery in a fixed-effect meta-analysis (RR 0.73, 95% CI 0.56 to 0.95), but this result was no longer statistically significant in a random-effects meta-analysis (RR 0.61, 95% CI 0.35 to 1.04; P value 0.07). 
No difference was apparent between tranexamic acid and placebo in the assessment of transfusion (RR 1.02, 95% CI 0.94 to 1.11; I² = 0%), and meta-analyses that compared tranexamic acid versus antiulcer drugs did not identify beneficial or detrimental effects of tranexamic acid for any of the outcomes assessed.\nAuthors' conclusions\nThis review found that tranexamic acid appears to have a beneficial effect on mortality, but a high dropout rate in some trials means that we cannot be sure of this until the findings of additional research are published. At the time of this update in 2014, one large study (8000 participants) is in progress, so this review will be much more informative in a few years. Further examination of tranexamic acid would require inclusion of high-quality randomised controlled trials. Timing of randomisation is essential to avoid attrition bias and to limit the number of withdrawals. Future trials may use a pragmatic design and should include all participants with suspected bleeding or with endoscopically verified bleeding, as well as a tranexamic placebo arm and co-administration of pump inhibitors and endoscopic therapy. Assessment of outcome measures in such studies should be clearly defined. Endoscopic examination with appropriate control of severe bleeding should be performed, as should endoscopic verification of clinically significant rebleeding. In addition, clinical measures of rebleeding should be included. Other important outcome measures include mortality (30-day or in-hospital), need for emergency surgery or blood transfusion and adverse events (major or minor).\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review includes data from eight randomised trials on tranexamic acid. Two trials also assessed antiulcer drugs. Only one trial used additional endoscopic therapy, as the remaining trials were performed before this intervention was introduced into clinical practice."
}
] |
query-laysum | 1065 | [
{
"role": "user",
"content": "Abstract: Background\nVertebral artery stenosis (narrowing of the vertebral artery) is an important cause of posterior circulation ischaemic stroke. Medical treatment (MT) e.g. controlling risk-factors and drug treatment, surgery, and endovascular treatment (ET) are the prevailing treatment strategies for symptomatic vertebral artery stenosis. ET consist s of percutaneous transluminal angioplasty (balloon catheter through the skin), with or without stenting. However, optimal management of people with symptomatic vertebral artery stenosis has not yet been established.\nObjectives\nTo assess the safety and efficacy of percutaneous transluminal angioplasty, with or without stenting, combined with MT, compared to MT alone, in people with episodes of cerebral ischaemia due to vertebral artery stenosis.\nSearch methods\nWe searched the Cochrane Stroke Group, MEDLINE, Embase, BIOSIS, and two other indexes in Web of Science, China Biological Medicine Database, Chinese Science and Technique Journals Database, China National Knowledge Infrastructure and Wanfang Data, as well as ClinicalTrials.gov trials register and the World Health Organization (WHO) International Clinical Trials Registry Platform to 23 July 2021.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that compared ET plus MT with MT alone in treating people aged 18 years or over with symptomatic vertebral artery stenosis. We included all types of ET modalities (e.g. angioplasty alone, balloon-mounted stenting, and angioplasty followed by placement of a self-expanding stent). MT included risk factor control, antiplatelet therapy, lipid-lowering therapy, and individualised management for people with hypertension or diabetes.\nData collection and analysis\nTwo review authors independently screened potentially eligible studies, extracted data, and assessed trial quality and risk of bias. We applied the GRADE approach to assess the certainty of evidence. The primary outcomes were 30-day post-randomisation death/stroke (short-term outcome) and fatal/non-fatal stroke after 30 days post-randomisation to completion of follow-up (long-term outcome).\nMain results\nWe included three RCTs with 349 participants with symptomatic vertebral artery stenosis with a mean age of 64.4 years. The included RCTs were at low risk of bias overall. However, all included studies had a high risk of performance bias because blinding of the ET was not feasible.\nThere was no significant difference in 30-day post-randomisation deaths/strokes between ET plus MT and MT alone (risk ratio (RR) 2.33, 95% confidence interval (CI) 0.77 to 7.07; 3 studies, 349 participants; low-certainty evidence). 
There were no significant differences between ET plus MT and MT alone in fatal/non-fatal strokes in the territory of the treated vertebral artery stenosis after 30 days post-randomisation to completion of follow-up (RR 0.51, 95% CI 0.26 to 1.01; 3 studies, 349 participants; moderate-certainty evidence), ischaemic or haemorrhagic stroke during the entire follow-up period (RR 0.77, 95% CI 0.44 to 1.32; 3 studies, 349 participants; moderate-certainty evidence), death during the entire follow-up period (RR 0.78, 95% CI 0.37 to 1.62; 3 studies, 349 participants; low-certainty evidence), and stroke or transient ischaemic attack (TIA) during the entire follow-up period (RR 0.65, 95% CI 0.39 to 1.06; 2 studies, 234 participants; moderate-certainty evidence).\nAuthors' conclusions\nThis Cochrane Review provides low- to moderate-certainty evidence indicating that there are no significant differences in either short- or long-term risks of stroke, death, or TIA between people with symptomatic vertebral artery stenosis treated with ET plus MT and those treated with MT alone.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched various electronic databases of clinical trial data on 23 July 2021, to find studies that would help us answer the research question."
}
] |
query-laysum | 1066 | [
{
"role": "user",
"content": "Abstract: Background\nIt is well-established that experiencing sexual abuse and violence can have a range of detrimental impacts; a wide variety of interventions exist to support survivors in the aftermath. Understanding the experiences and perspectives of survivors receiving such interventions, along with those of their family members, and the professionals who deliver them is important for informing decision making as to what to offer survivors, for developing new interventions, and enhancing their acceptability.\nObjectives\nThis review sought to:\n1. identify, appraise and synthesise qualitative studies exploring the experiences of child and adult survivors of sexual abuse and violence, and their caregivers, regarding psychosocial interventions aimed at supporting survivors and preventing negative health outcomes in terms of benefits, risks/harms and barriers;\n2. identify, appraise and synthesise qualitative studies exploring the experiences of professionals who deliver psychosocial interventions for sexual abuse and violence in terms of perceived benefits, risks/harms and barriers for survivors and their families/caregivers;\n3. develop a conceptual understanding of how different factors influence uptake, dropout or completion, and outcomes from psychosocial interventions for sexual abuse and violence;\n4. develop a conceptual understanding of how features and types of interventions responded to the needs of different user/survivor groups (e.g. age groups; types of abuse exposure; migrant populations) and contexts (healthcare/therapeutic settings; low- and middle-income countries (LMICs));\n5. explore how the findings of this review can enhance our understanding of the findings from the linked and related reviews assessing the effectiveness of interventions aimed at supporting survivors and preventing negative health outcomes.\nSearch methods\nIn August 2021 we searched MEDLINE, Embase, PsycINFO and nine other databases. We also searched for unpublished reports and qualitative reports of quantitative studies in a linked systematic review, together with reference checking, citation searches and contacting authors and other researchers to identify relevant studies.\nSelection criteria\nWe included qualitative and mixed-methods studies (with an identifiable qualitative component) that were linked to a psychosocial intervention aimed at supporting survivors of sexual abuse and violence. Eligible studies focused on at least one of three participant groups: survivors of any age, gender, sexuality, ethnicity or [dis]ability who had received a psychosocial intervention; their carers, family members or partners; and professionals delivering such interventions. We placed no restrictions in respect of settings, locations, intervention delivery formats or durations.\nData collection and analysis\nSix review authors independently assessed the titles, abstracts and full texts identified. We extracted data using a form designed for this synthesis, then used this information and an appraisal of data richness and quality in order to stratify the studies using a maximum variation approach. We assessed the methodological limitations using the Critical Skills Appraisal Programme (CASP) tool. We coded directly onto the sampled papers using NVivo and synthesised data using a thematic synthesis methodology and used the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach to assess our confidence in each finding. 
We used a narrative synthesis and matrix model to integrate our qualitative evidence synthesis (QES) findings with those of intervention review findings.\nMain results\nWe identified 97 eligible studies and sampled 37 of them for our analysis. Most sampled studies were from high-income countries, with four from middle-income and two from low-income countries. In 27 sampled studies, the participants were survivors, in three they were intervention facilitators. Two included all three of our stakeholder groups, and five included two of our groups. The studies explored a wide range of psychosocial interventions, with only one type of intervention explored in more than one study.\nThe review indicates that features associated with the context in which interventions were delivered had an impact on how individuals accessed and experienced interventions. This included organisational features, such as staff turnover, that could influence survivors' engagement with interventions; the setting or location in which interventions were delivered; and the characteristics associated with who delivered the interventions.\nStudies that assess the effectiveness of interventions typically assess their impact on mental health; however, as well as finding benefits to mental health, our QES found that study participants felt interventions also had positive impacts on their physical health, mood, understanding of trauma, interpersonal relationships and enabled them to re-engage with a wide range of areas in their lives.\nParticipants explained that features of interventions and their contexts that best enabled them to benefit from interventions were also often things that could be a barrier to benefiting from interventions. For example, the relationship with the therapist, when open and warm was a benefit, but if such a relationship could not be achieved, it was a barrier. Survivors' levels of readiness and preparedness to both start and end interventions could have positive (if they were ready) or negative (if they were not) impacts. Study participants identified the potential risks and harms associated with completing interventions but felt that it was important to face and process trauma.\nSome elements of interventions were specific to the intervention type (e.g. faith-based interventions), or related to an experience of an intervention that held particular relevance to subgroups of survivors (e.g. minority groups); these issues could impact how individuals experienced delivering or receiving interventions.\nAuthors' conclusions\nWe had high or moderate confidence in all but one of our review findings. Further research in low- and middle-income settings, with male survivors of sexual abuse and violence and those from minority groups could strengthen the evidence for low and moderate confidence findings. We found that few interventions had published quantitative and qualitative evaluations. Since this QES has highlighted important aspects that could enable interventions to be more suitable for survivors, using a range of methodologies would provide valuable information that could enhance intervention uptake, completion and effectiveness. This study has shown that although survivors often found interventions difficult, they also appreciated that they needed to work through trauma, which they said resulted in a wide range of benefits. 
Therefore, listening to survivors and providing appropriate interventions, at the right time for them, can make a significant difference to their health and well-being.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked for studies that explored the experiences of survivors and professionals who took part in interventions that supported survivors of sexual abuse and violence, or family members who supported survivors who completed these interventions.\nWe included studies that:\n- treated survivors who were sexually abused when they were children or adults, or both;\n- involved participants of any age, gender, sexuality, ethnicity, or [dis]ability.\nWe included studies from any country and setting."
}
] |
query-laysum | 1067 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis (CF) is an inherited condition causing disease most noticeably in the lungs, digestive tract and pancreas. People with CF often have malnutrition and growth delay. Adequate nutritional supplementation does not improve growth optimally and hence an anabolic agent, recombinant human growth hormone (rhGH), has been proposed as a potential intervention. This is an update of a previously published review.\nObjectives\nTo evaluate the effectiveness and safety of rhGH therapy in improving lung function, quality of life and clinical status of children and young adults with CF.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Trials Register comprising references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. Date of latest search: 12 January 2021.\nWe also searched ongoing trials registers:\nclinicaltrials.gov from the United States - date of latest search 19 Jun 2021;\nWHO International Clinical Trials Registry Platform (ICTRP) - date of latest search 05 March 2018 (not available in 2021).\nWe conducted a search of relevant endocrine journals and proceedings of the Endocrinology Society meetings using Web of Science, Scopus and Proceedings First. Date of latest search: 21 Jun 2021.\nSelection criteria\nRandomised and quasi-randomised controlled trials of all preparations of rhGH compared to either no treatment, or placebo, or each other at any dose (high-dose and low-dose) or route and for any duration, in children or young adults (aged up to 25 years) diagnosed with CF (by sweat test or genetic testing).\nData collection and analysis\nTwo authors independently screened papers, extracted trial details and assessed their risk of bias. We assessed the quality of the evidence using the GRADE system.\nMain results\nWe included eight trials (291 participants, aged between five and 23 years) in the current version of the review. Seven trials compared standard-dose rhGH (approximately 0.3 mg/kg/week) to no treatment and one three-arm trial (63 participants) compared placebo, standard-dose rhGH (0.3 mg/kg/week) and high-dose rhGH (0.5 mg/kg/week). Six trials lasted for one year and two trials for six months. We found that rhGH treatment may improve some of the pulmonary function outcomes, but there was no difference between standard and high-dose levels (low-certainty evidence, limited by inconsistency across the trials, small number of participants and short duration of therapy). The trials show evidence of improvement in the anthropometric parameters (height, weight and lean body mass) with rhGH therapy, again no differences between dose levels. We found improvement in height for all comparisons (very low- to low-certainty evidence), but improvements in weight and lean body mass were only reported for standard-dose rhGH versus no treatment (very low-certainty evidence). There is some evidence indicating a change in the level of fasting blood glucose with rhGH therapy, however, it did not cross the clinical threshold for diagnosis of diabetes in the trials of short duration (low-certainty evidence). There is low- to very low-certainty evidence for improvement of pulmonary exacerbations with no further significant adverse effects, but this is limited by the short duration of trials and the small number of participants. 
One small trial provided inconsistent evidence on improvement in quality of life (very low-certainty evidence). There is limited evidence from three trials in improvements in exercise capacity (low-certainty evidence). None of the trials have systematically compared the expense of therapy on overall healthcare costs.\nAuthors' conclusions\nWhen compared with no treatment, rhGH therapy is effective in improving the intermediate outcomes in height, weight and lean body mass. Some measures of pulmonary function showed moderate improvement, but no consistent benefit was seen across all trials. The significant change in blood glucose levels, although not causing diabetes, emphasizes the need for careful monitoring of this adverse effect with therapy in a population predisposed to CF-related diabetes. No significant changes in quality of life, clinical status or side-effects were observed in this review due to the small number of participants. Long-term, well-designed randomised controlled trials of rhGH in individuals with CF are required prior to routine clinical use of rhGH in CF.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Results showed a modest improvement in height, weight and lean body mass between six and 12 months. However, there was no consistent evidence that rhGH treatment improves lung function, muscle strength, or quality of life. The trials were small and we did not find any evidence on changes in glucose metabolism or the long-term risk of diabetes due to the treatment. Given these results, we are not able to identify any clear benefit of therapy and believe that more research from well-designed, adequately powered clinical trials is needed."
}
] |
query-laysum | 1068 | [
{
"role": "user",
"content": "Abstract: Background\nSexual problems are common among people with cardiovascular disease. Although clinical guidelines recommend sexual counselling for patients and their partners, there is little evidence on its effectiveness.\nObjectives\nTo evaluate the effectiveness of sexual counselling interventions (in comparison to usual care) on sexuality-related outcomes in patients with cardiovascular disease and their partners.\nSearch methods\nWe searched CENTRAL, MEDLINE, EMBASE, and three other databases up to 2 March 2015 and two trials registers up to 3 February 2016.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs, including individual and cluster RCTs. We included studies that compared any intervention to counsel adult cardiac patients about sexual problems with usual care.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included three trials with 381 participants. We were unable to pool the data from the included studies due to the differences in interventions used; therefore we synthesised the trial findings narratively.\nTwo trials were conducted in the USA and one was undertaken in Israel. All trials included participants who were admitted to hospital with myocardial infarction (MI), and one trial also included participants who had undergone coronary artery bypass grafting. All trials followed up participants for a minimum of three months post-intervention; the longest follow-up timepoint was five months.\nOne trial (N = 92) tested an intensive (total five hours) psychotherapeutic sexual counselling intervention delivered by a sexual therapist. One trial (N = 115) used a 15-minute educational video plus written material on resuming sexual activity following a MI. One trial (N = 174) tested the addition of a component that focused on resumption of sexual activity following a MI within a hospital cardiac rehabilitation programme.\nThe quality of the evidence for all outcomes was very low.\nNone of the included studies reported any outcomes from partners.\nTwo trials reported sexual function. One trial compared intervention and control groups on 12 separate sexual function subscales and used a repeated measures analysis of variance (ANOVA) test. They reported statistically significant differences in favour of the intervention. One trial compared intervention and control groups using a repeated measures analysis of covariance (ANCOVA), and concluded: \"There were no significant differences between the two groups [for sexual function] at any of the time points\".\nTwo trials reported sexual satisfaction. In one trial, the authors compared sexual satisfaction between intervention and control and used a repeated measured ANOVA; they reported \"differences were reported in favour of the intervention\". One trial compared intervention and control with a repeated measures ANCOVA and reported: \"There were no significant differences between the two groups [for sexual satisfaction] at any of the timepoints\".\nAll three included trials reported the number of patients returning to sexual activity following MI. One trial found some evidence of an effect of sexual counselling on reported rate of return to sexual activity (yes/no) at four months after completion of the intervention (relative risk (RR) 1.71, 95% confidence interval (CI) 1.26 to 2.32; one trial, 92 participants, very low quality of evidence). 
Two trials found no evidence of an effect of sexual counselling on rate of return to sexual activity at 12-week (RR 1.01, 95% CI 0.94 to 1.09; one trial, 127 participants, very low quality of evidence) and three-month follow-up (RR 0.98, 95% CI 0.88 to 1.10; one trial, 115 participants, very low quality of evidence).\nTwo trials reported psychological well-being. In one trial, no scores were reported, but the trial authors stated: \"No treatment effects were observed on state anxiety as measured in three points in time\". In the other trial no scores were reported but, based on results of a repeated measures ANCOVA to compare intervention and control groups, the trial authors stated: \"The experimental group had significantly greater anxiety at one month post MI\". They also reported: \"There were no significant differences between the two groups [for anxiety] at any other time points\".\nOne trial reporting relationship satisfaction and one trial reporting quality of life found no differences between intervention and control.\nNo trial reported on satisfaction with how sexual issues were addressed in cardiac rehabilitation services.\nAuthors' conclusions\nWe found no high quality evidence to support the effectiveness of sexual counselling for sexual problems in patients with cardiovascular disease. There is a clear need for robust, methodologically rigorous, adequately powered RCTs to test the effectiveness of sexual counselling interventions for people with cardiovascular disease and their partners.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence was of very low quality. We judged the included studies to be at high risk of bias and study results were poorly reported. Bearing this in mind, the results of this review should be interpreted with caution."
}
] |
query-laysum | 1069 | [
{
"role": "user",
"content": "Abstract: Background\nNeonates who have undergone gastrointestinal surgery are particularly susceptible to infectious complications in the postoperative period. This may be due in part to disruption of the integrity of the gut and its altered intestinal microflora. Lactoferrin is a whey protein found in milk and is an important innate mammalian defence mechanism. Lactoferrin has been reported to have antimicrobial and anti-inflammatory properties. It has also been reported to help establish a healthy gut microflora and aid in the intestinal immune system. Lactoferrin supplementation has been reported to decrease sepsis in preterm infants. There may be a role for lactoferrin to reduce the incidence of sepsis, thus reducing morbidity and mortality and improving enteral feeding in postoperative term neonates.\nObjectives\nThe primary objective of this review was to evaluate the efficacy of administering lactoferrin on the incidence of sepsis and mortality in term neonates after gastrointestinal surgery.\nThe secondary objective was to assess the impact of administering lactoferrin on time to full enteral feeds, the intestinal microflora, duration of hospital stay, and mortality before discharge in the same population.\nSearch methods\nThe Cochrane Neonatal Information Specialist searched the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE, Embase Ovid, CINAHL, the WHO ICTRP and ClinicalTrials.gov trials registries. The date of the last search was February 2023. There were no restrictions to language, publication year or publication type. We checked references of potentially relevant studies and systematic reviews.\nSelection criteria\nWe planned to include randomised controlled trials that studied infants born at 37 or more weeks of gestation who had one or more episodes of gastrointestinal surgery within 28 days of birth, and compared administration of lactoferrin with a placebo.\nData collection and analysis\nWe used standard Cochrane methodological procedures. We planned to use the GRADE approach to assess the certainty of evidence for each outcome.\nMain results\nWe identified no published randomised controlled studies that assessed the efficacy of lactoferrin for the postoperative management of term neonates following gastrointestinal surgery.\nAuthors' conclusions\nThere is currently no evidence available from randomised controlled trials to show whether lactoferrin is effective or ineffective for the postoperative management of term neonates after gastrointestinal surgery. There is a need for randomised controlled trials to be performed to assess the role of lactoferrin in this setting.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies in which babies who had digestive tract surgery were assigned randomly to lactoferrin or a placebo (a 'dummy' treatment that does not contain any medicine but looks or tastes the same as the medicine being tested)."
}
] |
query-laysum | 1070 | [
{
"role": "user",
"content": "Abstract: Background\nIn epidemics of highly infectious diseases, such as Ebola, severe acute respiratory syndrome (SARS), or coronavirus (COVID-19), healthcare workers (HCW) are at much greater risk of infection than the general population, due to their contact with patients' contaminated body fluids. Personal protective equipment (PPE) can reduce the risk by covering exposed body parts. It is unclear which type of PPE protects best, what is the best way to put PPE on (i.e. donning) or to remove PPE (i.e. doffing), and how to train HCWs to use PPE as instructed.\nObjectives\nTo evaluate which type of full-body PPE and which method of donning or doffing PPE have the least risk of contamination or infection for HCW, and which training methods increase compliance with PPE protocols.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and CINAHL to 20 March 2020.\nSelection criteria\nWe included all controlled studies that evaluated the effect of full-body PPE used by HCW exposed to highly infectious diseases, on the risk of infection, contamination, or noncompliance with protocols. We also included studies that compared the effect of various ways of donning or doffing PPE, and the effects of training on the same outcomes.\nData collection and analysis\nTwo review authors independently selected studies, extracted data and assessed the risk of bias in included trials. We conducted random-effects meta-analyses were appropriate.\nMain results\nEarlier versions of this review were published in 2016 and 2019. In this update, we included 24 studies with 2278 participants, of which 14 were randomised controlled trials (RCT), one was a quasi-RCT and nine had a non-randomised design.\nEight studies compared types of PPE. Six studies evaluated adapted PPE. Eight studies compared donning and doffing processes and three studies evaluated types of training. Eighteen studies used simulated exposure with fluorescent markers or harmless microbes. In simulation studies, median contamination rates were 25% for the intervention and 67% for the control groups.\nEvidence for all outcomes is of very low certainty unless otherwise stated because it is based on one or two studies, the indirectness of the evidence in simulation studies and because of risk of bias.\nTypes of PPE\nThe use of a powered, air-purifying respirator with coverall may protect against the risk of contamination better than a N95 mask and gown (risk ratio (RR) 0.27, 95% confidence interval (CI) 0.17 to 0.43) but was more difficult to don (non-compliance: RR 7.5, 95% CI 1.81 to 31.1). In one RCT (59 participants) coveralls were more difficult to doff than isolation gowns (very low-certainty evidence). Gowns may protect better against contamination than aprons (small patches: mean difference (MD) −10.28, 95% CI −14.77 to −5.79). PPE made of more breathable material may lead to a similar number of spots on the trunk (MD 1.60, 95% CI −0.15 to 3.35) compared to more water-repellent material but may have greater user satisfaction (MD −0.46, 95% CI −0.84 to −0.08, scale of 1 to 5). 
According to three studies that tested more recently introduced full-body PPE ensembles, there may be no difference in contamination.\nModified PPE versus standard PPE\nThe following modifications to PPE design may lead to less contamination compared to standard PPE: sealed gown and glove combination (RR 0.27, 95% CI 0.09 to 0.78), a better fitting gown around the neck, wrists and hands (RR 0.08, 95% CI 0.01 to 0.55), a better cover of the gown-wrist interface (RR 0.45, 95% CI 0.26 to 0.78, low-certainty evidence), added tabs to grab to facilitate doffing of masks (RR 0.33, 95% CI 0.14 to 0.80) or gloves (RR 0.22, 95% CI 0.15 to 0.31).\nDonning and doffing\nUsing Centers for Disease Control and Prevention (CDC) recommendations for doffing may lead to less contamination compared to no guidance (small patches: MD −5.44, 95% CI −7.43 to −3.45). One-step removal of gloves and gown may lead to less bacterial contamination (RR 0.20, 95% CI 0.05 to 0.77) but not to less fluorescent contamination (RR 0.98, 95% CI 0.75 to 1.28) than separate removal. Double-gloving may lead to less viral or bacterial contamination compared to single gloving (RR 0.34, 95% CI 0.17 to 0.66) but not to less fluorescent contamination (RR 0.98, 95% CI 0.75 to 1.28). Additional spoken instruction may lead to fewer errors in doffing (MD −0.9, 95% CI −1.4 to −0.4) and to fewer contamination spots (MD −5, 95% CI −8.08 to −1.92). Extra sanitation of gloves before doffing with quaternary ammonium or bleach may decrease contamination, but not alcohol-based hand rub.\nTraining\nThe use of additional computer simulation may lead to fewer errors in doffing (MD −1.2, 95% CI −1.6 to −0.7). A video lecture on donning PPE may lead to better skills scores (MD 30.70, 95% CI 20.14 to 41.26) than a traditional lecture. Face-to-face instruction may reduce noncompliance with doffing guidance more (odds ratio 0.45, 95% CI 0.21 to 0.98) than providing folders or videos only.\nAuthors' conclusions\nWe found low- to very low-certainty evidence that covering more parts of the body leads to better protection but usually comes at the cost of more difficult donning or doffing and less user comfort. More breathable types of PPE may lead to similar contamination but may have greater user satisfaction. Modifications to PPE design, such as tabs to grab, may decrease the risk of contamination. For donning and doffing procedures, following CDC doffing guidance, a one-step glove and gown removal, double-gloving, spoken instructions during doffing, and using glove disinfection may reduce contamination and increase compliance. Face-to-face training in PPE use may reduce errors more than folder-based training.\nWe still need RCTs of training with long-term follow-up. We need simulation studies with more participants to find out which combinations of PPE and which doffing procedure protects best. Consensus on simulation of exposure and assessment of outcome is urgently needed. We also need more real-life evidence. Therefore, the use of PPE of HCW exposed to highly infectious diseases should be registered and the HCW should be prospectively followed for their risk of infection.\n\nGiven the provided abstract, please respond to the following query: \"Guidance on PPE use\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Following CDC guidance for apron or gown removal, or any instructions for removing PPE compared to an individual’s own preferences may reduce self-contamination. Removing gown and gloves in one step, using two pairs of gloves, and cleaning gloves with bleach or disinfectant (but not alcohol) may also reduce contamination."
}
] |
query-laysum | 1071 | [
{
"role": "user",
"content": "Abstract: Background\nOut-of-hospital cardiac arrest (OHCA) is a major cause of death worldwide. Cardiac arrest can be subdivided into asphyxial and non asphyxial etiologies. An asphyxia arrest is caused by lack of oxygen in the blood and occurs in drowning and choking victims and in other circumstances. A non asphyxial arrest is usually a loss of functioning cardiac electrical activity. Cardiopulmonary resuscitation (CPR) is a well-established treatment for cardiac arrest. Conventional CPR includes both chest compressions and ‘rescue breathing’ such as mouth-to-mouth breathing. Rescue breathing is delivered between chest compressions using a fixed ratio, such as two breaths to 30 compressions or can be delivered asynchronously without interrupting chest compression. Studies show that applying continuous chest compressions is critical for survival and interrupting them for rescue breathing might increase risk of death. Continuous chest compression CPR may be performed with or without rescue breathing.\nObjectives\nTo assess the effects of continuous chest compression CPR (with or without rescue breathing) versus conventional CPR plus rescue breathing (interrupted chest compression with pauses for breaths) of non-asphyxial OHCA.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 1 2017); MEDLINE (Ovid) (from 1985 to February 2017); Embase (1985 to February 2017); Web of Science (1985 to February 2017). We searched ongoing trials databases including controlledtrials.com and clinicaltrials.gov. We did not impose any language or publication restrictions.\nSelection criteria\nWe included randomized and quasi-randomized studies in adults and children suffering non-asphyxial OHCA due to any cause. Studies compared the effects of continuous chest compression CPR (with or without rescue breathing) with interrupted CPR plus rescue breathing provided by rescuers (bystanders or professional CPR providers).\nData collection and analysis\nTwo authors extracted the data and summarized the effects as risk ratios (RRs), adjusted risk differences (ARDs) or mean differences (MDs). We assessed the quality of evidence using GRADE.\nMain results\nWe included three randomized controlled trials (RCTs) and one cluster-RCT (with a total of 26,742 participants analysed). We identified one ongoing study. While predominantly adult patients, one study included children.\nUntrained bystander-administered CPR\nThree studies assessed CPR provided by untrained bystanders in urban areas of the USA, Sweden and the UK. Bystanders administered CPR under telephone instruction from emergency services. There was an unclear risk of selection bias in two trials and low risk of detection, attrition, and reporting bias in all three trials. 
Survival outcomes were unlikely to be affected by the unblinded design of the studies.\nWe found high-quality evidence that continuous chest compression CPR without rescue breathing improved participants’ survival to hospital discharge compared with interrupted chest compression with pauses for rescue breathing (ratio 15:2) by 2.4% (14% versus 11.6%; RR 1.21, 95% confidence interval (CI) 1.01 to 1.46; 3 studies, 3031 participants).\nOne trial reported survival to hospital admission, but the number of participants was too low to be certain about the effects of the different treatment strategies on survival to admission (RR 1.18, 95% CI 0.94 to 1.48; 1 study, 520 participants; moderate-quality evidence).\nThere were no data available for survival at one year, quality of life, return of spontaneous circulation or adverse effects.\nThere was insufficient evidence to determine the effect of the different strategies on neurological outcomes at hospital discharge (RR 1.25, 95% CI 0.94 to 1.66; 1 study, 1286 participants; moderate-quality evidence). The proportion of participants categorized as having good or moderate cerebral performance was 11% following treatment with interrupted chest compression plus rescue breathing compared with 10% to 18% for those treated with continuous chest compression CPR without rescue breathing.\nCPR administered by a trained professional\nIn one trial that assessed OHCA CPR administered by emergency medical service professionals (EMS), 23,711 participants received either continuous chest compression CPR (100/minute) with asynchronous rescue breathing (10/minute) or interrupted chest compression with pauses for rescue breathing (ratio 30:2). The study was at low risk of bias overall.\nAfter OHCA, risk of survival to hospital discharge is probably slightly lower for continuous chest compression CPR with asynchronous rescue breathing compared with interrupted chest compression plus rescue breathing (9.0% versus 9.7%), with an adjusted risk difference (ARD) of -0.7% (95% CI -1.5% to 0.1%; moderate-quality evidence).\nThere is high-quality evidence that survival to hospital admission is 1.3% lower with continuous chest compression CPR with asynchronous rescue breathing compared with interrupted chest compression plus rescue breathing (24.6% versus 25.9%; ARD -1.3%, 95% CI -2.4% to -0.2%).\nSurvival at one year and quality of life were not reported.\nReturn of spontaneous circulation is likely to be slightly lower in people treated with continuous chest compression CPR plus asynchronous rescue breathing (24.2% versus 25.3%; ARD -1.1%, 95% CI -2.4 to 0.1; high-quality evidence).\nThere is high-quality evidence of little or no difference in neurological outcome at discharge between these two interventions (7.0% versus 7.7%; ARD -0.6%, 95% CI -1.4 to 0.1).\nRates of adverse events were 54.4% in those treated with continuous chest compressions plus asynchronous rescue breathing versus 55.4% in people treated with interrupted chest compression plus rescue breathing, with an ARD of -1% (95% CI -2.3 to 0.4; moderate-quality evidence).\nAuthors' conclusions\nFollowing OHCA, we have found that bystander-administered chest compression-only CPR, supported by telephone instruction, increases the proportion of people who survive to hospital discharge compared with conventional interrupted chest compression CPR plus rescue breathing. 
Some uncertainty remains about how well neurological function is preserved in this population and there is no information available regarding adverse effects.\nWhen CPR was performed by EMS providers, continuous chest compressions plus asynchronous rescue breathing did not result in higher rates for survival to hospital discharge compared to interrupted chest compression plus rescue breathing. The results indicate slightly lower rates of survival to admission or discharge, favourable neurological outcome and return of spontaneous circulation observed following continuous chest compression. Adverse effects are probably slightly lower with continuous chest compression.\nIncreased availability of automated external defibrillators (AEDs), and AED use in CPR need to be examined, and also whether continuous chest compression CPR is appropriate for paediatric cardiac arrest.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "When CPR was performed by bystanders, we found that more people survived until discharge from hospital after chest compression alone than they did following interrupted chest compression with pauses at a fixed ratio for rescue breathing (15 compressions to 2 breaths) (14% versus 11.6%). For the outcomes of survival to hospital admission and neurological outcomes, we did not have sufficient data to be certain that either strategy was better. No data was available for adverse effects, quality of life or survival at one-year.\nWhen CPR was performed by EMS professionals, we found that survival to hospital discharge was slightly lower with continuous chest compressions (100/minute) plus asynchronous rescue breathing (10/minutes) CPR compared with interrupted chest compression plus rescue breathing. Around 9.7% of people lived when they received interrupted chest compression plus rescue breathing compared with 9% of people who received continuous chest compression plus asynchronous rescue breathing.\nThe number of people who survived to hospital admission was slightly higher in those treated with interrupted chest compression plus rescue breathing compared with continuous chest compression plus asynchronous rescue breathing (25.9% versus 24.6%). There was little or no difference in neurological outcomes. The proportion of people who experienced adverse events was probably similar with 55.4% people treated with interrupted chest compression plus rescue breathing experiencing an adverse event compared with 54.4% in those treated with continuous chest compression asynchronous rescue breathing."
}
] |
query-laysum | 1072 | [
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease (SCD) is one of the commonest severe monogenic disorders in the world, due to the inheritance of two abnormal haemoglobin (beta globin) genes. SCD can cause severe pain, significant end-organ damage, pulmonary complications, and premature death. Surgical interventions are more common in people with SCD, and occur at much younger ages than in the general population. Blood transfusions are frequently used prior to surgery and several regimens are used but there is no consensus over the best method or the necessity of transfusion in specific surgical cases. This is an update of a Cochrane Review.\nObjectives\nTo determine whether there is evidence that preoperative blood transfusion in people with SCD undergoing elective or emergency surgery reduces mortality and perioperative or sickle cell-related serious adverse events.\nTo compare the effectiveness of different transfusion regimens (aggressive or conservative) if preoperative transfusions are indicated in people with SCD.\nSearch methods\nWe searched for relevant trials in the Cochrane Library, MEDLINE (from 1946), Embase (from 1974), the Transfusion Evidence Library (from 1980), and ongoing trial databases; all searches current to 28 January 2020\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register: 19 September 2019.\nSelection criteria\nAll randomised controlled trials and quasi-randomised controlled trials comparing preoperative blood transfusion regimens to different regimens or no transfusion in people with SCD undergoing elective or emergency surgery. There was no restriction by outcomes examined, language or publication status.\nData collection and analysis\nTwo authors independently assessed trial eligibility and the risk of bias and extracted data.\nMain results\nThree trials with 990 participants were eligible for inclusion in the review. There were no ongoing trials identified. These trials were conducted between 1988 and 2011. The majority of people included had haemoglobin (Hb) SS SCD. The majority of surgical procedures were considered low or intermediate risk for developing sickle cell-related complications.\nAggressive versus simple red blood cell transfusions\nOne trial (551 participants) compared an aggressive transfusion regimen (decreasing sickle haemoglobin to less than 30%) to a simple transfusion regimen (increasing haemoglobin to 100 g/L). This trial re-randomised participants and therefore quantitative analysis was only possible on two subsets of data: participants undergoing cholecystectomy (230 participants); and participants undergoing tonsillectomy or adenoidectomy surgeries (107 participants). Data were not combined as we do not know if any participant received both surgeries. Overall, the quality of the evidence was very low across different outcomes according to GRADE methodology. This was due to the trial being at high risk of bias primarily due to lack of blinding, indirectness and the outcome estimates being imprecise. Cholecystectomy subgroup results are reported in the abstract. Results for both subgroups were similar.\nThere was no difference in all-cause mortality between people receiving aggressive transfusions and those receiving conservative transfusions. 
No deaths occurred in either subgroup.\nThere were no differences between the aggressive transfusion group and conservative transfusion group in the number of people developing:\n• an acute chest syndrome, risk ratio (RR) 0.84 (95% confidence interval (CI) 0.38 to 1.84) (one trial, 230 participants, very low-quality evidence);\n• vaso-occlusive crisis, risk ratio 0.30 (95% CI 0.09 to 1.04) (one trial, 230 participants, very low quality evidence);\n• serious infection, risk ratio 1.75 (95% CI 0.59 to 5.18) (one trial, 230 participants, very low-quality evidence);\n• any perioperative complications, RR 0.75 (95% CI 0.36 to 1.55) (one trial, 230 participants, very low-quality evidence);\n• a transfusion-related complication, RR 1.85 (95% CI 0.89 to 3.88) (one trial, 230 participants, very low-quality evidence).\nPreoperative transfusion versus no preoperative transfusion\nTwo trials (434 participants) compared a preoperative transfusion plus standard care to a group receiving standard care. Overall, the quality of the evidence was low to very low across different outcomes according to GRADE methodology. This was due to the trials being at high risk of bias due to lack of blinding, and outcome estimates being imprecise. One trial was stopped early because more people in the no transfusion arm developed an acute chest syndrome.\nThere was no difference in all-cause mortality between people receiving preoperative transfusions and those receiving no preoperative transfusions (two trials, 434 participants, no deaths occurred).\nThere was significant heterogeneity between the two trials in the number of people developing an acute chest syndrome, a meta-analysis was therefore not performed. One trial showed a reduced number of people developing acute chest syndrome between people receiving preoperative transfusions and those receiving no preoperative transfusions, risk ratio 0.11 (95% confidence interval 0.01 to 0.80) (65 participants), whereas the other trial did not, RR 4.81 (95% CI 0.23 to 99.61) (369 participants).\nThere were no differences between the preoperative transfusion groups and the groups without preoperative transfusion in the number of people developing:\n• a vaso-occlusive crisis, Peto odds ratio (OR) 1.91 (95% confidence interval 0.61 to 6.04) (two trials, 434 participants, very low-quality evidence).\n• a serious infection, Peto OR 1.29 (95% CI 0.29 to 5.71) (two trials, 434 participants, very low-quality evidence);\n• any perioperative complications, RR 0.24 (95% CI 0.03 to 2.05) (one trial, 65 participants, low-quality evidence).\nThere was an increase in the number of people developing circulatory overload in those receiving preoperative transfusions compared to those not receiving preoperative transfusions in one of the two trials, and no events were seen in the other trial (no meta-analysis performed).\nAuthors' conclusions\nThere is insufficient evidence from randomised trials to determine whether conservative preoperative blood transfusion is as effective as aggressive preoperative blood transfusion in preventing sickle-related or surgery-related complications in people with HbSS disease. There is very low quality evidence that preoperative blood transfusion may prevent development of acute chest syndrome.\nDue to lack of evidence this review cannot comment on management for people with HbSC or HbSβ+ disease or for those with high baseline haemoglobin concentrations.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the medical literature to 28 January 2020. We included three trials with 990 people in this review. One trial compared aggressive transfusion to conservative transfusion. Two trials compared aggressive or conservative transfusion before surgery to no transfusion. The majority of people within the trials had one form of sickle cell disease (HbSS). The majority of the operations were those considered to be at low or intermediate risk for causing sickle cell-related complications.\nTwo of the three trials received government funding, the third trial did not report the funding source."
}
] |
query-laysum | 1073 | [
{
"role": "user",
"content": "Abstract: Background\nIntimate partner abuse is common worldwide, damaging the short- and long-term physical, mental, and emotional health of survivors and children. Advocacy may contribute to reducing abuse, empowering women to improve their situation by providing informal counselling and support for safety planning and increasing access to different services. Advocacy may be a stand-alone service, accepting referrals from healthcare providers, or part of a multi-component (and possibly multi-agency) intervention provided by service staff or others.\nObjectives\nTo assess the effects of advocacy interventions within or outside healthcare settings in women who have experienced intimate partner abuse.\nSearch methods\nIn April 2015, we searched CENTRAL, Ovid MEDLINE, EMBASE, and 10 other databases. We also searched WHO ICTRP, mRCT, and UK Clinical Research Network (UKCRN), and examined relevant websites and reference lists with forward citation tracking of included studies. For the original review we handsearched six key journals. We also contacted first authors of eligible papers and experts in the field.\nSelection criteria\nRandomised or quasi-randomised controlled trials comparing advocacy interventions for women with experience of intimate partner abuse versus no intervention or usual care (if advocacy was minimal and fewer than 20% of women received it).\nData collection and analysis\nTwo review authors independently assessed risk of bias and undertook data extraction. We contacted authors for missing information needed to calculate statistics for the review and looked for adverse events.\nMain results\nWe included 13 trials involving 2141 participants aged 15 to 65 years, frequently having low socioeconomic status.\nThe studies were quite heterogeneous in terms of methodology, study processes and design, including with regard to the duration of follow-up (postintervention to three years), although this was not associated with differences in effect. The studies also had considerable clinical heterogeneity in relation to staff delivering advocacy; setting (community, shelter, antenatal, healthcare); advocacy intensity (from 30 minutes to 80 hours); and abuse severity. Three trials evaluated advocacy within multi-component interventions. Eleven measured some form of abuse (eight scales), six assessed quality of life (three scales), and six measured depression (three scales). Countries and ethnic groups varied (one or more minority ethnic groups in the USA or UK, and local populations in Hong Kong and Peru). Setting was associated with intensity and duration of advocacy.\nRisk of bias was high in five studies, moderate in five, and low in three. The quality of evidence (considering multiple factors such as risk of bias, study size, missing data) was moderate to low for brief advocacy and very low for intensive advocacy.\nIncidence of abuse\nPhysical abuse\nModerate quality pooled data from two healthcare studies (moderate risk of bias) and one community study (low risk of bias), all with 12-month follow-up data, showed no effect on physical abuse for brief (< 12 hours) advocacy interventions (standardised mean difference (SMD) 0.00, 95% confidence interval (CI) - 0.17 to 0.16; n = 558). One antenatal study (low risk of bias) showed an association between brief advocacy and reduced minor physical abuse at one year (mean difference (MD) change - 1.00, 95% CI - 1.82 to - 0.18; n = 110). 
An antenatal, multi-component study showed a greater likelihood of physical abuse ending (odds ratio (OR) 0.42, 95% CI 0.23 to 0.75) immediately after advocacy (number needed to treat (NNT) = 8); we cannot exclude impact from other components.\nLow to very low quality evidence from two intensive advocacy trials (12 hours plus duration) showed reduced severe physical abuse in women leaving a shelter at 24 months (OR 0.39, 95% CI 0.20 to 0.77; NNT = 8), but not at 12 or 36 months.\nSexual abuse\nMeta-analysis of two studies (n = 239) showed no effect of advocacy on sexual abuse (SMD - 0.12, 95% CI - 0.37 to 0.14), agreeing with the change score (MD - 0.07, 95% CI - 0.30 to 0.16) from a third study and the OR (0.96, 95% CI 0.44 to 2.12) from a fourth antenatal, multi-component study.\nEmotional abuse\nOne study in antenatal care, rated at low risk of bias, showed reduced emotional abuse at ≤ 12-month follow-up (MD (change score) - 4.24, 95% CI - 6.42 to - 2.06; n = 110).\nPsychosocial health\nQuality of life\nMeta-analysis of two studies (high risk of bias) showed intensive advocacy slightly improved overall quality of life of women recruited from shelters (MD 0.23, 95% CI 0.00 to 0.46; n = 343) at 12-month follow-up, with greater improvement in perceived physical quality of life from a primary care study (high risk of bias; MD 4.90, 95% CI 0.98 to 8.82) immediately postintervention.\nDepression\nMeta-analysis of two studies in healthcare settings, one at high risk of bias and one at moderate risk, showed that fewer women developed depression (OR 0.31, 95% CI 0.15 to 0.65; n = 149; NNT = 4) with brief advocacy. One study at high risk of bias reported a slight reduction in depression in pregnant women immediately after the intervention (OR 0.51, 95% CI 0.20 to 1.29; n = 103; NNT = 8).\nThere was no evidence that intensive advocacy reduced depression at ≤ 12-month follow-up (MD - 0.14, 95% CI - 0.33 to 0.05; 3 studies; n = 446) or at two years (SMD − 0.12, 95% CI − 0.36 to 0.12; 1 study; n = 265).\nAdverse effects\nTwo women died, one who was murdered by her partner and one who committed suicide. No evidence links either death to study participation.\nAuthors' conclusions\nResults suggest some benefits from advocacy. However, most studies were underpowered. Clinical and methodological heterogeneity largely precluded pooling of trials. Therefore, there is uncertainty about the magnitude of benefit, the impact of abuse severity, and the setting.\nBased on the evidence reviewed, intensive advocacy may improve short-term quality of life and reduce physical abuse one to two years after the intervention for women recruited from domestic violence shelters or refuges. Brief advocacy may provide small short-term mental health benefits and reduce abuse, particularly in pregnant women and for less severe abuse.\n\nGiven the provided abstract, please respond to the following query: \"Key outcomes\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Quality of the evidence\nFive studies had design flaws that entailed a high risk of biasing their results, while five had moderate risk and three studies had low risk.\nPhysical abuse\nAfter one year, brief advocacy had no effect in two healthcare studies of moderate quality or in one community study at low risk of bias, but it reduced minor abuse in another antenatal care study (low risk of bias). Another antenatal study showed reduced abuse immediately after brief advocacy (with one woman likely to benefit if eight received the advocacy), but women were also treated for depression, which may have affected results. Two studies provided weak evidence that intensive advocacy reduces physical abuse up to two years after the intervention (with one in eight women likely to benefit).\nSexual abuse\nFour studies failed to show benefits from advocacy for sexual abuse.\nEmotional abuse\nOne antenatal care study (low risk of bias) reported reduced emotional abuse at 12 months after advocacy.\nDepression\nBrief advocacy prevented depression in abused women attending healthcare services and pregnant women immediately after advocacy (with one woman likely to benefit for every four to eight treated). Intensive advocacy did not reduce depression in shelter women followed up at 12 and 24 months. The moderate-to-low quality evidence came mostly from studies with a low risk of bias.\nQuality of life\nThree brief advocacy trials found no benefit on quality of life. Intensive advocacy showed a weak benefit in two studies in domestic violence shelters/refuges. A primary care study (high risk of bias) showed improved motivation to do daily tasks immediately after advocacy.\nDeaths\nTwo women died: one was murdered by her partner and one committed suicide; however, investigators did not consider these deaths to be related to the studies."
}
] |
query-laysum | 1074 | [
{
"role": "user",
"content": "Abstract: Background\nThere is strong evidence that our current consumption of salt is a major factor in the development of increased blood pressure (BP) and that a reduction in our salt intake lowers BP, whether BP levels are normal or raised initially. Effective control of BP in people with diabetes lowers the risk of strokes, heart attacks and heart failure and slows the progression of chronic kidney disease (CKD) in people with diabetes. This is an update of a review first published in 2010.\nObjectives\nTo evaluate the effect of altered salt intake on BP and markers of cardiovascular disease and of CKD in people with diabetes.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies up to 31 March 2022 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register were identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nWe included randomised controlled trials (RCTs) of altered salt intake in individuals with type 1 and type 2 diabetes. Studies were included when there was a difference between low and high sodium intakes of at least 34 mmol/day.\nData collection and analysis\nTwo authors independently assessed studies and resolved differences by discussion. We calculated mean effect sizes as mean difference (MD) and 95% confidence intervals (CI) using the random-effects model. Confidence in the evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nThirteen RCTs (313 participants), including 21 comparisons (studies), met our inclusion criteria. One RCT (two studies) was added to this review update. Participants included 99 individuals with type 1 diabetes and 214 individuals with type 2 diabetes. Two RCTs (four studies) included some participants with reduced overall kidney function. The remaining studies either reported that participants with reduced glomerular filtration rate (GFR) were excluded from the study or only included participants with microalbuminuria and normal GFR. Five studies used a parallel study design, and 16 used a cross-over design. Studies were at high risk of bias for most criteria. Random sequence generation and allocation concealment were adequate in only three and two studies, respectively. One study was at low risk of bias for blinding of participants and outcome assessment, but no studies were at low risk for selective reporting. Twelve studies reported non-commercial funding sources, three reported conflicts of interest, and eight reported adequate washout between interventions in cross-over studies.\nThe median net reduction in 24-hour urine sodium excretion (24-hour UNa) in seven long-term studies (treatment duration four to 12 weeks) was 76 mmol (range 51 to 124 mmol), and in 10 short-term studies (treatment duration five to seven days) was 187 mmol (range 86 to 337 mmol). Data were only available graphically in four studies. In long-term studies, reduced sodium intake may lower systolic BP (SBP) by 6.15 mm Hg (7 studies: 95% CI -9.27 to -3.03; I² = 12%), diastolic BP (DBP) by 3.41 mm Hg (7 studies: 95% CI -5.56 to -1.27; I² = 41%) and mean arterial pressure (MAP) by 4.60 mm Hg (4 studies: 95% CI -7.26 to -1.94; I² = 28%). 
In short-term studies, low sodium intake may reduce SBP by 8.43 mm Hg (5 studies: 95% CI -14.37 to -2.48; I² = 88%), DBP by 2.95 mm Hg (5 studies: 95% CI -4.96 to -0.94; I² = 70%) and MAP by 2.37 mm Hg (9 studies: 95% CI -4.75 to -0.01; I² = 65%). There was considerable heterogeneity in most analyses but particularly among short-term studies. All analyses were considered to be of low certainty evidence.\nSBP, DBP and MAP reductions may not differ between hypertensive and normotensive participants or between individuals with type 1 or type 2 diabetes. In hypertensive participants, SBP, DBP and MAP may be reduced by 6.45, 3.15 and 4.88 mm Hg, respectively, while in normotensive participants, they may be reduced by 8.43, 2.95 and 2.15 mm Hg, respectively (all low certainty evidence). SBP, DBP and MAP may be reduced by 7.35, 3.04 and 4.30 mm Hg, respectively, in participants with type 2 diabetes and by 7.35, 3.20, and 0.08 mm Hg, respectively, in participants with type 1 diabetes (all low certainty evidence).\nEight studies provided measures of urinary protein excretion before and after salt restriction; four reported a reduction in urinary albumin excretion with salt restriction. Pooled analyses showed no changes in GFR (12 studies: MD -1.87 mL/min/1.73 m², 95% CI -5.05 to 1.31; I² = 32%) or HbA1c (6 studies: MD -0.62, 95% CI -1.49 to 0.26; I² = 95%) with salt restriction (low certainty evidence). Body weight was reduced in studies lasting one to two weeks but not in studies lasting for longer periods (low certainty evidence). Adverse effects were reported in only one study; 11% and 21% developed postural hypotension on the low-salt diet and the low-salt diet combined with hydrochlorothiazide, respectively.\nAuthors' conclusions\nThis systematic review shows an important reduction in SBP and DBP in people with diabetes with normal GFR during short periods of salt restriction, similar to that obtained with single drug therapy for hypertension. These data support the international recommendations that people with diabetes with or without hypertension or evidence of kidney disease should reduce salt intake to less than 5 g/day (2 g sodium).\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Lowering dietary salt intake to the recommended levels of 5 g/day or less would benefit people with diabetes by lowering BP by similar amounts to a single BP medication."
}
] |
query-laysum | 1075 | [
{
"role": "user",
"content": "Abstract: Background\nIn low- and middle-income countries (LMICs), health services are under-utilised, and several studies have reported improvements in neonatal outcomes following health education imparted to mothers in homes, at health units, or in hospitals. However, evaluating health educational strategy to deliver newborn care, such as one-to-one counselling or group counselling via peer or support groups, or delivered by health professionals, requires rigorous assessment of methodological design and quality, as well as assessment of cost-effectiveness, affordability, sustainability, and reproducibility in diverse health systems.\nObjectives\nTo compare a community health educational strategy versus no strategy or the existing approach to health education on maternal and newborn care in LMICs, as imparted to mothers or their family members specifically in community settings during the antenatal and/or postnatal period, in terms of effectiveness for improving neonatal health and survival (i.e. neonatal mortality, neonatal morbidity, access to health care, and cost).\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 4), in the Cochrane Library, MEDLINE via PubMed (1966 to 2 May 2017), Embase (1980 to 2 May 2017), and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (1982 to 2 May 2017). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nCommunity-based randomised controlled, cluster-randomised, or quasi-randomised controlled trials.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted the data. We assessed the quality of evidence using the GRADE method and prepared 'Summary of findings' tables.\nMain results\nWe included in this review 33 original trials (reported in 62 separate articles), which were conducted across Africa and Central and South America, with most reported from Asia, specifically India, Pakistan, and Bangladesh. Of the 33 community educational interventions provided, 16 included family members in educational counselling, most frequently the mother-in-law or the expectant father. Most studies (n = 14) required one-to-one counselling between a healthcare worker and a mother, and 12 interventions involved group counselling for mothers and occasionally family members; the remaining seven incorporated components of both counselling methods.\nOur analyses show that community health educational interventions had a significant impact on reducing overall neonatal mortality (risk ratio (RR) 0.87, 95% confidence interval (CI) 0.78 to 0.96; random-effects model; 26 studies; n = 553,111; I² = 88%; very low-quality evidence), early neonatal mortality (RR 0.74, 95% CI 0.66 to 0.84; random-effects model; 15 studies that included 3 subsets from 3 studies; n = 321,588; I² = 86%; very low-quality evidence), late neonatal mortality (RR 0.54, 95% CI 0.40 to 0.74; random-effects model; 11 studies; n = 186,643; I² = 88%; very low-quality evidence), and perinatal mortality (RR 0.83, 95% CI 0.75 to 0.91; random-effects model; 15 studies; n = 262,613; I² = 81%; very low-quality evidence). 
Moreover, community health educational interventions increased utilisation of any antenatal care (RR 1.16, 95% CI 1.11 to 1.22; random-effects model; 18 studies; n = 307,528; I² = 96%) and initiation of breastfeeding (RR 1.56, 95% CI 1.37 to 1.77; random-effects model; 19 studies; n = 126,375; I² = 99%). In contrast, community health educational interventions were found to have a non-significant impact on use of modern contraceptives (RR 1.10, 95% CI 0.86 to 1.41; random-effects model; 3 studies; n = 22,237; I² = 80%); presence of skilled birth attendance at birth (RR 1.09, 95% CI 0.94 to 1.25; random-effects model; 10 studies; n = 117,870; I² = 97%); utilisation of clean delivery kits (RR 4.44, 95% CI 0.71 to 27.76; random-effects model; 2 studies; n = 17,087; I² = 98%); and care-seeking (RR 1.11, 95% CI 0.97 to 1.27; random-effects model; 7 studies; n = 46,154; I² = 93%).\nCost-effectiveness analysis conducted in seven studies demonstrated that the cost-effectiveness for intervention packages ranged between USD 910 and USD 11,975 for newborn lives saved and newborn deaths averted. For averted disability-adjusted life-year, costs ranged from USD 79 to USD 146, depending on the intervention strategy; for cost per year of lost lives averted, the most effective strategy was peer counsellors, and the cost was USD 33.\nAuthors' conclusions\nThis review offers encouraging evidence on the value of integrating packages of interventions with educational components delivered by a range of community workers in group settings in LMICs, with groups consisting of mothers, and additional education for family members, for improved neonatal survival, especially early and late neonatal survival.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "A total of 33 experimental studies were conducted across Africa and Central and South America, with most reported from Asia, specifically India, Pakistan, and Bangladesh. Of the 33 community educational interventions, 16 required involvement of family members, most frequently the mother-in-law or the expectant father. Most studies (n = 14) involved one-to-one counselling between a range of community healthcare workers and mothers, and 12 involved group counselling consisting predominantly of mothers, with family members included occasionally; the remaining seven had components of both one-to-one and group counselling."
}
] |
query-laysum | 1076 | [
{
"role": "user",
"content": "Abstract: Background\nA significant number of patients who suffer with anxiety and related disorders (that is post-traumatic stress disorder (PTSD), social anxiety disorder (SAnD), panic disorder with or without agoraphobia (PD), specific phobia (SPh) and obsessive compulsive disorder (OCD)) fail to respond optimally to first-line treatment with medication or cognitive and behavioural therapies. The addition of d-cycloserine (DCS) to cognitive and behavioural therapies may improve treatment response by impacting the glutamatergic system. This systematic review aimed to investigate the effects of adding DCS to cognitive and behavioural therapies by synthesising data from relevant randomised controlled trials and following the guidelines recommended by Cochrane.\nObjectives\nTo assess the effect of DCS augmentation of cognitive and behavioural therapies compared to placebo augmentation of cognitive and behavioural therapies in the treatment of anxiety and related disorders. Additionally, to assess the efficacy and tolerability of DCS across different anxiety and related disorders.\nSearch methods\nThis review fully incorporates studies identified from a search of the Cochrane Depression, Anxiety and Neurosis Controlled Trials Register (CCDANCTR) to 12 March 2015. This register includes relevant randomised controlled trials (RCTs) from: the Cochrane Library (all years), EMBASE (1974 to date), MEDLINE (1950 to date), PsycINFO (1967 to date), the World Health Organization’s trials portal (ICTRP) and ClinicalTrials.gov . Reference lists from previous meta-analyses and reports of RCTs were also checked. No restrictions were placed on language, setting, date or publication status.\nSelection criteria\nAll RCTs of DCS augmentation of cognitive and behavioural therapies versus placebo augmentation of cognitive and behavioural therapies for anxiety and related disorders were included.\nData collection and analysis\nTwo authors (RO and TA) independently assessed RCTs for eligibility and inclusion, extracted outcomes and risk of bias data and entered these into a customised extraction form. Investigators were contacted to obtain missing data. In addition, data entry and analysis were performed by two review authors (KSW and HB).\nMain results\nTwenty-one published RCTs, with 788 participants in outpatient settings, were included in the review. Sixteen studies had an age range of 18 to 75 years, while four investigated paediatric populations aged 8 to 17 years and one included children, adolescents and adults. The 21 RCTs investigated OCD (number of RCTs (N) = 6), PTSD (N = 5), SAnD (N = 5), SPh (N = 3) and PD (N = 2). Most information from the studies was rated as having either low risk or unclear risk of bias.\nThere was no evidence of a difference between DCS augmentation of cognitive and behavioural therapies and placebo augmentation of cognitive and behavioural therapies for the treatment of anxiety and related disorders in adults at the endpoint (treatment responders, N = 9, risk ratio (RR) 1.10; 95% confidence interval (CI) 0.89 to 1.34; number of participants (n) = 449; low quality evidence) and between 1 and 12 months follow-up (N = 7, RR 1.08; 95% CI 0.90 to 1.31; n = 383). 
DCS augmentation of cognitive and behavioural therapies was not superior to placebo augmentation of cognitive and behavioural therapies for children and adolescents, both at the endpoint (N = 4, RR 1.01; 95% CI 0.78 to 1.31; n = 121; low quality evidence) and between 3 and 12 months follow-up (N = 3, RR 0.86; 95% CI 0.67 to 1.09; n = 91).\nThere was no evidence of a difference in treatment acceptability for DCS augmentation of cognitive and behavioural therapies compared with placebo augmentation of cognitive and behavioural therapies in adults (N = 16, RR 0.88; 95% CI 0.61 to 1.25; n = 740), nor in children and adolescents (N = 4, RR 0.90; 95% CI 0.17 to 4.69; n = 131). These conclusions were based on moderate quality evidence for adults, and very low quality evidence for children and adolescents. Although the observed difference was small, it is noteworthy that there was a high efficacy of exposure-based therapies alone in the included trials. Due to the limited number of studies, subgroup analysis of moderating factors for clinical and methodological effect could not take place.\nAuthors' conclusions\nThis review found no evidence of a difference between DCS augmentation of cognitive and behavioural therapies and placebo augmentation of cognitive and behavioural therapies for treating anxiety and related disorders in children, adolescents and adults. These findings are based on low quality evidence from heterogeneous studies with small sample sizes and incomplete data for clinical response, which precludes us from drawing conclusions on the use of DCS augmentation of cognitive and behavioural therapies at this stage. Given that there are some promising preliminary data from individual studies, further research is necessary to assess DCS compared with placebo augmentation of cognitive and behavioural therapies, and determine mechanisms of action as well as magnitude of effect in anxiety and related disorders.\n\nGiven the provided abstract, please respond to the following query: \"What should happen next?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "More trials are needed to enable a clearer understanding of the effect of treatment with DCS in combination with cognitive and behavioural therapies."
}
] |
query-laysum | 1077 | [
{
"role": "user",
"content": "Abstract: Background\nDiagnosis of endometrial (womb) cancer is normally made at an early stage, as most women with the disease experience abnormal vaginal bleeding, which prompts them to seek medical advice. However, delays in presentation and referral can result in delay in diagnosis and management, which can lead to unfavourable treatment outcomes. This is particularly a problem for pre- and peri-menopausal women. Providing educational information to women and healthcare providers regarding symptoms relating to endometrial cancer may raise awareness of the disease and reduce delayed treatment.\nObjectives\nTo assess the effectiveness of health education interventions targeting healthcare providers, or individuals, or both, to promote early presentation and referral for women with endometrial cancer symptoms.\nSearch methods\nWe searched CENTRAL, MEDLINE and Embase. We also searched registers of clinical trials, abstracts of scientific meetings and reference lists of review articles.\nSelection criteria\nWe planned to include randomised controlled trials (RCTs), both individually randomised and cluster-RCTs. In the absence of RCTs we planned to include well-designed non-randomised studies (NRS) with a parallel comparison assessing the benefits of any type of health education interventions.\nData collection and analysis\nTwo review authors independently evaluated whether potentially relevant studies met the inclusion criteria for the review, but none were found.\nMain results\nA comprehensive search of the literature yielded the following results: CENTRAL (1022 references), MEDLINE (2874 references), and Embase (2820 references). After de-duplication, we screened titles and abstracts of 4880 references and excluded 4864 that did not meet the review inclusion criteria. Of the 16 references that potentially met the review inclusion, we excluded all 16 reports after reviewing the full texts. We did not identify any ongoing trials.\nAuthors' conclusions\nThere is currently an absence of evidence to indicate the effectiveness of health education interventions involving healthcare providers or individuals or both to promote early presentation and referral for women with endometrial cancer symptoms. High-quality RCTs are needed to assess whether health education interventions enhance early presentation and referral. If health education interventions can be shown to reduce treatment delays in endometrial cancer, further studies would be required to determine which interventions are most effective.\n\nGiven the provided abstract, please respond to the following query: \"The aim of the review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We undertook this review to assess whether endometrial cancer education led to women with endometrial cancer symptoms visiting their doctors and being referred for treatment earlier than when there was no educational information available."
}
] |
query-laysum | 1078 | [
{
"role": "user",
"content": "Abstract: Background\nSeveral studies have demonstrated that the use of pancreatic duct stents following pancreaticoduodenectomy is associated with a lower risk of pancreatic fistula. However, to date there is a lack of accord in the literature on whether the use of stents is beneficial and, if so, whether internal or external stenting, with or without replacement, is preferable. This is an update of a systematic review.\nObjectives\nTo determine the efficacy of pancreatic stents in preventing pancreatic fistula after pancreaticoduodenectomy.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Web of Science, and four major Chinese biomedical databases up to November 2015. We also searched several major trials registers.\nSelection criteria\nRandomized controlled trials (RCTs) comparing the use of stents (either internal or external) versus no stents, and comparing internal stents versus external stents, replacement versus no replacement following pancreaticoduodenectomy.\nData collection and analysis\nTwo review authors independently extracted the data. The outcomes studied were incidence of pancreatic fistula, need for reoperation, length of hospital stay, overall complications, and in-hospital mortality. We showed the results as risk ratio (RR) or mean difference (MD), with 95% confidence interval (CI). We assessed the quality of evidence using GRADE (http://www.gradeworkinggroup.org/).\nMain results\nWe included eight studies (1018 participants). The average age of the participants ranged from 56 to 68 years. Most of the studies were conducted in single centers in Japan (four studies), China (two studies), France (one study), and the USA (one study). The risk of bias was low or unclear for most domains across the studies.\nStents versus no stents\nThe effect of stents on reducing pancreatic fistula in people undergoing pancreaticoduodenectomy was uncertain due to the low quality of the evidence (RR 0.67, 95% CI 0.39 to 1.14; 605 participants; 4 studies). The risk of in-hospital mortality was 3% in people who did receive stents compared with 2% (95% CI 1% to 6%) in people who had stents (RR 0.73, 0.28 to 1.94; 605 participants; 4 studies; moderate-quality evidence). The effect of stents on reoperation was uncertain due to wide confidence intervals (RR 0.67, 0.36 to 1.22; 512 participants; 3 studies; moderate-quality evidence). We found moderate-quality evidence that using stents reduces total hospital stay by just under four days (mean difference (MD) -3.68, 95% CI -6.52 to -0.84; 605 participants; 4 studies). The risk of delayed gastric emptying, wound infection, and intra-abdominal abscess was uncertain (gastric emptying: RR 0.75, 95% CI 0.24 to 2.35; moderate-quality evidence) (wound infection: RR 0.73, 95% CI 0.40 to 1.32; moderate-quality evidence) (abscess: RR 1.38, 0.49 to 3.85; low-quality evidence). 
Subgroup analysis by type of stent provided limited evidence that external stents lead to lower risk of fistula compared with internal stents.\nExternal versus internal stents\nThe effect of external stents on the risk of pancreatic fistula, reoperation, delayed gastric emptying, and intra-abdominal abscess compared with internal stents was uncertain due to low-quality evidence (fistula: RR 1.44, 0.94 to 2.21; 362 participants; 3 studies) (reoperation: RR 2.02, 95% CI 0.38 to 10.79; 319 participants; 3 studies) (gastric emptying: RR 1.65, 0.66 to 4.09; 362 participants; 3 studies) (abscess: RR 1.91, 95% CI 0.80 to 4.58; 362 participants; 3 studies). The rate of in-hospital mortality was lower in studies comparing internal and external stents than in those comparing stents with no stents. One death occurred in the external-stent group (RR 0.33, 0.01 to 7.99; low-quality evidence). There were no cases of pancreatitis in participants who had internal stents compared with three in those who had external stents (RR 0.15, 0.01 to 2.73; low-quality evidence). The difference between internal and external stents on total hospital stay was uncertain due to the wide confidence intervals around the average effect of 1.7 days fewer with internal stents (9.18 days fewer to 5.84 days longer; 262 participants; 2 studies; low-quality evidence). The analysis of wound infection could not exclude a protective effect with either approach (RR 1.41, 0.44 to 4.48; 319 participants; 2 studies; moderate-quality evidence).\nOperative replacement of pancreatic juice versus not replacing pancreatic juice\nThere was insufficient evidence available from a small trial to ascertain the effect of replacing pancreatic juice.\nAuthors' conclusions\nThis systematic review has identified limited evidence on the effects of stents. We have not been able to identify convincing direct evidence of superiority of external over internal stents. We found a limited number of RCTs with small sample sizes. Further RCTs on the use of stents after pancreaticoduodenectomy are warranted.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Are pancreatic stents useful in preventing pancreatic fistula after pancreaticoduodenectomy?"
}
] |
query-laysum | 1079 | [
{
"role": "user",
"content": "Abstract: Background\nChronic inflammatory joint diseases (IJDs) affect 1% to 2% of the population in developed countries. IJDs include rheumatoid arthritis (RA), ankylosing spondylitis (AS), psoriatic arthritis (PsA), and other forms of spondyloarthritis (SpA). Tobacco smoking is considered a significant environmental risk factor for developing IJDs. There are indications that smoking exacerbates the symptoms and worsens disease outcomes.\nObjectives\nThe objective of this review was to investigate the evidence for effects of smoking cessation interventions on smoking cessation and disease activity in smokers with IJD.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL), the Cochrane Library; PubMed/MEDLINE; Embase; PsycINFO; the Cumulative Index to Nursing and Allied Health Literature (CINAHL); and three trials registers to October 2018.\nSelection criteria\nWe included randomised controlled trials testing any form of smoking cessation intervention for adult daily smokers with a diagnosis of IJD, and measuring smoking cessation at least six months after baseline.\nData collection and analysis\nWe used standard methodological procedures as expected by Cochrane.\nMain results\nWe included two studies with 57 smokers with a diagnosis of rheumatoid arthritis (RA). We identified no studies including other IJDs. One pilot study compared a smoking cessation intervention specifically for people with RA with a less intensive, generic smoking cessation intervention. People included in the study had a mean age of 56.5 years and a disease duration of 7.7 years (mean). The second study tested effects of an eight-week cognitive-behavioural patient education intervention on cardiovascular disease (CVD) risk for people with RA and compared this with information on CVD risk only. The intervention encouraged participants to address multiple behaviours impacting CVD risk, including smoking cessation, but did not target smoking cessation alone. People included in the study had a mean age of 62.2 years (intervention group) and 60.8 years (control group), and disease duration of 11.6 years (intervention group) and 14.1 years (control group). It was not appropriate to perform a meta-analysis of abstinence data from the two studies due to clinical heterogeneity between interventions. Neither of the studies individually provided evidence to show benefit of the interventions tested. Only one study reported on adverse effects. These effects were non-serious, and numbers were comparable between trial arms. Neither of the studies assessed or reported disease activity or any of the predefined secondary outcomes. We assessed the overall certainty of evidence as very low due to indirectness, imprecision, and high risk of detection bias based on GRADE.\nAuthors' conclusions\nWe found very little research investigating the efficacy of smoking cessation intervention specifically in people with IJD. Included studies are limited by imprecision, risk of bias, and indirectness. Neither of the included studies investigated whether smoking cessation intervention reduced disease activity among people with IJD. High-quality, adequately powered studies are warranted. In particular, researchers should ensure that they measure disease markers and quality of life, in addition to long-term smoking cessation.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We rated the overall quality of the included studies as very low because studies were very few and included few participants; only one of the studies tested an intervention that specifically tried to help people to quit smoking, and there is a chance that people who received the intensive intervention were more likely to incorrectly report that they had stopped smoking when in fact they had not. As a result, further large studies should be carried out to test stop smoking programmes for people with IJD. Researchers should ensure that they measure whether people's IJD symptoms improve, and should confirm whether people have stopped smoking."
}
] |
query-laysum | 1080 | [
{
"role": "user",
"content": "Abstract: Background\nVentilator-associated pneumonia (VAP) is a significant cause of morbidity and mortality, complicating the medical course of approximately 10% of mechanically-ventilated patients, with an estimated attributable mortality of 13%. To treat VAP empirically, the American Thoracic Society currently recommends antibiotic therapy based on the patients' risk of colonisation by an organism with multidrug resistance. The selection of initial antibiotic therapy in VAP is important, as inappropriate initial antimicrobial treatment is associated with higher mortality and longer hospital stay in intensive care unit (ICU) patients.\nWhile guidelines exist for the antibiotic treatment of hospital-acquired pneumonia (HAP) from the American Thoracic Society and the British Society for Antimicrobial Chemotherapy, there are many limitations in the quality of available evidence. This systematic review aimed to summarise the results of all randomised controlled trials (RCTs) that compare empirical antibiotic regimens for VAP.\nObjectives\nThe primary objective of this review was to assess the effect of different empirical antimicrobial therapies on the survival and clinical cure of adult patients with ventilator-associated pneumonia (VAP). Secondary objectives included reporting the incidence of adverse events, new superinfections, length of hospital stay, and length of intensive care unit (ICU) stay associated with these therapies.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS, CINAHL and Web of Science to December 2015; we searched ClinicalTrials.gov to September 2016.\nSelection criteria\nTwo review authors independently assessed RCTs comparing empirical antibiotic treatments of VAP in adult patients, where VAP was defined as new-onset pneumonia that developed more than 48 hours after endotracheal intubation. Physicians and researchers were not required to be blinded for inclusion in this review.\nData collection and analysis\nTwo review authors independently extracted study data. We pooled studies and analysed them in two ways. We examined monotherapy, or a single experimental antimicrobial drug, versus combination therapy, or multiple experimental antimicrobial drugs. We also examined carbapenem therapy versus non-carbapenem therapy.\nMain results\nWe included 12 studies with 3571 participants. All included studies examined the empiric use of one antimicrobial regimen versus another for the treatment of adults with VAP, but the particular drug regimens examined by each study varied. There was potential for bias because some studies did not report outcomes for all participants. All but one study reported sources of funding or author affiliations with pharmaceutical companies.\nWe found no statistical difference in all-cause mortality between monotherapy and combination therapy (N = 4; odds ratio (OR) monotherapy versus combination 0.97, 95% confidence interval (CI) 0.73 to 1.30), clinical cure (N = 2; OR monotherapy versus combination 0.88, 95% CI 0.56 to 1.36), length of stay in ICU (mean difference (MD) 0.65, 95% CI 0.07 to 1.23) or adverse events (N = 2; OR monotherapy versus combination 0.93, 95% CI 0.68 to 1.26). We downgraded the quality of evidence for all-cause mortality, adverse events, and length of ICU stay to moderate for this comparison. 
We determined clinical cure for this comparison to be of very low-quality evidence.\nFor our second comparison of combination therapy with optional adjunctives only one meta-analysis could be performed due to a lack of trials comparing the same antibiotic regimens. Two studies compared tigecycline versus imipenem-cilastatin for clinical cure in the clinically evaluable population and there was a statistically significant increase in clinical cure for imipenem-cilastatin (N = 2; OR tigecycline versus imipenem-cilastatin 0.44, 95% CI 0.23 to 0.84). Of importance, this effect was due to a single study.\nWe found no statistical difference in all-cause mortality between carbapenem and non-carbapenem therapies (N = 1; OR carbapenem versus non-carbapenem 0.59, 95% CI 0.30 to 1.19) or adverse events (N = 3; OR carbapenem versus non-carbapenem 0.78, 95% CI 0.56 to 1.09), but we found that carbapenems are associated with a statistically significant increase in the clinical cure (N = 3; OR carbapenem versus non-carbapenem 1.53, 95% CI 1.11 to 2.12 for intention-to-treat (ITT) analysis and N = 2; OR carbapenem versus non-carbapenem 2.29, 95% CI 1.19 to 4.43 for clinically evaluable patients analysis). For this comparison we downgraded the quality of evidence for mortality, and clinical cure (ITT and clinically evaluable populations) to moderate. We determined the quality of evidence for adverse events to be low.\nAuthors' conclusions\nWe did not find a difference between monotherapy and combination therapy for the treatment of people with VAP. Since studies did not identify patients with increased risk for multidrug-resistant bacteria, these data may not be generalisable to all patient groups. However, this is the largest meta-analysis comparing monotherapy to multiple antibiotic therapies for VAP and contributes further evidence to the safety of using effective monotherapy for the empiric treatment of VAP.\nDue to lack of studies, we could not evaluate the best antibiotic choice for VAP, but carbapenems as a class may result in better clinical cure than other tested antibiotics.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All included studies looked at the use of one antibiotic treatment plan versus another, but these varied among studies. There was potential for bias because some studies did not report outcomes for all participants, and funding for many was provided by pharmaceutical companies and study authors were affiliated with these companies.\nWe used statistical techniques to evaluate our results. For single versus multiple antibiotics, we found no difference in rates of death or cure, or adverse events. For our comparison of combination therapies with optional adjunctives we were only able to analyse clinical cure for one the antibiotics Tigecycline and imipenem-cilastatin for which imipenem-cilastatin was found to have higher clinica cure. We also looked at carbapenem (antibiotics used to treat infections caused by multidrug-resistant bacteria) versus non-carbapenem treatment; we found no difference in death rate or adverse effects, but we found that carbapenems are associated with an increase in clinical cure."
}
] |
query-laysum | 1081 | [
{
"role": "user",
"content": "Abstract: Background\nFunctional constipation is defined as chronic constipation with no identifiable underlying cause. It is a significant cause of morbidity in children, accounting for up to 25% of visits to paediatric gastroenterologists. Probiotic preparations may sufficiently alter the gut microbiome and promote normal gut physiology in a way that helps relieve functional constipation. Several studies have sought to address this hypothesis, as well as the role of probiotics in other functional gut disorders. Therefore, it is important to have a focused review to assess the evidence to date.\nObjectives\nTo evaluate the efficacy and safety of probiotics for the management of chronic constipation without a physical explanation in children.\nSearch methods\nOn 28 June 2021, we searched CENTRAL, MEDLINE, Embase, CINAHL, AMED, WHO ICTR, and ClinicalTrials.gov, with no language, date, publication status, or document type limitations.\nSelection criteria\nWe included randomised controlled trials (RCTs) that assessed probiotic preparations (including synbiotics) compared to placebo, no treatment or any other interventional preparation in people aged between 0 and 18 years old with a diagnosis of functional constipation according to consensus criteria (such as Rome IV).\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 14 studies (1127 randomised participants): 12 studies assessed probiotics in the treatment of functional constipation, whilst two studies investigated synbiotic preparations.\nThree studies compared probiotics to placebo in relation to the frequency of defecation at study end, but we did not pool them as there was very significant unexplained heterogeneity. Four studies compared probiotics to placebo in relation to treatment success. There may be no difference in global improvement/treatment success (RR 1.29, 95% CI 0.73 to 2.26; 313 participants; low-certainty evidence). Five studies compared probiotics to placebo in relation to withdrawals due to adverse events, with the pooled effect suggesting there may be no difference (RR 0.64, 95% CI 0.21 to 1.95; 357 participants; low-certainty evidence).\nThe pooled estimate from three studies that compared probiotics plus an osmotic laxative to osmotic laxative alone found there may be no difference in frequency of defecation (MD -0.01, 95% CI -0.57 to 0.56; 268 participants; low-certainty evidence). Two studies compared probiotics plus an osmotic laxative to osmotic laxative alone in relation to global improvement/treatment success, and found there may be no difference between the treatments (RR 0.95, 95% CI 0.79 to 1.15; 139 participants; low-certainty evidence). Three studies compared probiotics plus osmotic laxative to osmotic laxative alone in relation to withdrawals due to adverse events, but it is unclear if there is a difference between them (RR 2.86, 95% CI 0.12 to 68.35; 268 participants; very low-certainty evidence).\nTwo studies compared probiotics versus magnesium oxide. It is unclear if there is a difference in frequency of defecation (MD 0.28, 95% CI -0.58 to 1.14; 36 participants), treatment success (RR 1.08, 95% CI 0.74 to 1.57; 36 participants) or withdrawals due to adverse events (RR 0.50, 95% CI 0.05 to 5.04; 77 participants). The certainty of the evidence is very low for these outcomes.\nOne study assessed the role of a synbiotic preparation in comparison to placebo. 
There may be higher treatment success in favour of synbiotics compared to placebo (RR 2.32, 95% CI 1.54 to 3.47; 155 participants; low-certainty evidence). The study reported that there were no withdrawals due to adverse effects in either group.\nOne study assessed a synbiotic plus paraffin compared to paraffin alone. It is uncertain if there is a difference in frequency of defecation (MD 0.74, 95% CI -0.96, 2.44; 66 participants; very low-certainty evidence), or treatment success (RR 0.91, 95% CI 0.71 to 1.17; 66 participants; very low-certainty evidence). The study reported that there were no withdrawals due to adverse effects in either group.\nOne study compared a synbiotic preparation to paraffin. It is uncertain if there is a difference in frequency of defecation (MD -1.53, 95% CI -3.00, -0.06; 60 participants; very low-certainty evidence) or in treatment success (RR 0.86, 95% CI 0.65, 1.13; 60 participants; very low-certainty evidence). The study reported that there were no withdrawals due to adverse effects in either group.\nAll secondary outcomes were either not reported or reported in a way that did not allow for analysis.\nAuthors' conclusions\nThere is insufficient evidence to conclude whether probiotics are efficacious in successfully treating chronic constipation without a physical explanation in children or changing the frequency of defecation, or whether there is a difference in withdrawals due to adverse events when compared with placebo. There is limited evidence from one study to suggest a synbiotic preparation may be more likely than placebo to lead to treatment success, with no difference in withdrawals due to adverse events.\nThere is insufficient evidence to draw efficacy or safety conclusions about the use of probiotics in combination with or in comparison to any of the other interventions reported. The majority of the studies that presented data on serious adverse events reported that no events occurred. Two studies did not report this outcome.\nFuture studies are needed to confirm efficacy, but the research community requires guidance on the best context for probiotics in such studies, considering where they should be best considered in a potential treatment hierarchy and should align with core outcome sets to support future interpretation of findings.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for randomised controlled trials (studies in which participants are assigned to one of two or more treatment groups using a random method) comparing any probiotic or synbiotic treatment with any other treatment (such as dummy/placebo treatments) in children with chronic constipation without a physical explanation. We found 14 trials including a total of 1127 participants who were aged under 18. We made the following conclusions.\n• There may be no difference in treatment success when comparing probiotics to placebo.\n• We were unable to draw conclusions on whether there is a difference in the frequency of defecation.\n• There may be no difference in treatment success when comparing probiotics and laxatives to laxatives alone.\n• We were unable to draw conclusions on frequency of defecation or treatment success when comparing probiotics to magnesium oxide.\n• Synbiotics may be better than placebo in improving the frequency of defection.\n• There may be no difference in how many people withdrew from the trials because of side effects when comparing probiotics to placebo, or probiotics and laxative to laxative alone.\n• We have limited confidence in the evidence because the studies only included a small number of children, and due to lack of detail on some of the methods used.\nAll analyses were limited due to differences in the specific probiotics or the treatments they were compared with, low numbers of children included in the studies and, most importantly, the use of a range of different measures of success. This meant that combining studies was difficult, so the overall ability of this review to answer its core questions was limited."
}
] |
query-laysum | 1082 | [
{
"role": "user",
"content": "Abstract: Background\nVestibular migraine is a form of migraine where one of the main features is recurrent attacks of vertigo. These episodes are often associated with other features of migraine, including headache and sensitivity to light or sound. These unpredictable and severe attacks of vertigo can lead to a considerable reduction in quality of life. The condition is estimated to affect just under 1% of the population, although many people remain undiagnosed. A number of pharmacological interventions have been used or proposed to be used as prophylaxis for this condition, to help reduce the frequency of the attacks. These are predominantly based on treatments that are in use for headache migraine, with the belief that the underlying pathophysiology of these conditions is similar.\nObjectives\nTo assess the benefits and harms of pharmacological treatments used for prophylaxis of vestibular migraine.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Ovid Embase; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 23 September 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in adults with definite or probable vestibular migraine comparing beta-blockers, calcium channel blockers, antiepileptics, antidepressants, diuretics, monoclonal antibodies against calcitonin gene-related peptide (or its receptor), botulinum toxin or hormonal modification with either placebo or no treatment. We excluded studies with a cross-over design, unless data from the first phase of the study could be identified.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: 1) improvement in vertigo (assessed as a dichotomous outcome - improved or not improved), 2) change in vertigo (assessed as a continuous outcome, with a score on a numerical scale) and 3) serious adverse events. Our secondary outcomes were: 4) disease-specific health-related quality of life, 5) improvement in headache, 6) improvement in other migrainous symptoms and 7) other adverse effects. We considered outcomes reported at three time points: < 3 months, 3 to < 6 months, > 6 to 12 months. We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe included three studies with a total of 209 participants. One evaluated beta-blockers and the other two evaluated calcium channel blockers. We did not identify any evidence for the remaining interventions of interest.\nBeta-blockers versus placebo\nOne study (including 130 participants, 61% female) evaluated the use of 95 mg metoprolol once daily for six months, compared to placebo. The proportion of people who reported improvement in vertigo was not assessed in this study. Some data were reported on the frequency of vertigo attacks at six months and the occurrence of serious adverse effects. However, this is a single, small study and for all outcomes the certainty of evidence was low or very low. We are unable to draw meaningful conclusions from the numerical results.\nCalcium channel blockers versus no treatment\nTwo studies, which included a total of 79 participants (72% female), assessed the use of 10 mg flunarizine once daily for three months, compared to no intervention. All of the evidence for this comparison was of very low certainty. 
Most of our outcomes were only reported by a single study, therefore we were unable to conduct any meta-analysis. Some data were reported on improvement in vertigo and change in vertigo, but no information was available regarding serious adverse events. We are unable to draw meaningful conclusions from the numerical results, as these data come from single, small studies and the certainty of the evidence was very low.\nAuthors' conclusions\nThere is very limited evidence from placebo-controlled randomised trials regarding the efficacy and potential harms of pharmacological interventions for prophylaxis of vestibular migraine. We only identified evidence for two of our interventions of interest (beta-blockers and calcium channel blockers) and all evidence was of low or very low certainty. Further research is necessary to identify whether these treatments are effective at improving symptoms and whether there are any harms associated with their use.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out:\n- whether there is evidence that any medications work to prevent attacks of vestibular migraine, or reduce the symptoms when an attack occurs;\n- whether the treatments might cause any harm."
}
] |
query-laysum | 1083 | [
{
"role": "user",
"content": "Abstract: Background\nRocuronium bromide is a routinely used muscle relaxant in anaesthetic practice. Its use, however, is associated with intense pain on injection. While it is well established that rocuronium bromide injection causes pain in awake patients, anaesthetized patients also tend to show withdrawal movements of the limbs when this muscle relaxant is administered. Various strategies, both pharmacological and non-pharmacological, have been studied to reduce the incidence and severity of pain on rocuronium bromide injection. We wanted to find out which of the existing modalities was best to reduce pain on rocuronium injection.\nObjectives\nThe objectives of this review were to assess the ability of both pharmacological and non-pharmacological interventions to reduce or eliminate the pain that accompanies rocuronium bromide administration.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL 2013, Issue 7), MEDLINE via Ovid SP (1966 to July 2013) and EMBASE via Ovid SP (1980 to July 2013). We also searched specific websites. We reran the searches in February 2015 and will deal with the 11 studies of interest found through this search when we update the review.\nSelection criteria\nWe included all randomized controlled trials (RCTs) that compared the use of any drug or a non-pharmacological method with control patients, or those receiving no treatment to reduce the severity of pain with rocuronium injection. Our primary outcome was pain on rocuronium bromide injection measured by a pain score assessment. Our secondary outcomes were rise in heart rate and blood pressure following administration of rocuronium and adverse events related to the interventions.\nData collection and analysis\nWe used the standardized methods for conducting a systematic review as described in the Cochrane Handbook for Systematic Reviews of Interventions. Two authors independently extracted details of trial methodology and outcome data from reports of all trials considered eligible for inclusion. We made all analyses on an intention-to-treat basis. We used a fixed-effect model where there was no evidence of significant heterogeneity between studies and a random-effects model if heterogeneity was likely.\nMain results\nWe included 66 studies with 7840 participants in the review, though most analyses were based on data from fewer participants. In total there are 17 studies awaiting classification. No studies were at a low risk of bias. We noted substantial statistical and clinical heterogeneity between trials. Most of the studies reported the primary outcome pain as assessed by verbal response from participants in an awake state but some trials reported withdrawal of the injected limb as a proxy for pain after induction of anaesthesia in response to rocuronium administration. Few studies reported adverse events and no study reported heart rate and blood pressure changes after administration of rocuronium. Lidocaine was the most commonly studied intervention drug, used in 29 trials with 2256 participants. The risk ratio (RR) of pain on injection if given lidocaine compared to placebo was 0.23 (95% confidence interval (CI) 0.17 to 0.31; I² = 65%, low quality of evidence). The RR of pain on injection if fentanyl and remifentanil were given compared to placebo was 0.42 (95% CI 0.26 to 0.70; I² = 79%, low quality of evidence) and (RR 0.10, 95% CI 0.04 to 0.26; I² = 74%, low quality of evidence), respectively. 
Pain on injection of intervention drugs was reported with the use of lidocaine and acetaminophen in one study. Cough was reported with the use of fentanyl (one study), remifentanil (five studies, low quality evidence) and alfentanil (one study). Breath holding and chest tightness were reported with the use of remifentanil in two studies (very low quality evidence) and one study (very low quality evidence), respectively. The overall rate of complications was low.\nAuthors' conclusions\nThe evidence to suggest that the most commonly investigated pharmacological interventions reduce pain on injection of rocuronium is of low quality due to risk of bias and inconsistency. There is low or very low quality evidence for adverse events, due to risk of bias, inconsistency and imprecision of effect. We did not compare the various interventions with one another and so cannot comment on the superiority of one intervention over another. Complications were reported more often with use of opioids.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Rocuronium bromide is a muscle relaxant used as part of general anaesthesia for surgery. Muscle relaxants are used to relax the muscles of the airway to enable endotracheal intubation (placing a breathing tube in the windpipe to support the airway while the person is unconscious) and to facilitate the surgery. However, rocuronium bromide can cause intense pain as it is injected in some people. We wanted to find out whether giving another drug, such as a painkiller or another anaesthetic, or a non-drug intervention, such as diluting the rocuronium, would be useful in reducing the pain experienced by some people on injection of rocuronium."
}
] |
query-laysum | 1084 | [
{
"role": "user",
"content": "Abstract: Background\nWhile laparoscopic cholecystectomy is generally considered less painful than open surgery, pain is one of the important reasons for delayed discharge after day-surgery and overnight stay following laparoscopic cholecystectomy. The safety and effectiveness of different pharmacological interventions such as non-steroidal anti-inflammatory drugs, opioids, and anticonvulsant analgesics in people undergoing laparoscopic cholecystectomy is unknown.\nObjectives\nTo assess the benefits and harms of different analgesics in people undergoing laparoscopic cholecystectomy.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Science Citation Index Expanded, and the World Health Organization International Clinical Trials Registry Platform portal (WHO ICTRP) to March 2013 to identify randomised clinical trials of relevance to this review.\nSelection criteria\nWe considered only randomised clinical trials (irrespective of language, blinding, or publication status) comparing different pharmacological interventions with no intervention or inactive controls for outcomes related to benefit in this review. We considered comparative non-randomised studies with regards to treatment-related harms. We also considered trials that compared one class of drug with another class of drug for this review.\nData collection and analysis\nTwo review authors collected the data independently. We analysed the data with both fixed-effect and random-effects models using Review Manager 5 analysis. For each outcome, we calculated the risk ratio (RR) or mean difference (MD) with 95% confidence intervals (CI).\nMain results\nWe included 25 trials with 2505 participants randomised to the different pharmacological agents and inactive controls. All the trials were at unclear risk of bias. Most trials included only low anaesthetic risk people undergoing elective laparoscopic cholecystectomy. Participants were allowed to take additional analgesics as required in 24 of the trials. The pharmacological interventions in all the included trials were aimed at preventing pain after laparoscopic cholecystectomy. There were considerable differences in the pharmacological agents used and the methods of administration. The estimated effects of the intervention on the proportion of participants who were discharged as day-surgery, the length of hospital stay, or the time taken to return to work were imprecise in all the comparisons in which these outcomes were reported (very low quality evidence). There was no mortality in any of the groups in the two trials that reported mortality (183 participants, very low quality evidence). Differences in serious morbidity outcomes between the groups were imprecise across all the comparisons (very low quality evidence). None of the trials reported patient quality of life or time taken to return to normal activity. The pain at 4 to 8 hours was generally reduced by about 1 to 2 cm on the visual analogue scale of 1 to 10 cm in the comparisons involving the different pharmacological agents and inactive controls (low or very low quality evidence). 
The pain at 9 to 24 hours was generally reduced by about 0.5 cm (a modest reduction) on the visual analogue scale of 1 to 10 cm in the comparisons involving the different pharmacological agents and inactive controls (low or very low quality evidence).\nAuthors' conclusions\nThere is evidence of very low quality that different pharmacological agents including non-steroidal anti-inflammatory drugs, opioid analgesics, and anticonvulsant analgesics reduce pain scores in people at low anaesthetic risk undergoing elective laparoscopic cholecystectomy. However, the decision to use these drugs has to weigh the clinically small reduction in pain against uncertain evidence of serious adverse events associated with many of these agents. Further randomised clinical trials of low risk of systematic and random errors are necessary. Such trials should include important clinical outcomes such as quality of life and time to return to work in their assessment.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 25 randomised clinical trials involving 2505 people undergoing laparoscopic cholecystectomy. Most participants in the trials were low anaesthetic risk people undergoing planned laparoscopic cholecystectomy. The choice of whether the participants received the different painkillers (or not) was determined by a method similar to the toss of coin so that the treatments compared were conducted in people who were as similar as possible. The treatments in all the included trials were aimed at decreasing the pain after laparoscopic cholecystectomy before the participants reported pain. Participants were allowed to take additional painkillers as required in most of the trials."
}
] |
query-laysum | 1085 | [
{
"role": "user",
"content": "Abstract: Background\nDental caries is a major public health problem in most industrialised countries, affecting 60% to 90% of school children. Community water fluoridation was initiated in the USA in 1945 and is currently practised in about 25 countries around the world; health authorities consider it to be a key strategy for preventing dental caries. Given the continued interest in this topic from health professionals, policy makers and the public, it is important to update and maintain a systematic review that reflects contemporary evidence.\nObjectives\nTo evaluate the effects of water fluoridation (artificial or natural) on the prevention of dental caries.\nTo evaluate the effects of water fluoridation (artificial or natural) on dental fluorosis.\nSearch methods\nWe searched the following electronic databases: The Cochrane Oral Health Group's Trials Register (to 19 February 2015); The Cochrane Central Register of Controlled Trials (CENTRAL; Issue 1, 2015); MEDLINE via OVID (1946 to 19 February 2015); EMBASE via OVID (1980 to 19 February 2015); Proquest (to 19 February 2015); Web of Science Conference Proceedings (1990 to 19 February 2015); ZETOC Conference Proceedings (1993 to 19 February 2015). We searched the US National Institutes of Health Trials Registry (ClinicalTrials.gov) and the World Health Organization's WHO International Clinical Trials Registry Platform for ongoing trials. There were no restrictions on language of publication or publication status in the searches of the electronic databases.\nSelection criteria\nFor caries data, we included only prospective studies with a concurrent control that compared at least two populations - one receiving fluoridated water and the other non-fluoridated water - with outcome(s) evaluated at at least two points in time. For the assessment of fluorosis, we included any type of study design, with concurrent control, that compared populations exposed to different water fluoride concentrations. We included populations of all ages that received fluoridated water (naturally or artificially fluoridated) or non-fluoridated water.\nData collection and analysis\nWe used an adaptation of the Cochrane 'Risk of bias' tool to assess risk of bias in the included studies.\nWe included the following caries indices in the analyses: decayed, missing and filled teeth (dmft (deciduous dentition) and DMFT (permanent dentition)), and proportion caries free in both dentitions. For dmft and DMFT analyses we calculated the difference in mean change scores between the fluoridated and control groups. For the proportion caries free we calculated the difference in the proportion caries free between the fluoridated and control groups.\nFor fluorosis data we calculated the log odds and presented them as probabilities for interpretation.\nMain results\nA total of 155 studies met the inclusion criteria; 107 studies provided sufficient data for quantitative synthesis.\nThe results from the caries severity data indicate that the initiation of water fluoridation results in reductions in dmft of 1.81 (95% CI 1.31 to 2.31; 9 studies at high risk of bias, 44,268 participants) and in DMFT of 1.16 (95% CI 0.72 to 1.61; 10 studies at high risk of bias, 78,764 participants). This translates to a 35% reduction in dmft and a 26% reduction in DMFT compared to the median control group mean values. 
There were also increases in the percentage of caries free children of 15% (95% CI 11% to 19%; 10 studies, 39,966 participants) in deciduous dentition and 14% (95% CI 5% to 23%; 8 studies, 53,538 participants) in permanent dentition. The majority of studies (71%) were conducted prior to 1975 and the widespread introduction of the use of fluoride toothpaste.\nThere is insufficient information to determine whether initiation of a water fluoridation programme results in a change in disparities in caries across socioeconomic status (SES) levels.\nThere is insufficient information to determine the effect of stopping water fluoridation programmes on caries levels.\nNo studies that aimed to determine the effectiveness of water fluoridation for preventing caries in adults met the review's inclusion criteria.\nWith regard to dental fluorosis, we estimated that for a fluoride level of 0.7 ppm the percentage of participants with fluorosis of aesthetic concern was approximately 12% (95% CI 8% to 17%; 40 studies, 59,630 participants). This increases to 40% (95% CI 35% to 44%) when considering fluorosis of any level (detected under highly controlled, clinical conditions; 90 studies, 180,530 participants). Over 97% of the studies were at high risk of bias and there was substantial between-study variation.\nAuthors' conclusions\nThere is very little contemporary evidence, meeting the review's inclusion criteria, that has evaluated the effectiveness of water fluoridation for the prevention of caries.\nThe available data come predominantly from studies conducted prior to 1975, and indicate that water fluoridation is effective at reducing caries levels in both deciduous and permanent dentition in children. Our confidence in the size of the effect estimates is limited by the observational nature of the study designs, the high risk of bias within the studies and, importantly, the applicability of the evidence to current lifestyles. The decision to implement a water fluoridation programme relies upon an understanding of the population's oral health behaviour (e.g. use of fluoride toothpaste), the availability and uptake of other caries prevention strategies, their diet and consumption of tap water and the movement/migration of the population. There is insufficient evidence to determine whether water fluoridation results in a change in disparities in caries levels across SES. We did not identify any evidence, meeting the review's inclusion criteria, to determine the effectiveness of water fluoridation for preventing caries in adults.\nThere is insufficient information to determine the effect on caries levels of stopping water fluoridation programmes.\nThere is a significant association between dental fluorosis (of aesthetic concern or all levels of dental fluorosis) and fluoride level. The evidence is limited due to high risk of bias within the studies and substantial between-study variation.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Tooth decay is a worldwide problem affecting most adults and children. Untreated decay may cause pain and lead to teeth having to be removed. In many parts of the world, tooth decay is decreasing. Children from poorer backgrounds still tend to have greater levels of decay. Fluoride is a mineral that prevents tooth decay. It occurs naturally in water at varying levels. Fluoride can also be added to the water with the aim of preventing tooth decay. Fluoride is present in most toothpastes and available in mouthrinses, varnishes and gels. If young children swallow too much fluoride while their permanent teeth are forming, there is a risk of marks developing on those teeth. This is called ‘dental fluorosis’. Most fluorosis is very mild, with faint white lines or streaks visible only to dentists under good lighting in the clinic. More noticeable fluorosis, which is less common, may cause people concern about how their teeth look."
}
] |
query-laysum | 1086 | [
{
"role": "user",
"content": "Abstract: Background\nPain in the neonate is associated with acute behavioural and physiological changes. Cumulative pain is associated with morbidities, including adverse neurodevelopmental outcomes. Studies have shown a reduction in changes in physiological parameters and pain score measurements following pre-emptive analgesic administration in neonates experiencing pain or stress. Non-pharmacological measures (such as holding, swaddling and breastfeeding) and pharmacological measures (such as acetaminophen, sucrose and opioids) have been used for analgesia. This is an update of a review first published in 2006 and updated in 2012.\nObjectives\nThe primary objective was to evaluate the effectiveness of breastfeeding or supplemental breast milk in reducing procedural pain in neonates. The secondary objective was to conduct subgroup analyses based on the type of control intervention, gestational age and the amount of supplemental breast milk given.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL and trial registries (ICTRP, ISRCTN and clinicaltrials.gov) in August 2022; searches were limited from 2011 forwards. We checked the reference lists of included studies and relevant systematic reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) or quasi-RCTs of breastfeeding or supplemental breast milk versus no treatment/other measures in neonates. We included both term (≥ 37 completed weeks postmenstrual age) and preterm infants (< 37 completed weeks' postmenstrual age) up to a maximum of 44 weeks' postmenstrual age. The study must have reported on either physiological markers of pain or validated pain scores.\nData collection and analysis\nWe assessed the methodological quality of the trials using the information provided in the studies and by personal communication with the authors. We extracted data on relevant outcomes, estimated the effect size and reported this as a mean difference (MD). We used the GRADE approach to assess the certainty of evidence.\nMain results\nOf the 66 included studies, 36 evaluated breastfeeding, 29 evaluated supplemental breast milk and one study compared them against each other. The procedures conducted in the studies were: heel lance (39), venipuncture (11), intramuscular vaccination (nine), eye examination for retinopathy of prematurity (four), suctioning (four) and adhesive tape removal as procedure (one). We noted marked heterogeneity in the control interventions and pain assessment measures amongst the studies. Since many studies included multiple arms with breastfeeding/supplemental breast milk as the main comparator, we were not able to synthesise all interventions together. Individual interventions are compared to breastfeeding/supplemental breast milk and reported. 
The numbers of studies/participants presented with the findings are not taken from pooled analyses (as is usual in Cochrane Reviews), but are the overall totals in each comparison.\nOverall, the included studies were at low risk of bias except for masking of intervention and outcome assessment, where nearly one-third of studies were at high risk of bias.\nBreastfeeding versus control\nBreastfeeding may reduce the increase in heart rate compared to holding by mother, skin-to-skin contact, bottle feeding mother's milk, moderate concentration of sucrose/glucose (20% to 33%) with skin-to-skin contact (low-certainty evidence, 8 studies, 784 participants).\nBreastfeeding likely reduces the duration of crying compared to no intervention, lying on table, rocking, heel warming, holding by mother, skin-to-skin contact, bottle feeding mother's milk and moderate concentration of glucose (moderate-certainty evidence, 16 studies, 1866 participants).\nBreastfeeding may reduce percentage time crying compared to holding by mother, skin-to-skin contact, bottle feeding mother's milk, moderate concentration sucrose and moderate concentration of sucrose with skin-to-skin contact (low-certainty evidence, 4 studies, 359 participants).\nBreastfeeding likely reduces the Neonatal Infant Pain Scale (NIPS) score compared to no intervention, holding by mother, heel warming, music, EMLA cream, moderate glucose concentration, swaddling, swaddling and holding (moderate-certainty evidence, 12 studies, 1432 participants).\nBreastfeeding may reduce the Neonatal Facial Coding System (NFCS) score compared to no intervention, holding, pacifier and moderate concentration of glucose (low-certainty evidence, 2 studies, 235 participants).\nBreastfeeding may reduce the Douleur Aigue Nouveau-né (DAN) score compared to positioning, holding or placebo (low-certainty evidence, 4 studies, 709 participants).\nIn the majority of the other comparisons there was little or no difference between the breastfeeding and control group in any of the outcome measures.\nSupplemental breast milk versus control\nSupplemental breast milk may reduce the increase in heart rate compared to water or no intervention (low-certainty evidence, 5 studies, 336 participants).\nSupplemental breast milk likely reduces the duration of crying compared to positioning, massage or placebo (moderate-certainty evidence, 11 studies, 1283 participants).\nSupplemental breast milk results in little or no difference in percentage time crying compared to placebo or glycine (low-certainty evidence, 1 study, 70 participants).\nSupplemental breast milk results in little or no difference in NIPS score compared to no intervention, pacifier, moderate concentration of sucrose, eye drops, gentle touch and verbal comfort, and breast milk odour and verbal comfort (low-certainty evidence, 3 studies, 291 participants).\nSupplemental breast milk may reduce NFCS score compared to glycine (overall low-certainty evidence, 1 study, 40 participants). DAN scores were lower when compared to massage and water; no different when compared to no intervention, EMLA and moderate concentration of sucrose; and higher when compared to rocking or pacifier (low-certainty evidence, 2 studies, 224 participants).\nDue to the high number of comparator interventions, other measures of pain were assessed in a very small number of studies in both comparisons, rendering the evidence of low certainty.\nThe majority of studies did not report on adverse events, considering the benign nature of the intervention. 
Those that reported on adverse events identified none in any participants.\nSubgroup analyses were not conducted due to the small number of studies.\nAuthors' conclusions\nModerate-/low-certainty evidence suggests that breastfeeding or supplemental breast milk may reduce pain in neonates undergoing painful procedures compared to no intervention/positioning/holding or placebo or non-pharmacological interventions. Low-certainty evidence suggests that moderate concentration (20% to 33%) glucose/sucrose may lead to little or no difference in reducing pain compared to breastfeeding. The effectiveness of breast milk for painful procedures should be studied in the preterm population, as there are currently a limited number of studies that have assessed its effectiveness in this population.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We investigated how well breastfeeding or supplemental breast milk (expressed breast milk given via feeding tube or by placing breast milk in baby's mouth) works as a pain reliever in newborn babies while they undergo painful procedures (e.g. vaccination, heel prick, blood sampling for tests or eye examinations). The babies' pain responses (e.g. changes in heart rate, oxygen level, blood pressure, percentages of crying time, duration of crying etc.) were assessed by health care professionals to measure the pain that babies are experiencing."
}
] |
query-laysum | 1087 | [
{
"role": "user",
"content": "Abstract: Background\nAcute asthma is a common cause of presentations to acute care centres, such as the emergency department (ED), and while the majority of patients can be discharged, relapse requiring additional medical care is common. Systemic corticosteroids are a major part in the treatment of moderate to severe acute asthma; however, there is no clear evidence regarding the most effective route of administration for improving outcomes in patients discharged from acute care.\nObjectives\nTo examine the effectiveness and safety of a single dose of intramuscular (IM) corticosteroids provided prior to discharge compared to a short course of oral corticosteroids in the treatment of acute asthma patients discharged from an ED or equivalent acute care setting.\nSearch methods\nThe Cochrane Airways Group conducted searches of the Cochrane Airways Group Register of Trials, most recently on 14 March 2018. In addition in April 2017 we completed an extensive search of nine electronic databases including Medline, Embase, EBM ALL, Global Health, International Pharmaceutical Abstracts, CINAHL, SCOPUS, Proquest Dissertations and Theses Global, and LILACS. Furthermore, we searched the grey literature to identify any additional studies.\nSelection criteria\nWe included randomized controlled trials or controlled clinical trials if they compared the effectiveness of intramuscular (IM) versus oral corticosteroids to treat paediatric or adult patients presenting with acute asthma to an ED or equivalent acute care setting. Two independent reviewers assessed study eligibility and study quality. We resolved disagreements via a third party and assessed risk of bias using the Cochrane 'Risk of bias' tool. We assessed the quality of the evidence using GRADE.\nData collection and analysis\nFor dichotomous outcomes, we calculated individual and pooled statistics as risk ratios (RRs) with 95% confidence intervals (CIs) using a random-effects model. We reported continuous outcomes using mean difference (MD) or standardised mean difference (SMD) with 95% CIs using a random-effects model. We reported heterogeneity using I² and Cochran Q statistics. We used standard procedures recommended by Cochrane.\nMain results\nNine studies involving 804 participants (IM = 402 participants; oral = 402 participants) met our review inclusion criteria. Four studies enrolled children (n = 245 participants), while five studies enrolled adults (n = 559 participants). All of the studies recruited participants presenting to an ED, except one study which recruited participants attending a primary care clinic. All of the paediatric studies compared intramuscular (IM) dexamethasone to oral prednisone/prednisolone. In the adult studies, the IM corticosteroid provided ranged from methylprednisolone, betamethasone, dexamethasone, or triamcinolone, while the regimen of oral corticosteroids provided consisted of prednisone, methylprednisolone, or dexamethasone. Only five studies were placebo controlled. For the purposes of this review, we did not take corticosteroid dose equivalency into account in the analysis. The most common co-intervention provided to participants during the acute care visit included short-acting beta₂-agonists (SABA), methylxanthines, and ipratropium bromide. In some instances, some studies reported providing some participants with supplemental oral or IV corticosteroids during their stay in the ED. 
Co-interventions provided to participants at discharge consisted primarily of SABA, methylxanthine, long-acting beta₂-agonists (LABA), and ipratropium bromide. The risk of bias of the included studies ranged from unclear to high across various domains. The primary outcome of interest was relapse to additional care defined as an unscheduled visit to a health practitioner for worsening asthma symptoms, or requiring subsequent treatment with corticosteroids which may have occurred at any time point after discharge from the ED.\nWe found intramuscular and oral corticosteroids to be similarly effective in reducing the risk for relapse (RR 0.94, 95% CI 0.72 to 1.24; 9 studies, 804 participants; I² = 0%; low-quality evidence). We found no subgroup differences in relapse rates between paediatric and adult participants (P = 0.71), relapse occurring within or after 10 days post-discharge (P = 0.22), or participants with mild/moderate or severe exacerbations (P = 0.35). While we found no statistical difference between participants receiving IM versus oral corticosteroids regarding the risk for adverse events (RR 0.83, 95% CI 0.64 to 1.07; 5 studies, 404 participants; I² = 0%; moderate-quality evidence), an estimated 50 fewer patients per 1000 receiving IM corticosteroids reported experiencing adverse events (95% from 106 fewer to 21 more). We found inconsistent reporting of specific adverse events across the studies. There were no differences in the frequency of specific adverse events including nausea and vomiting, pain, swelling, redness, insomnia, or personality changes. We did not seek additional adverse events data.\nParticipants receiving IM corticosteroids or oral corticosteroids both reported decreases in peak expiratory flow (MD −7.78 L/min, 95% CI −38.83 L/min to 23.28 L/min; 4 studies, 272 participants; I² = 33%; moderate-quality evidence), similar symptom persistence (RR 0.41, 95% CI 0.14 to 1.20; 3 studies, 80 participants; I² = 44%; low-quality evidence), and 24-hour beta-agonist use (RR 0.54, 95% CI 0.21 to 1.37; 2 studies, 48 participants; I² = 0%; low-quality evidence).\nAuthors' conclusions\nThere is insufficient evidence to identify whether IM corticosteroids are more effective in reducing relapse compared to oral corticosteroids among children or adults discharged from an ED or equivalent acute care setting for acute asthma. While we found no statistical differences, patients receiving IM corticosteroids reported fewer adverse events. Additional studies comparing the effectiveness of IM versus oral corticosteroids could provide further evidence clarity. Furthermore, there is a need for studies comparing different IM corticosteroids (e.g. IM dexamethasone versus IM methylprednisone) and different oral corticosteroids (e.g. oral dexamethasone versus oral prednisone), with consideration for dosing and pharmacokinetic properties, to better identify the optimal IM or oral corticosteroid regimens to improve patient outcomes. Other factors, such as patient preference and potential issues with adherence, may dictate practitioner prescribing.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We examined the effectiveness of an injection of corticosteroids compared to corticosteroids taken by mouth to improve outcomes among patients who presented to an emergency department or similar acute care setting with acute asthma."
}
] |
query-laysum | 1088 | [
{
"role": "user",
"content": "Abstract: Background\nPeripheral arterial disease is a major health problem, and in about 1% to 2% of patients, the disease progresses to critical limb ischaemia (CLI), also known as critical limb-threatening ischaemia. In a substantial number of individuals with CLI, no effective treatment options other than amputation are available, with around a quarter of these patients requiring a major amputation during the following year. This is the second update of a review first published in 2011.\nObjectives\nTo evaluate the benefits and harms of local intramuscular transplantation of autologous adult bone marrow mononuclear cells (BMMNCs) as a treatment for CLI.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 8 November 2021.\nSelection criteria\nWe included all randomised controlled trials (RCTs) of CLI in which participants were randomly allocated to intramuscular administration of autologous adult BMMNCs or control (either no intervention, conventional conservative therapy, or placebo).\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes of interest were all-cause mortality, pain, and amputation. Our secondary outcomes were angiographic analysis, ankle-brachial index (ABI), pain-free walking distance, side effects and complications. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe included four RCTs involving a total of 176 participants with a clinical diagnosis of CLI. Participants were randomised to receive either intramuscular cell implantation of BMMNCs or control. The control arms varied between studies, and included conventional therapy, diluted autologous peripheral blood, and saline. There was no clear evidence of an effect on mortality related to the administration of BMMNCs compared to control (risk ratio (RR) 1.00, 95% confidence interval (CI) 0.15 to 6.63; 3 studies, 123 participants; very low-certainty evidence). All trials assessed changes in pain severity, but the trials used different forms of pain assessment tools, so we were unable to pool data. Three studies individually reported that no differences in pain reduction were observed between the BMMNC and control groups. One study reported that reduction in rest pain was greater in the BMMNC group compared to the control group (very low-certainty evidence). All four trials reported the rate of amputation at the end of the study period. We are uncertain if amputations were reduced in the BMMNC group compared to the control group, as a possible small effect (RR 0.52, 95% CI 0.27 to 0.99; 4 studies, 176 participants; very low-certainty evidence) was lost after undertaking sensitivity analysis (RR 0.52, 95% CI 0.19 to 1.39; 2 studies, 89 participants). None of the included studies reported any angiographic analysis. Ankle-brachial index was reported differently by each study, so we were not able to pool the data. Three studies reported no changes between groups, and one study reported greater improvement in ABI (as haemodynamic improvement) in the BMMNC group compared to the control group (very low-certainty evidence). One study reported pain-free walking distance, finding no clear difference between BMMNC and control groups (low-certainty evidence). We pooled the data for side effects reported during the follow-up, and this did not show any clear difference between BMMNC and control groups (RR 2.13, 95% CI 0.50 to 8.97; 4 studies, 176 participants; very low-certainty evidence). 
We downgraded the certainty of the evidence due to the concerns about risk of bias, imprecision, and inconsistency.\nAuthors' conclusions\nWe identified a small number of studies that met our inclusion criteria, and these differed in the controls they used and how they measured important outcomes. Limited data from these trials provide very low- to low-certainty evidence, and we are unable to draw conclusions to support the use of local intramuscular transplantation of BMMNC for improving clinical outcomes in people with CLI. Evidence from larger RCTs is needed in order to provide adequate statistical power to assess the role of this procedure.\n\nGiven the provided abstract, please respond to the following query: \"Why is this question important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Critical limb ischaemia, or critical limb-threatening ischaemia, occurs when blood flow to the legs is reduced because of the worsening of peripheral arterial disease. Initially, patients experience cramping leg pain that limits walking (known as intermittent claudication), but over time some patients will experience more severe symptoms including pain at rest, leg ulceration, and gangrene. The available treatment options are very limited when the disease reaches this stage, especially when surgical or catheter revascularisation (a procedure to restore blood flow in blocked arteries or veins) is not an option. Many of these patients will go on to have the affected limb amputated. The use of mononuclear cell therapy (using the patient’s own cells) offers the possibility of an alternative treatment for patients, by supplying cells that could stimulate the formation of stable capillary vessels to improve the blood flow in the affected limb. These cells can be obtained from the bone marrow. They are purified in a laboratory and injected into the large muscle at the back of the lower leg."
}
] |
query-laysum | 1089 | [
{
"role": "user",
"content": "Abstract: Background\nPain during dental treatment, which is a common fear of patients, can be controlled successfully by local anaesthetic. Several different local anaesthetic formulations and techniques are available to dentists.\nObjectives\nOur primary objectives were to compare the success of anaesthesia, the speed of onset and duration of anaesthesia, and systemic and local adverse effects amongst different local anaesthetic formulations for dental anaesthesia. We define success of anaesthesia as absence of pain during a dental procedure, or a negative response to electric pulp testing or other simulated scenario tests. We define dental anaesthesia as anaesthesia given at the time of any dental intervention.\nOur secondary objective was to report on patients' experience of the procedures carried out.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; the Cochrane Library; 2018, Issue 1), MEDLINE (OVID SP), Embase, CINAHL PLUS, WEB OF SCIENCE, and other resources up to 31 January 2018. Other resources included trial registries, handsearched journals, conference proceedings, bibliographies/reference lists, and unpublished research.\nSelection criteria\nWe included randomized controlled trials (RCTs) testing different formulations of local anaesthetic used for clinical procedures or simulated scenarios. Studies could apply a parallel or cross-over design.\nData collection and analysis\nWe used standard Cochrane methodological approaches for data collection and analysis.\nMain results\nWe included 123 studies (19,223 participants) in the review. We pooled data from 68 studies (6615 participants) for meta-analysis, yielding 23 comparisons of local anaesthetic and 57 outcomes with 14 different formulations. Only 10 outcomes from eight comparisons involved clinical testing.\nWe assessed the included studies as having low risk of bias in most domains. Seventy-three studies had at least one domain with unclear risk of bias. 
Fifteen studies had at least one domain with high risk of bias due to inadequate sequence generation, allocation concealment, masking of local anaesthetic cartridges for administrators or outcome assessors, or participant dropout or exclusion.\nWe reported results for the eight most important comparisons.\nSuccess of anaesthesia\nWhen the success of anaesthesia in posterior teeth with irreversible pulpitis requiring root canal treatment is tested, 4% articaine, 1:100,000 epinephrine, may be superior to 2% lidocaine, 1:100,000 epinephrine (31% with 2% lidocaine vs 49% with 4% articaine; risk ratio (RR) 1.60, 95% confidence interval (CI) 1.10 to 2.32; 4 parallel studies; 203 participants; low-quality evidence).\nWhen the success of anaesthesia for teeth/dental tissues requiring surgical procedures and surgical procedures/periodontal treatment, respectively, was tested, 3% prilocaine, 0.03 IU felypressin (66% with 3% prilocaine vs 76% with 2% lidocaine; RR 0.86, 95% CI 0.79 to 0.95; 2 parallel studies; 907 participants; moderate-quality evidence), and 4% prilocaine plain (71% with 4% prilocaine vs 83% with 2% lidocaine; RR 0.86, 95% CI 0.75 to 0.99; 2 parallel studies; 228 participants; low-quality evidence) were inferior to 2% lidocaine, 1:100,000 epinephrine.\nComparative effects of 4% articaine, 1:100,000 epinephrine and 4% articaine, 1:200,000 epinephrine on success of anaesthesia for teeth/dental tissues requiring surgical procedures are uncertain (RR 0.85, 95% CI 0.71 to 1.02; 3 parallel studies; 930 participants; very low-quality evidence).\nComparative effects of 0.5% bupivacaine, 1:200,000 epinephrine and both 4% articaine, 1:200,000 epinephrine (odds ratio (OR) 0.87, 95% CI 0.27 to 2.83; 2 cross-over studies; 37 participants; low-quality evidence) and 2% lidocaine, 1:100,000 epinephrine (OR 0.58, 95% CI 0.07 to 5.12; 2 cross-over studies; 31 participants; low-quality evidence) on success of anaesthesia for teeth requiring extraction are uncertain.\nComparative effects of 2% mepivacaine, 1:100,000 epinephrine and both 4% articaine, 1:100,000 epinephrine (OR 3.82, 95% CI 0.61 to 23.82; 1 parallel and 1 cross-over study; 110 participants; low-quality evidence) and 2% lidocaine, 1:100,000 epinephrine (RR 1.16, 95% CI 0.25 to 5.45; 2 parallel studies; 68 participants; low-quality evidence) on success of anaesthesia for teeth requiring extraction and teeth with irreversible pulpitis requiring endodontic access and instrumentation, respectively, are uncertain.\nFor remaining outcomes, assessing success of dental local anaesthesia via meta-analyses was not possible.\nOnset and duration of anaesthesia\nFor comparisons assessing onset and duration, no clinical studies met our outcome definitions.\nAdverse effects (continuous pain measured on 170-mm Heft-Parker visual analogue scale (VAS))\nDifferences in post-injection pain between 4% articaine, 1:100,000 epinephrine and 2% lidocaine, 1:100,000 epinephrine are small, as measured on a VAS (mean difference (MD) 4.74 mm, 95% CI -1.98 to 11.46 mm; 3 cross-over studies; 314 interventions; moderate-quality evidence). Lidocaine probably resulted in slightly less post-injection pain than articaine (MD 6.41 mm, 95% CI 1.01 to 11.80 mm; 3 cross-over studies; 309 interventions; moderate-quality evidence) on the same VAS.\nFor remaining comparisons assessing local and systemic adverse effects, meta-analyses were not possible. 
Other adverse effects were rare and minor.\nPatients' experience\nPatients' experience of procedures was not assessed owing to lack of data.\nAuthors' conclusions\nFor success (absence of pain), low-quality evidence suggests that 4% articaine, 1:100,000 epinephrine was superior to 2% lidocaine, 1:100,000 epinephrine for root treating of posterior teeth with irreversible pulpitis, and 2% lidocaine, 1:100,000 epinephrine was superior to 4% prilocaine plain when surgical procedures/periodontal treatment was provided. Moderate-quality evidence shows that 2% lidocaine, 1:100,000 epinephrine was superior to 3% prilocaine, 0.03 IU felypressin when surgical procedures were performed.\nAdverse events were rare. Moderate-quality evidence shows no difference in pain on injection when 4% articaine, 1:100,000 epinephrine and 2% lidocaine, 1:100,000 epinephrine were compared, although lidocaine resulted in slightly less pain following injection.\nMany outcomes tested our primary objectives in simulated scenarios, although clinical alternatives may not be possible.\nFurther studies are needed to increase the strength of the evidence. These studies should be clearly reported, have low risk of bias with adequate sample size, and provide data in a format that will allow meta-analysis. Once assessed, results of the 34 ‘Studies awaiting classification (full text unavailable)’ may alter the conclusions of the review.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review suggests that of the 14 types of local anaesthetic tested, evidence to support the use of one over another is limited to the outcome of success (absence of pain), from three comparisons of local anaesthetic. Findings show that 4% articaine, 1:100,000 epinephrine was superior to 2% lidocaine, 1:100,000 epinephrine in posterior teeth with inflamed pulps requiring root canal treatment. No difference between these solutions was seen when pain on injection was assessed, and although lidocaine resulted in less post-injection pain, the difference was minimal. Researchers found that 2% lidocaine, 1:100,000 epinephrine was superior to 3% prilocaine, 0.03 IU felypressin and 4% prilocaine plain for surgical procedures and surgical procedures/periodontal treatment, respectively. Speeds of onset were within clinically acceptable times, and durations were variable, making them suitable for different applications. Both of these latter outcomes were tested in experimental ways that may not reflect clinical findings. Unwanted effects were rare. Patients' experience of the procedures was not assessed owing to lack of data."
}
] |
query-laysum | 1090 | [
{
"role": "user",
"content": "Abstract: Background\nNon-specific low back pain is a common, potentially disabling condition usually treated with self-care and non-prescription medication. For chronic low back pain, current guidelines recommend exercise therapy. Yoga is a mind–body exercise sometimes used for non-specific low back pain.\nObjectives\nTo evaluate the benefits and harms of yoga for treating chronic non-specific low back pain in adults compared to sham yoga, no specific treatment, a minimal intervention (e.g. education), or another active treatment, focusing on pain, function, quality of life, and adverse events.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 31 August 2021 without language or publication status restrictions.\nSelection criteria\nWe included randomized controlled trials of yoga compared to sham yoga, no intervention, any other intervention and yoga added to other therapies.\nData collection and analysis\nWe followed standard Cochrane methods. Our major outcomes were 1. back-specific function, 2. pain, 3. clinical improvement, 4. mental and physical quality of life, 5. depression, and 6. adverse events. Our minor outcome was 1. work disability. We used GRADE to assess certainty of evidence for the major outcomes.\nMain results\nWe included 21 trials (2223 participants) from the USA, India, the UK, Croatia, Germany, Sweden, and Turkey. Participants were recruited from both clinical and community settings. Most were women in their 40s or 50s. Most trials used iyengar, hatha, or viniyoga yoga. Trials compared yoga to a non-exercise control including waiting list, usual care, or education (10 trials); back-focused exercise such as physical therapy (five trials); both exercise and non-exercise controls (four trials); both non-exercise and another mind–body exercise (qigong) (one trial); and yoga plus exercise to exercise alone (one trial). One trial comparing yoga to exercise was an intensive residential one-week program, and we analyzed this trial separately. All trials were at high risk of performance and detection bias because participants and providers were not blinded to treatment, and outcomes were self-assessed.\nWe found no trials comparing yoga to sham yoga.\nLow-certainty evidence from 11 trials showed that there may be a small clinically unimportant improvement in back-specific function with yoga (mean difference [MD] −1.69, 95% confidence interval [CI] −2.73 to −0.65 on the 0- to 24-point Roland-Morris Disability Questionnaire [RMDQ], lower = better, minimal clinically important difference [MCID] 5 points; 1155 participants) and moderate-certainty evidence from nine trials showed a clinically unimportant improvement in pain (MD −4.53, 95% CI −6.61 to −2.46 on a 0 to 100 scale, 0 no pain, MCID 15 points; 946 participants) compared to no exercise at three months. Low-certainty evidence from four trials showed that there may be a clinical improvement with yoga (risk ratio [RR] 2.33, 95% CI 1.46 to 3.71; assessed as participant rating that back pain was improved or resolved; 353 participants). Moderate-certainty evidence from six trials showed that there is probably a small improvement in physical and mental quality of life (physical: MD 1.80, 95% CI 0.27 to 3.33 on the 36-item Short Form [SF-36] physical health scale, higher = better; mental: MD 2.38, 95% CI 0.60 to 4.17 on the SF-36 mental health scale, higher = better; both 686 participants). 
Low-certainty evidence from three trials showed little to no improvement in depression (MD −1.25, 95% CI −2.90 to 0.46 on the Beck Depression Inventory, lower = better; 241 participants). There was low-certainty evidence from eight trials that yoga increased the risk of adverse events, primarily increased back pain, at six to 12 months (RR 4.76, 95% CI 2.08 to 10.89; 43/1000 with yoga and 9/1000 with no exercise; 1037 participants).\nFor yoga compared to back-focused exercise controls (8 trials, 912 participants) at three months, we found moderate-certainty evidence from four trials for little or no difference in back-specific function (MD −0.38, 95% CI −1.33 to 0.62 on the RMDQ, lower = better; 575 participants) and very low-certainty evidence from two trials for little or no difference in pain (MD 2.68, 95% CI −2.01 to 7.36 on a 0 to 100 scale, lower = better; 326 participants). We found very low-certainty evidence from three trials for no difference in clinical improvement assessed as participant rating that back pain was improved or resolved (RR 0.97, 95% CI 0.72 to 1.31; 433 participants) and very low-certainty evidence from one trial for little or no difference in physical and mental quality of life (physical: MD 1.30, 95% CI −0.95 to 3.55 on the SF-36 physical health scale, higher = better; mental: MD 1.90, 95% CI −1.17 to 4.97 on the SF-36 mental health scale, higher = better; both 237 participants). No studies reported depression. Low-certainty evidence from five trials showed that there was little or no difference between yoga and exercise in the risk of adverse events at six to 12 months (RR 0.93, 95% CI 0.56 to 1.53; 84/1000 with yoga and 91/1000 with non-yoga exercise; 640 participants).\nAuthors' conclusions\nThere is low- to moderate-certainty evidence that yoga compared to no exercise results in small and clinically unimportant improvements in back-related function and pain. There is probably little or no difference between yoga and other back-related exercise for back-related function at three months, although it remains uncertain whether there is any difference between yoga and other exercise for pain and quality of life. Yoga is associated with more adverse events than no exercise, but may have the same risk of adverse events as other exercise. In light of these results, decisions to use yoga instead of no exercise or another exercise may depend on availability, cost, and participant or provider preference. Since all studies were unblinded and at high risk of performance and detection bias, it is unlikely that blinded comparisons would find a clinically important benefit.\n\nGiven the provided abstract, please respond to the following query: \"What is non-specific low back pain?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Low back pain is a common health problem. In many cases, there is no known cause for the pain and it is termed 'non-specific' back pain. For some people, the pain may last for three months or more and at this point it is termed 'chronic.' Non-specific low back pain is usually treated with over-the-counter pain medicines and exercise, and does not require surgery or other invasive procedures. Yoga is sometimes used to help treat or manage low back pain."
}
] |
query-laysum | 1091 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with heart failure report various symptoms and show a trajectory of periodic exacerbations and recoveries, where each exacerbation event may lead to death. Current clinical practice guidelines indicate the importance of discussing future care strategies with people with heart failure. Advance care planning (ACP) is the process of discussing an individual's future care plan according to their values and preferences, and involves the person with heart failure, their family members or surrogate decision-makers, and healthcare providers. Although it is shown that ACP may improve discussion about end-of-life care and documentation of an individual's preferences, the effects of ACP for people with heart failure are uncertain.\nObjectives\nTo assess the effects of advance care planning (ACP) in people with heart failure compared to usual care strategies that do not have any components promoting ACP.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, Social Work Abstracts, and two clinical trials registers in October 2019. We checked the reference lists of included studies. There were no restrictions on language or publication status.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared ACP with usual care in people with heart failure. Trials could have parallel group, cluster-randomised, or cross-over designs. We included interventions that implemented ACP, such as discussing and considering values, wishes, life goals, and preferences for future medical care. The study participants comprised adults (18 years of age or older) with heart failure.\nData collection and analysis\nTwo review authors independently extracted outcome data from the included studies, and assessed their risk of bias. We contacted trial authors when we needed to obtain missing information.\nMain results\nWe included nine RCTs (1242 participants and 426 surrogate decision-makers) in this review. The meta-analysis included seven studies (876 participants). Participants' mean ages ranged from 62 to 82 years, and 53% to 100% of the studies' participants were men. All included studies took place in the US or the UK.\nOnly one study reported concordance between participants' preferences and end-of-life care, and it enrolled people with heart failure or renal disease. Owing to one study with small sample size, the effects of ACP on concordance between participants' preferences and end-of-life care were uncertain (risk ratio (RR) 1.19, 95% confidence interval (CI) 0.91 to 1.55; participants = 110; studies = 1; very low-quality evidence). It corresponded to an assumed risk of 625 per 1000 participants receiving usual care and a corresponding risk of 744 per 1000 (95% CI 569 to 969) for ACP. There was no evidence of a difference in quality of life between groups (standardised mean difference (SMD) 0.06, 95% CI –0.26 to 0.38; participants = 156; studies = 3; low-quality evidence). However, one study, which was not included in the meta-analysis, showed that the quality of life score improved by 14.86 points in the ACP group compared with 11.80 points in the usual care group.\nCompletion of documentation by medical staff regarding discussions with participants about ACP processes may have increased (RR 1.68. 95% CI 1.23 to 2.29; participants = 92; studies = 2; low-quality evidence). This corresponded to an assumed risk of 489 per 1000 participants with usual care and a corresponding risk of 822 per 1000 (95% CI 602 to 1000) for ACP. 
One study, which was not included in the meta-analysis, also showed that ACP helped to improve documentation of the ACP process (hazard ratio (HR) 2.87, 95% CI 1.09 to 7.59; participants = 232).\nThree studies reported that implementation of ACP led to an improvement of participants' depression (SMD –0.58, 95% CI –0.82 to –0.34; participants = 278; studies = 3; low-quality evidence). We were uncertain about the effects of ACP on the quality of communication when compared to the usual care group (MD –0.40, 95% CI –1.61 to 0.81; participants = 9; studies = 1; very low-quality evidence). We also noted an increase in all-cause mortality in the ACP group (RR 1.32, 95% CI 1.04 to 1.67; participants = 795; studies = 5).\nThe studies did not report participants' satisfaction with care/treatment and caregivers' satisfaction with care/treatment.\nAuthors' conclusions\nACP may help to increase documentation by medical staff regarding discussions with participants about ACP processes, and may improve an individual's depression. However, the quality of the evidence about these outcomes was low. The quality of the evidence for each outcome was low to very low due to the small number of studies and participants included in this review. Additionally, the follow-up periods and types of ACP intervention were varied. Therefore, further studies are needed to explore the effects of ACP that consider these differences carefully.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of evidence was low/very low. Since the number of studies and patients in this review was small, the effects of ACP were limited. There is clearly a need for high-quality evidence from large studies to fully explore the effects of ACP for people with heart failure, in particular their quality of life and whether end-of-life care received after ACP actually meet their preferences."
}
] |
query-laysum | 1092 | [
{
"role": "user",
"content": "Abstract: Background\nThis is the second update of a Cochrane Review originally published in 2009. Millions of workers worldwide are exposed to noise levels that increase their risk of hearing disorders. There is uncertainty about the effectiveness of hearing loss prevention interventions.\nObjectives\nTo assess the effectiveness of non-pharmaceutical interventions for preventing occupational noise exposure or occupational hearing loss compared to no intervention or alternative interventions.\nSearch methods\nWe searched the CENTRAL; PubMed; Embase; CINAHL; Web of Science; BIOSIS Previews; Cambridge Scientific Abstracts; and OSH UPDATE to 3 October 2016.\nSelection criteria\nWe included randomised controlled trials (RCT), controlled before-after studies (CBA) and interrupted time-series (ITS) of non-clinical interventions under field conditions among workers to prevent or reduce noise exposure and hearing loss. We also collected uncontrolled case studies of engineering controls about the effect on noise exposure.\nData collection and analysis\nTwo authors independently assessed study eligibility and risk of bias and extracted data. We categorised interventions as engineering controls, administrative controls, personal hearing protection devices, and hearing surveillance.\nMain results\nWe included 29 studies. One study evaluated legislation to reduce noise exposure in a 12-year time-series analysis but there were no controlled studies on engineering controls for noise exposure. Eleven studies with 3725 participants evaluated effects of personal hearing protection devices and 17 studies with 84,028 participants evaluated effects of hearing loss prevention programmes (HLPPs).\nEffects on noise exposure\nEngineering interventions following legislation\nOne ITS study found that new legislation in the mining industry reduced the median personal noise exposure dose in underground coal mining by 27.7 percentage points (95% confidence interval (CI) −36.1 to −19.3 percentage points) immediately after the implementation of stricter legislation. This roughly translates to a 4.5 dB(A) decrease in noise level. The intervention was associated with a favourable but statistically non-significant downward trend in time of the noise dose of −2.1 percentage points per year (95% CI −4.9 to 0.7, 4 year follow-up, very low-quality evidence).\nEngineering intervention case studies\nWe found 12 studies that described 107 uncontrolled case studies of immediate reductions in noise levels of machinery ranging from 11.1 to 19.7 dB(A) as a result of purchasing new equipment, segregating noise sources or installing panels or curtains around sources. However, the studies lacked long-term follow-up and dose measurements of workers, and we did not use these studies for our conclusions.\nHearing protection devices\nIn general hearing protection devices reduced noise exposure on average by about 20 dB(A) in one RCT and three CBAs (57 participants, low-quality evidence). Two RCTs showed that, with instructions for insertion, the attenuation of noise by earplugs was 8.59 dB better (95% CI 6.92 dB to 10.25 dB) compared to no instruction (2 RCTs, 140 participants, moderate-quality evidence).\nAdministrative controls: information and noise exposure feedback\nOn-site training sessions did not have an effect on personal noise-exposure levels compared to information only in one cluster-RCT after four months' follow-up (mean difference (MD) 0.14 dB; 95% CI −2.66 to 2.38). 
Another arm of the same study found that personal noise exposure information had no effect on noise levels (MD 0.30 dB(A), 95% CI −2.31 to 2.91) compared to no such information (176 participants, low-quality evidence).\nEffects on hearing loss\nHearing protection devices\nIn two studies the authors compared the effect of different devices on temporary threshold shifts at short-term follow-up but reported insufficient data for analysis. In two CBA studies the authors found no difference in hearing loss from noise exposure above 89 dB(A) between muffs and earplugs at long-term follow-up (OR 0.8, 95% CI 0.63 to 1.03; very low-quality evidence). Authors of another CBA study found that wearing hearing protection more often resulted in less hearing loss at very long-term follow-up (very low-quality evidence).\nCombination of interventions: hearing loss prevention programmes\nOne cluster-RCT found no difference in hearing loss at three- or 16-year follow-up between an intensive HLPP for agricultural students and audiometry only. One CBA study found no reduction of the rate of hearing loss (MD −0.82 dB per year, 95% CI −1.86 to 0.22) for a HLPP that provided regular personal noise exposure information compared to a programme without this information.\nThere was very-low-quality evidence in four very long-term studies that better use of hearing protection devices as part of a HLPP decreased the risk of hearing loss compared to less well used hearing protection in HLPPs (OR 0.40, 95% CI 0.23 to 0.69). Other aspects of the HLPP such as training and education of workers or engineering controls did not show a similar effect.\nIn three long-term CBA studies, workers in a HLPP had a statistically non-significant 1.8 dB (95% CI −0.6 to 4.2) greater hearing loss at 4 kHz than non-exposed workers, and the confidence interval includes the 4.2 dB which is the level of hearing loss resulting from 5 years of exposure to 85 dB(A). In addition, of three other CBA studies that could not be included in the meta-analysis, two showed an increased risk of hearing loss in spite of the protection of a HLPP compared to non-exposed workers and one CBA did not.\nAuthors' conclusions\nThere is very low-quality evidence that implementation of stricter legislation can reduce noise levels in workplaces. Controlled studies of other engineering control interventions in the field have not been conducted. There is moderate-quality evidence that training of proper insertion of earplugs significantly reduces noise exposure at short-term follow-up but long-term follow-up is still needed.\nThere is very low-quality evidence that the better use of hearing protection devices as part of HLPPs reduces the risk of hearing loss, whereas for other programme components of HLPPs we did not find such an effect. The absence of conclusive evidence should not be interpreted as evidence of lack of effectiveness. Rather, it means that further research is very likely to have an important impact.\n\nGiven the provided abstract, please respond to the following query: \"Effects on hearing loss\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 16 studies with 81,220 participants on the long-term effects of hearing protection on hearing loss.\nHearing protection\nThe use of hearing protection devices in a well-implemented HLPP was associated with less hearing loss. For other elements of programmes such as worker training, audiometry alone, noise monitoring, or providing feedback on personal noise exposure, there was no clear effect. Two studies with 3242 workers found that there was no difference in the long-term effect of earmuffs versus earplugs on hearing loss.\nHearing loss prevention programmes\nFour studies provided very low-quality evidence that, compared to non-exposed workers, average HLPPs do not reduce the risk of hearing loss to below a level at least equivalent to that of workers who are exposed to 85 dB(A). Two comparable additional studies showed that the risk of hearing loss is still substantial despite being covered by a HLPP. However, one low-quality study showed that a stricter HLPP can protect workers from hearing loss.\nThe absence of conclusive evidence should not be interpreted as evidence of lack of effectiveness. Rather, it means that further research is very likely to affect the conclusions we reached. Higher-quality prevention programmes, better quality of studies, especially in the field of engineering controls, and better implementation of legislation are needed to prevent noise-induced hearing loss."
}
] |
query-laysum | 1093 | [
{
"role": "user",
"content": "Abstract: Background\nA new Cochrane Review entitled 'Cognitive behavioural therapy for anxiety disorders in children and adolescents' was published on 16 November 2020 which supersedes this publication.\nCitation: James AC, Reardon T, Soler A, James G, Creswell C. Cognitive behavioural therapy for anxiety disorders in children and adolescents. Cochrane Database of Systematic Reviews 2020, Issue 11. Art. No.: CD013162. DOI: 10.1002/14651858.CD013162.pub2.\nA previous Cochrane review (James 2005) showed that cognitive behavioural therapy (CBT) was effective in treating childhood anxiety disorders; however, questions remain regarding (1) the relative efficacy of CBT versus non-CBT active treatments; (2) the relative efficacy of CBT versus medication and the combination of CBT and medication versus placebo; and (3) the long-term effects of CBT.\nObjectives\nTo examine (1) whether CBT is an effective treatment for childhood and adolescent anxiety disorders in comparison with (a) wait-list controls; (b) active non-CBT treatments (i.e. psychological placebo, bibliotherapy and treatment as usual (TAU)); and (c) medication and the combination of medication and CBT versus placebo; and (2) the long-term effects of CBT.\nSearch methods\nSearches for this review included the Cochrane Central Register of Controlled Trials (CENTRAL) and the Cochrane Depression, Anxiety and Neurosis Group Register, which consists of relevant randomised controlled trials from the bibliographic databases-The Cochrane Library (1970 to July 2012), EMBASE, (1970 to July 2012) MEDLINE (1970 to July 2012) and PsycINFO (1970 to July 2012).\nSelection criteria\nAll randomised controlled trials (RCTs) of CBT versus waiting list, active control conditions, TAU or medication were reviewed. All participants must have met the criteria of the Diagnostic and Statistical Manual (DSM) or the International Classification of Diseases (ICD) for an anxiety diagnosis, excluding simple phobia, obsessive-compulsive disorder, post-traumatic stress disorder and elective mutism.\nData collection and analysis\nThe methodological quality of included trials was assessed by three reviewers independently. For the dichotomous outcome of remission of anxiety diagnosis, the odds ratio (OR) with 95% confidence interval (CI) based on the random-effects model, with pooling of data via the inverse variance method of weighting, was used. Significance was set at P < 0.05. Continuous data on each child’s anxiety symptoms were pooled using the standardised mean difference (SMD).\nMain results\nForty-one studies consisting of 1806 participants were included in the analyses. The studies involved children and adolescents with anxiety of mild to moderate severity in university and community clinics and school settings. For the primary outcome of remission of any anxiety diagnosis for CBT versus waiting list controls, intention-to-treat (ITT) analyses with 26 studies and 1350 participants showed an OR of 7.85 (95% CI 5.31 to 11.60, Z = 10.34, P < 0.0001), but with evidence of moderate heterogeneity (P = 0.05, I² = 30%). The number needed to treat (NNT) was 3.0 (95% CI 1.75 to 3.03). No difference in outcome was noted between individual, group and family/parental formats. ITT analyses revealed that CBT was no more effective than non-CBT active control treatments (six studies, 426 participants) or TAU in reducing anxiety diagnoses (two studies, 88 participants). 
The few controlled follow-up studies (n = 4) indicate that treatment gains in the remission of anxiety diagnosis are not statistically significant.\nAuthors' conclusions\nCognitive behavioural therapy is an effective treatment for childhood and adolescent anxiety disorders; however, the evidence suggesting that CBT is more effective than active controls or TAU or medication at follow-up, is limited and inconclusive.\n\nGiven the provided abstract, please respond to the following query: \"Which studies were included in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Search databases were used to find all high-quality studies of CBT for anxiety in children and young people published between 1970 and July 2012. To be included in the review, studies had to be randomised controlled trials and had to include children and young people with a clear diagnosis of anxiety.\nForty-one studies with a total of 1806 participants were included in the review. The review authors rated the overall quality of the studies as 'moderate'."
}
] |
query-laysum | 1094 | [
{
"role": "user",
"content": "Abstract: Background\nThere is great global variation in the sleeping arrangements for healthy newborn infants. Bed sharing is a type of sleeping practice in which the sleeping surface (e.g. bed, couch or armchair, or some other sleeping surface) is shared between the infant and another person. The possible physiological benefits include better oxygen and cardiopulmonary stability, fewer crying episodes, less risk of hypothermia, and a longer duration of breastfeeding. On the other hand, the most important harmful effect of bed sharing is that it may increase the risk of sudden infant death syndrome (SIDS). Studies have found conflicting evidence regarding the safety and efficacy of bed sharing during infancy.\nObjectives\nTo evaluate the efficacy and safety of bed sharing, started during the neonatal period, on breastfeeding status (exclusive and total duration of breastfeeding), incidence of SIDS, rates of hypothermia, neonatal and infant mortality, and long-term neurodevelopmental outcomes.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL 2020, Issue 7) in the Cochrane Library; MEDLINE via PubMed (1966 to 23 July 2020), CINAHL (1982 to 23 July 2020), and LILACS (1980 to 23 July 2020). We also searched clinical trials databases, and the reference lists of retrieved articles, for randomised controlled trials (RCTs) and quasi-RCTS.\nSelection criteria\nWe planned to include RCTs or quasi-RCTs (including cluster-randomised trials) that included term neonates initiated on bed sharing within 24 hours of birth (and continuing to bed share with the mother in the first four weeks of life, followed by a variable time period thereafter), and compared them to a 'no bed sharing' group.\nData collection and analysis\nWe used standard methodological procedures as recommended by Cochrane. We planned to use the GRADE approach to assess the certainty of evidence.\nMain results\nOur search strategy yielded 6231 records. After removal of duplicate records, we screened 2745 records by title and abstract. We excluded 2739 records that did not match our inclusion criteria. We obtained six full-text studies for assessment. These six studies did not meet the eligibility criteria and were excluded.\nAuthors' conclusions\nWe did not find any studies that met our inclusion criteria. There is a need for RCTs on bed sharing in healthy term neonates that directly assess efficacy (i.e. studies in a controlled setting, like hospital) or effectiveness (i.e. studies conducted in community or home settings) and safety. Future studies should assess outcomes such as breastfeeding status and risk of SIDS. They should also include neonates from high-income countries and low- and middle-income countries, especially those countries where bed sharing is more prevalent because of cultural practices (e.g. Asian countries).\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "'bed sharing' is a type of sleeping practice in which the sleeping surface (e.g. bed, couch or armchair, or some other sleeping surface) is shared between the infant and another person. The possible reasons that families choose to bed share include: ease in breastfeeding; temperature regulation (avoidance of hypothermia); spending quality time with the infant; helping the infant sleep and being able to easily comfort the infant in case they become agitated; being able to attend to them quickly in case of any mishap; providing close care during an illness; and promotion of bonding. However, for many families worldwide, the practice of bed sharing is not a choice. In high-income countries, bed sharing is regarded as a controversial practice, and has drawn special attention with regard to its role in sudden infant death syndrome (SIDS). But in low- and middle-income countries, bed sharing has been the standard practice for many groups who could not afford different sleeping surfaces. In these less rich societies, bed sharing is believed to contribute to: longer durations of breastfeeding; increased time and duration of infant arousals; decreased time and duration spent in deep sleep; and increases in the mother’s awareness of the infant's condition."
}
] |
query-laysum | 1095 | [
{
"role": "user",
"content": "Abstract: Background\nThis review replaces part of an earlier review that evaluated gabapentin for both neuropathic pain and fibromyalgia, now split into separate reviews for the two conditions. This review will consider pain in fibromyalgia only.\nFibromyalgia is associated with widespread pain lasting longer than three months, and is frequently associated with symptoms such as poor sleep, fatigue, depression, and reduced quality of life. Fibromyalgia is more common in women.\nGabapentin is an antiepileptic drug widely licensed for treatment of neuropathic pain. It is not licensed for the treatment of fibromyalgia, but is commonly used because fibromyalgia can respond to the same medicines as neuropathic pain.\nObjectives\nTo assess the analgesic efficacy of gabapentin for fibromyalgia pain in adults and the adverse events associated with its use in clinical trials.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Register of Studies Online, MEDLINE via Ovid and Embase via Ovid from inception to 24 May 2016. We also searched the reference lists of retrieved studies and reviews, and searched online clinical trial registries.\nSelection criteria\nRandomised, double-blind trials of eight weeks' duration or longer for treating fibromyalgia pain in adults, comparing gabapentin with placebo or an active comparator.\nData collection and analysis\nTwo independent review authors extracted data and assessed trial quality and risk of bias. We planned to use dichotomous data to calculate risk ratio and number needed to treat for one additional event, using standard methods. We assessed the evidence using GRADE (Grading of Recommendations Assessment, Development and Evaluation) and created a 'Summary of findings' table.\nMain results\nTwo studies tested gabapentin to treat fibromyalgia pain. One was identified in previous versions of the review and is included here. We identified another study as a conference abstract, with insufficient detail to determine eligibility for inclusion; it is awaiting assessment. The one included study of 150 participants was a 12-week, multi-centre, randomised, double-blind, placebo-controlled, parallel-group study using last-observation-carried-forward imputation for withdrawals. The maximum dose was 2400 mg daily. The overall risk of bias was low, except for attrition bias.\nAt the end of the trial, the outcome of 50% reduction in pain over baseline was not reported. The outcome of 30% or greater reduction in pain over baseline was achieved by 38/75 participants (49%) with gabapentin compared with 23/75 (31%) with placebo (very low quality). A patient global impression of change any category of \"better\" was achieved by 68/75 (91%) with gabapentin and 35/75 (47%) with placebo (very low quality).\nNineteen participants discontinued the study because of adverse events: 12 in the gabapentin group (16%) and 7 in the placebo group (9%) (very low quality). The number of serious adverse events were not reported, and no deaths were reported (very low quality).\nAuthors' conclusions\nWe have only very low quality evidence and are very uncertain about estimates of benefit and harm because of a small amount of data from a single trial. There is insufficient evidence to support or refute the suggestion that gabapentin reduces pain in fibromyalgia.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We rated the quality of the evidence as very low because there was only a single small study with important study limitations. Several factors reduced our confidence in the result. Very low quality evidence means that we are very uncertain about the results."
}
] |
query-laysum | 1096 | [
{
"role": "user",
"content": "Abstract: Background\nHealth organisations recommend exclusive breastfeeding for six months. However, the addition of other fluids or foods before six months is common in many countries. Recently, research has suggested that introducing solid food at around four months of age while the baby continues to breastfeed is more protective against developing food allergies compared to exclusive breastfeeding for six months. Other studies have shown that the risks associated with non-exclusive breastfeeding are dependent on the type of additional food or fluid given. Given this background we felt it was important to update the previous version of this review to incorporate the latest findings from studies examining exclusive compared to non-exclusive breastfeeding.\nObjectives\nTo assess the benefits and harms of additional food or fluid for full-term healthy breastfeeding infants and to examine the timing and type of additional food or fluid.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (1 March 2016) and reference lists of all relevant retrieved papers.\nSelection criteria\nRandomised or quasi-randomised controlled trials in infants under six months of age comparing exclusive breastfeeding versus breastfeeding with any additional food or fluids.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. Two review authors assessed the quality of the evidence using the GRADE approach.\nMain results\nWe included 11 trials (2542 randomised infants/mothers). Nine trials (2226 analysed) provided data on outcomes of interest to this review. The variation in outcome measures and time points made it difficult to pool results from trials. Data could only be combined in a meta-analysis for one primary (breastfeeding duration) and one secondary (weight change) outcome. None of the trials reported on physiological jaundice. Infant mortality was only reported in one trial.\nFor the majority of older trials, the description of study methods was inadequate to assess the risk of bias. Most studies that we could assess showed a high risk of other biases and over half were at high risk of selection bias.\nProviding breastfeeding infants with artifical milk, compared to exclusive breastfeeding, did not affect rates of breastfeeding at hospital discharge (risk ratio (RR) 1.02, 95% confidence interval (CI) 0.97 to 1.08; one trial, 100 infants; low-quality evidence). At three months, breastfeeding infants who were provided with artificial milk had higher rates of any breastfeeding compared to exclusively breastfeeding infants (RR 1.21, 95% CI 1.05 to 1.41; two trials, 137 infants; low-quality evidence). Infants who were given artifical milk in the first few days after birth before breastfeeding, had less \"obvious or probable symptoms\" of allergy compared to exclusively breastfeeding infants (RR 0.56, 95% CI 0.35 to 0.91; one trial, 207 infants; very low-quality evidence). No difference was found in maternal confidence when comparing non-exclusive breastfeeding infants who were provided with artificial milk with exclusive breastfeeding infants (mean difference (MD) 0.10, 95% CI -0.34 to 0.54; one study, 39 infants; low-quality evidence). 
Rates of breastfeeding were lower in the non-exclusive breastfeeding group compared to the exclusive breastfeeding group at four, eight, 12 (RR 0.68, 95% CI 0.53 to 0.87; one trial, 170 infants; low-quality evidence), 16 and 20 weeks.\nThe addition of glucose water resulted in fewer episodes of hypoglycaemia (below 2.2 mmol/L) compared to the exclusive breastfeeding group, reported at 12 hours (RR 0.07, 95% CI 0.00 to 1.20; one trial, 170 infants; very low-quality evidence), but no significant difference at 24 hours (RR 1.57, 95% CI 0.27 to 9.17; one trial, 170 infants; very low-quality evidence). Weight loss was lower for infants who received additional glucose water (one trial, 170 infants) at six, 12, 24 and 48 hours of life (MD -32.50 g, 95% CI -52.09 to -12.91; low-quality evidence) compared to the exclusively breastfeeding infants but no difference between groups was observed at 72 hours of life (MD 3.00 g, 95% CI -20.83 to 26.83; very low-quality evidence). In another trial with the water and glucose water arms combined (one trial, 47 infants), we found no significant difference in weight loss between the additional fluid group and the exclusively breastfeeding group on either day three or day five (MD -1.03%, 95% CI -2.24 to 0.18; very low-quality evidence) and (MD -0.20%, 95% CI -0.86 to 0.46; very low-quality evidence).\nInfant mortality was reported in one trial with no deaths occurring in either group (1162 infants). The early introduction of potentially allergenic foods, compared to exclusively breastfeeding, did not reduce the risk of \"food allergy\" to one or more of these foods between one to three years of age (RR 0.80, 95% CI 0.51 to 1.25; 1162 children), visible eczema at 12 months stratified by visible eczema at enrolment (RR 0.86, 95% CI 0.51 to 1.44; 284 children), or food protein-induced enterocolitis syndrome reactions (RR 2.00, 95% CI 0.18 to 22.04; 1303 children) (all moderate-quality evidence). Breastfeeding infants receiving additional foods from four months showed no difference in infant weight gain (g) from 16 to 26 weeks compared to exclusive breastfeeding to six months (MD -39.48, 95% CI -128.43 to 49.48; two trials, 260 children; low-quality evidence) or weight z-scores (MD -0.01, 95% CI -0.15 to 0.13; one trial, 100 children; moderate-quality evidence).\nAuthors' conclusions\nWe found no evidence of benefit to newborn infants on the duration of breastfeeding from the brief use of additional water or glucose water. The quality of the evidence on formula supplementation was insufficient to suggest a change in practice away from exclusive breastfeeding. For infants at four to six months, we found no evidence of benefit from additional foods nor any risks related to morbidity or weight change. The majority of studies showed high risk of other bias and most outcomes were based on low-quality evidence which meant that we were unable to fully assess the benefits or harms of supplementation or to determine the impact from timing and type of supplementation.\nWe found no evidence to disagree with the current international recommendation that healthy infants exclusively breastfeed for the first six months.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review includes 11 randomised controlled trials involving 2542 infants.\nGiving babies small amounts of artificial milk for a few days after birth in addition to breastfeeding did not effect the number of infants with any breastfeeding at hospital discharge though did slightly increase the likelihood of any breastfeeding at three months of age. There was no difference in the level of maternal confidence between the groups. The use of artificial milk before breastfeeding had a slight protective effect against allergy symptoms at 18 months of age compared to exclusive breastfeeding, however the trial did not perform diagnostic challenges or other tests to confirm the allergy symptoms noticed and thus requires caution in interpretation.\nThe likelihood of infants continuing to breastfeed was higher in the exclusive breastfeeding group than the group provided with additional water or glucose water in the first few days after birth. There was no evidence of benefit related to glucose levels, temperature, weight loss to breastfeeding newborn infants who were given additional water or glucose water.\nBreastfeeding infants receiving complementary foods at four to six months did not show reduced risk of food allergy, eczema, or food protein-induced enterocolitis syndrome reactions. There was no difference between the early complementary foods and the exclusive breastfeeding groups for the percentage of days of cough, congestion, nasal discharge and hoarseness, fever, iron deficiency or weight gain."
}
] |
query-laysum | 1097 | [
{
"role": "user",
"content": "Abstract: Background\nThe projected rise in the incidence of type 2 diabetes mellitus (T2DM) could develop into a substantial health problem worldwide. Whether metformin can prevent or delay T2DM and its complications in people with increased risk of developing T2DM is unknown.\nObjectives\nTo assess the effects of metformin for the prevention or delay of T2DM and its associated complications in persons at increased risk for the T2DM.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials, MEDLINE, Scopus, ClinicalTrials.gov, the World Health Organization (WHO) International Clinical Trials Registry Platform and the reference lists of systematic reviews, articles and health technology assessment reports. We asked investigators of the included trials for information about additional trials. The date of the last search of all databases was March 2019.\nSelection criteria\nWe included randomised controlled trials (RCTs) with a duration of one year or more comparing metformin with any pharmacological glucose-lowering intervention, behaviour-changing intervention, placebo or standard care in people with impaired glucose tolerance, impaired fasting glucose, moderately elevated glycosylated haemoglobin A1c (HbA1c) or combinations of these.\nData collection and analysis\nTwo review authors read all abstracts and full-text articles and records, assessed risk of bias and extracted outcome data independently. We used a random-effects model to perform meta-analysis and calculated risk ratios (RRs) for dichotomous outcomes and mean differences (MDs) for continuous outcomes, using 95% confidence intervals (CIs) for effect estimates. We assessed the certainty of the evidence using GRADE.\nMain results\nWe included 20 RCTs randomising 6774 participants. One trial contributed 48% of all participants. The duration of intervention in the trials varied from one to five years. We judged none of the trials to be at low risk of bias in all 'Risk of bias' domains.\nOur main outcome measures were all-cause mortality, incidence of T2DM, serious adverse events (SAEs), cardiovascular mortality, non-fatal myocardial infarction or stroke, health-related quality of life and socioeconomic effects.The following comparisons mostly reported only a fraction of our main outcome set.\nFifteen RCTs compared metformin with diet and exercise with or without placebo: all-cause mortality was 7/1353 versus 7/1480 (RR 1.11, 95% CI 0.41 to 3.01; P = 0.83; 2833 participants, 5 trials; very low-quality evidence); incidence of T2DM was 324/1751 versus 529/1881 participants (RR 0.50, 95% CI 0.38 to 0.65; P < 0.001; 3632 participants, 12 trials; moderate-quality evidence); the reporting of SAEs was insufficient and diverse and meta-analysis could not be performed (reported numbers were 4/118 versus 2/191; 309 participants; 4 trials; very low-quality evidence); cardiovascular mortality was 1/1073 versus 4/1082 (2416 participants; 2 trials; very low-quality evidence). One trial reported no clear difference in health-related quality of life after 3.2 years of follow-up (very low-quality evidence). 
Two trials estimated the direct medical costs (DMC) per participant for metformin varying from $220 to $1177 versus $61 to $184 in the comparator group (2416 participants; 2 trials; low-quality evidence).\nEight RCTs compared metformin with intensive diet and exercise: all-cause mortality was 7/1278 versus 4/1272 (RR 1.61, 95% CI 0.50 to 5.23; P = 0.43; 2550 participants, 4 trials; very low-quality evidence); incidence of T2DM was 304/1455 versus 251/1505 (RR 0.80, 95% CI 0.47 to 1.37; P = 0.42; 2960 participants, 7 trials; moderate-quality evidence); the reporting of SAEs was sparse and meta-analysis could not be performed (one trial reported 1/44 in the metformin group versus 0/36 in the intensive exercise and diet group with SAEs). One trial reported that 1/1073 participants in the metformin group compared with 2/1079 participants in the comparator group died from cardiovascular causes. One trial reported that no participant died due to cardiovascular causes (very low-quality evidence). Two trials estimated the DMC per participant for metformin varying from $220 to $1177 versus $225 to $3628 in the comparator group (2400 participants; 2 trials; very low-quality evidence).\nThree RCTs compared metformin with acarbose: all-cause mortality was 1/44 versus 0/45 (89 participants; 1 trial; very low-quality evidence); incidence of T2DM was 12/147 versus 7/148 (RR 1.72, 95% CI 0.72 to 4.14; P = 0.22; 295 participants; 3 trials; low-quality evidence); SAEs were 1/51 versus 2/50 (101 participants; 1 trial; very low-quality evidence).\nThree RCTs compared metformin with thiazolidinediones: incidence of T2DM was 9/161 versus 9/159 (RR 0.99, 95% CI 0.41 to 2.40; P = 0.98; 320 participants; 3 trials; low-quality evidence). SAEs were 3/45 versus 0/41 (86 participants; 1 trial; very low-quality evidence).\nThree RCTs compared metformin plus intensive diet and exercise with identical intensive diet and exercise: all-cause mortality was 1/121 versus 1/120 participants (450 participants; 2 trials; very low-quality evidence); incidence of T2DM was 48/166 versus 53/166 (RR 0.55, 95% CI 0.10 to 2.92; P = 0.49; 332 participants; 2 trials; very low-quality evidence). One trial estimated the DMC of metformin plus intensive diet and exercise to be $270 per participant compared with $225 in the comparator group (94 participants; 1 trial; very-low quality evidence).\nOne trial in 45 participants compared metformin with a sulphonylurea. The trial reported no patient-important outcomes.\nFor all comparisons there were no data on non-fatal myocardial infarction, non-fatal stroke or microvascular complications.\nWe identified 11 ongoing trials which potentially could provide data of interest for this review. These trials will add a total of 17,853 participants in future updates of this review.\nAuthors' conclusions\nMetformin compared with placebo or diet and exercise reduced or delayed the risk of T2DM in people at increased risk for the development of T2DM (moderate-quality evidence). However, metformin compared to intensive diet and exercise did not reduce or delay the risk of T2DM (moderate-quality evidence). Likewise, the combination of metformin and intensive diet and exercise compared to intensive diet and exercise only neither showed an advantage or disadvantage regarding the development of T2DM (very low-quality evidence). 
Data on patient-important outcomes such as mortality, macrovascular and microvascular diabetic complications and health-related quality of life were sparse or missing.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Fifteen studies compared metformin against diet and exercise. Eight studies compared metformin against intensive diet and exercise and three studies compared metformin plus intensive diet and exercise against intensive diet and exercise only. When compared to standard diet and exercise metformin slightly reduces or delays development of diabetes. However, when compared to intensive diet and exercise, metformin does not provide an additional benefit in reducing or delaying development of diabetes.\nSeven studies compared metformin with another glucose-lowering drug: three studies compared metformin with acarbose. Three studies compared metformin with a thiazolidinedione (such as pioglitazone). There was neither an advantage or disadvantage when comparing metformin with these drugs with respect to the development of diabetes. One study compared metformin with a sulphonylurea (glimepiride). The trial did not report patient-important outcomes.\nIn general, the reporting of serious side effects was sparse. Few participants died and we did not detect a clear difference between the intervention and comparator groups. We also did not detect an advantage or disadvantage of metformin in relation to health-related quality of life. Our included studies did not report on non-fatal heart attacks, strokes or complications of diabetes such as kidney or eye disease. Few studies estimated the direct medical costs. When compared to diet and exercise, metformin was more expensive. When compared to intensive diet and exercise, metformin was less expensive.\nWe identified 11 ongoing studies which potentially could provide data for this review. These studies will add a total of 17,853 participants in future updates of our review.\nFuture studies should investigate more patient-important outcomes such as complications of diabetes and especially the side effects of the drugs. We do not know whether 'prediabetes' is just a condition defined by laboratory measurements, or whether it is in fact a real risk factor for diabetes. It is also unknown whether treatment of this condition translates into better patient-important outcomes."
}
] |
query-laysum | 1098 | [
{
"role": "user",
"content": "Abstract: Background\nDeep vein thrombosis (DVT) is a condition in which a clot forms in the deep veins, most commonly of the leg. It occurs in approximately one in 1000 people. If left untreated, the clot can travel up to the lungs and cause a potentially life-threatening pulmonary embolism (PE). Previously, a DVT was treated with the anticoagulants heparin and vitamin K antagonists. However, two forms of direct oral anticoagulants (DOACs) have been developed: oral direct thrombin inhibitors (DTIs) and oral factor Xa inhibitors, which have characteristics that may be favourable compared to conventional treatment, including oral administration, a predictable effect, lack of frequent monitoring or dose adjustment and few known drug interactions. DOACs are now commonly being used for treating DVT: recent guidelines recommended DOACs over conventional anticoagulants for both DVT and PE treatment. This Cochrane Review was first published in 2015. It was the first systematic review to measure the effectiveness and safety of these drugs in the treatment of DVT. This is an update of the 2015 review.\nObjectives\nTo assess the effectiveness and safety of oral DTIs and oral factor Xa inhibitors versus conventional anticoagulants for the long-term treatment of DVT.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase and CINAHL databases and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 1 March 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) in which people with a DVT, confirmed by standard imaging techniques, were allocated to receive an oral DTI or an oral factor Xa inhibitor compared with conventional anticoagulation or compared with each other for the treatment of DVT.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were recurrent venous thromboembolism (VTE), recurrent DVT and PE. Secondary outcomes included all-cause mortality, major bleeding, post-thrombotic syndrome (PTS) and quality of life (QoL). We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe identified 10 new studies with 2950 participants for this update. In total, we included 21 RCTs involving 30,895 participants. Three studies investigated oral DTIs (two dabigatran and one ximelagatran), 17 investigated oral factor Xa inhibitors (eight rivaroxaban, five apixaban and four edoxaban) and one three-arm trial investigated both a DTI (dabigatran) and factor Xa inhibitor (rivaroxaban). Overall, the studies were of good methodological quality.\nMeta-analysis comparing DTIs to conventional anticoagulation showed no clear difference in the rate of recurrent VTE (odds ratio (OR) 1.17, 95% confidence interval (CI) 0.83 to 1.65; 3 studies, 5994 participants; moderate-certainty evidence), recurrent DVT (OR 1.11, 95% CI 0.74 to 1.66; 3 studies, 5994 participants; moderate-certainty evidence), fatal PE (OR 1.32, 95% CI 0.29 to 6.02; 3 studies, 5994 participants; moderate-certainty evidence), non-fatal PE (OR 1.29, 95% CI 0.64 to 2.59; 3 studies, 5994 participants; moderate-certainty evidence) or all-cause mortality (OR 0.66, 95% CI 0.41 to 1.08; 1 study, 2489 participants; moderate-certainty evidence). 
DTIs reduced the rate of major bleeding (OR 0.58, 95% CI 0.38 to 0.89; 3 studies, 5994 participants; high-certainty evidence).\nFor oral factor Xa inhibitors compared with conventional anticoagulation, meta-analysis demonstrated no clear difference in recurrent VTE (OR 0.85, 95% CI 0.71 to 1.01; 13 studies, 17,505 participants; moderate-certainty evidence), recurrent DVT (OR 0.70, 95% CI 0.49 to 1.01; 9 studies, 16,439 participants; moderate-certainty evidence), fatal PE (OR 1.18, 95% CI 0.69 to 2.02; 6 studies, 15,082 participants; moderate-certainty evidence), non-fatal PE (OR 0.93, 95% CI 0.68 to 1.27; 7 studies, 15,166 participants; moderate-certainty evidence) or all-cause mortality (OR 0.87, 95% CI 0.67 to 1.14; 9 studies, 10,770 participants; moderate-certainty evidence). Meta-analysis showed a reduced rate of major bleeding with oral factor Xa inhibitors compared with conventional anticoagulation (OR 0.63, 95% CI 0.45 to 0.89; 17 studies, 18,066 participants; high-certainty evidence).\nAuthors' conclusions\nThe current review suggests that DOACs may be superior to conventional therapy in terms of safety (major bleeding), and are probably equivalent in terms of efficacy. There is probably little or no difference between DOACs and conventional anticoagulation in the prevention of recurrent VTE, recurrent DVT, pulmonary embolism and all-cause mortality. DOACs reduced the rate of major bleeding compared to conventional anticoagulation. The certainty of evidence was moderate or high.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies in which people with a confirmed DVT were randomly allocated to one of two treatment groups. These types of studies give the most reliable evidence about treatment effects. People in the experimental groups received an oral direct thrombin inhibitor or an oral factor Xa inhibitor, and their results were compared to the results of people given conventional anticoagulation. All participants were given long-term treatment of DVT (minimum duration of 3 months)."
}
] |
query-laysum | 1099 | [
{
"role": "user",
"content": "Abstract: Background\nThe World Health Organization (WHO) recommends that people of all ages take regular and adequate physical activity. If unable to meet the recommendations due to health conditions, international guidance advises being as physically active as possible. Evidence from community interventions of physical activity indicate that people living with medical conditions are sometimes excluded from participation in studies. In this review, we considered the effects of activity-promoting interventions on physical activity and well-being in studies, as well as any adverse events experienced by participants living with inherited or acquired neuromuscular diseases (NMDs).\nObjectives\nTo assess the effects of interventions designed to promote physical activity in people with NMD compared with no intervention or alternative interventions.\nSearch methods\nOn 30 April 2020, we searched Cochrane Neuromuscular Specialised Register, CENTRAL, Embase, MEDLINE, and ClinicalTrials.Gov. WHO ICTRP was not accessible at the time.\nSelection criteria\nWe considered randomised or quasi-randomised trials, including cross-over trials, of interventions designed to promote physical activity in people with NMD compared to no intervention or alternative interventions. We specifically included studies that reported physical activity as an outcome measure. Our main focus was studies in which promoting physical activity was a stated aim but we also included studies in which physical activity was assessed as a secondary or exploratory outcome.\nData collection and analysis\nWe used standard Cochrane procedures.\nMain results\nThe review included 13 studies (795 randomised participants from 12 studies; number of participants unclear in one study) of different interventions to promote physical activity. Most studies randomised a minority of invited participants. No study involved children or adolescents and nine studies reported minimal entry criteria for walking. Participants had one of nine inherited or acquired NMDs.\nTypes of intervention included structured physical activity support, exercise support (as a specific form of physical activity), and behaviour change support that included physical activity or exercise. Only one included study clearly reported that the aim of intervention was to increase physical activity. Other studies reported or planned to analyse the effects of intervention on physical activity as a secondary or exploratory outcome measure. Six studies did not report results for physical activity outcomes, or the data were not usable. We judged 10 of the 13 included studies at high or unclear risk of bias from incomplete physical activity outcome reporting.\nWe did not perform a meta-analysis for any comparison because of differences in interventions and in usual care. We also found considerable variation in how studies reported physical activity as an outcome measure. The studies that reported physical activity measurement did not always clearly report intention-to-treat (ITT) analysis or whether final assessments occurred during or after intervention. Based on prespecified measures, we included three comparisons in our summary of findings.\nA physical activity programme (weight-bearing) compared to no physical activity programme\nOne study involved adults with diabetic peripheral neuropathy (DPN) and reported weekly duration of walking during and at the end of a one-year intervention using a StepWatch ankle accelerometer. 
Based on the point estimate and low-certainty evidence, intervention may have led to an important increase in physical activity per week; however, the 95% confidence interval (CI) included the possibility of no difference or an effect in either direction at three months (mean difference (MD) 34 minutes per week, 95% CI –92.19 to 160.19; 69 participants), six months (MD 68 minutes per week, 95% CI –55.35 to 191.35; 74 participants), and 12 months (MD 49 minutes per week, 95% CI –75.73 to 173.73; 70 participants). Study-reported effect estimates for foot lesions and full-thickness ulcers also included the possibility of no difference, a higher, or lower risk with intervention.\nA sensor-based, interactive exercise programme compared to no sensor-based, interactive exercise programme\nOne study involved adults with DPN and reported duration of walking over 48 hours at the end of four weeks' intervention using a t-shirt embedded PAMSys sensor. It was not possible to draw conclusions about the effectiveness of the intervention from the very low-certainty evidence (MD –0.64 hours per 48 hours, 95% CI –2.42 to 1.13; 25 participants). We were also unable to draw conclusions about impact on the Physical Component Score (PCS) for quality of life (MD 0.24 points, 95% CI –5.98 to 6.46; 35 participants; very low-certainty evidence), although intervention may have made little or no difference to the Mental Component Score (MCS) for quality of life (MD 5.10 points, 95% CI –0.58 to 10.78; 35 participants; low-certainty evidence).\nA functional exercise programme compared to a stretching exercise programme\nOne study involved adults with spinal and bulbar muscular atrophy and reported a daily physical activity count at the end of 12 weeks' intervention using an Actical accelerometer. It was not possible to draw conclusions about the effectiveness of either intervention (requiring compliance) due to low-certainty evidence and unconfirmed measurement units (MD –8701, 95% CI –38,293.30 to 20,891.30; 43 participants). Functional exercise may have made little or no difference to quality of life compared to stretching (PCS: MD –1.10 points, 95% CI –5.22 to 3.02; MCS: MD –1.10 points, 95% CI –6.79 to 4.59; 49 participants; low-certainty evidence).\nAlthough studies reported adverse events incompletely, we found no evidence of supported activity increasing the risk of serious adverse events.\nAuthors' conclusions\nWe found a lack of evidence relating to children, adolescents, and non-ambulant people of any age. Many people living with NMD did not meet randomised controlled trial eligibility criteria. There was variation in the components of supported activity intervention and usual care, such as physical therapy provision. We identified variation among studies in how physical activity was monitored, analysed, and reported. We remain uncertain of the effectiveness of promotional intervention for physical activity and its impact on quality of life and adverse events. More information is needed on the ITT population, as well as more complete reporting of outcomes. While there may be no single objective measure of physical activity, the study of qualitative and dichotomous change in self-reported overall physical activity might offer a pragmatic approach to capturing important change at an individual and population level.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Physical activity includes any type of everyday activity, such as work, travel, or exercise. Based on well-known health benefits, international guidance recommends that everyone does enough regular physical activity. This guidance is relevant to people with health conditions, including neuromuscular diseases, which affect the way that muscles and nerves work. However, many people with neuromuscular disease live with disability and face barriers to being more physically active. Research studies also sometimes exclude people who have health conditions."
}
] |
query-laysum | 1100 | [
{
"role": "user",
"content": "Abstract: Background\nInflammatory bowel disease (IBD) is an umbrella term used to describe a group of chronic, progressive inflammatory disorders of the digestive tract. Crohn's disease and ulcerative colitis are the two main types. Fatigue is a common, debilitating and burdensome symptom experienced by individuals with IBD. The subjective, complex nature of fatigue can often hamper its management. The efficacy and safety of pharmacological or non-pharmacological treatments for fatigue in IBD is not yet established through systematic review of studies.\nObjectives\nTo assess the efficacy and safety of pharmacological and non-pharmacological interventions for managing fatigue in IBD compared to no treatment, placebo or active comparator.\nSearch methods\nA systematic search of the databases Embase, MEDLINE, Cochrane Library, CINAHL, PsycINFO was undertaken from inception to July 2018. A top-up search was run in October 2019. We also searched the Cochrane IBD Group Specialized Register, the Cochrane Central Register of Controlled Trials, ongoing trials and research registers, conference abstracts and reference lists for potentially eligible studies.\nSelection criteria\nRandomised controlled trials of pharmacological and non-pharmacological interventions in children or adults with IBD, where fatigue was assessed as a primary or secondary outcome using a generic or disease-specific fatigue measure, a subscale of a larger quality of life scale or as a single-item measure, were included.\nData collection and analysis\nTwo authors independently screened search results and four authors extracted and assessed bias independently using the Cochrane 'Risk of bias' tool. The primary outcome was fatigue and the secondary outcomes included quality of life, adverse events (AEs), serious AEs and withdrawal due to AEs. Standard methodological procedures were used.\nMain results\nWe included 14 studies (3741 participants): nine trials of pharmacological interventions and five trials of non-pharmacological interventions. Thirty ongoing studies were identified, and five studies are awaiting classification. Data on fatigue were available from nine trials (1344 participants). In only four trials was managing fatigue the primary intention of the intervention (electroacupuncture, physical activity advice, cognitive behavioural therapy and solution-focused therapy).\nElectroacupuncture\nFatigue was measured with Functional Assessment of Chronic Illness Therapy - Fatigue (FACIT-F) (scores range from 0 to 52). The FACIT-F score at week eight was 8.00 points higher (better) in participants receiving electroacupuncture compared with no treatment (mean difference (MD) 8.00, 95% CI 6.45 to 9.55; 1 RCT; 27 participants; low-certainty evidence). Results at week 16 could not be calculated. FACIT-F scores were also higher with electroacupuncture compared to sham electroacupuncture at week eight (MD 5.10, 95% CI 3.49 to 6.71; 1 RCT; 30 participants; low-certainty evidence) but not at week 16 (MD 2.60, 95% CI 0.74 to 4.46; 1 RCT; 30 participants; low-certainty evidence). 
No adverse events were reported, except for one adverse event in the sham electroacupuncture group.\nCognitive behavioural therapy (CBT) and solution-focused therapy\nCompared with a fatigue information leaflet, the effects of CBT on fatigue are very uncertain (Inflammatory Bowel Disease-Fatigue (IBD-F) section I: MD -2.16, 95% CI -6.13 to 1.81; IBD-F section II: MD -21.62, 95% CI -45.02 to 1.78; 1 RCT, 18 participants, very low-certainty evidence). The efficacy of solution-focused therapy on fatigue is also very uncertain, because standard summary data were not reported (1 RCT, 98 participants).\nPhysical activity advice\nOne 2 x 2 factorial trial (45 participants) found physical activity advice may reduce fatigue but the evidence is very uncertain. At week 12, compared to a control group receiving no physical activity advice plus omega 3 capsules, FACIT-F scores were higher (better) in the physical activity advice plus omega 3 group (FACIT-F MD 6.40, 95% CI -1.80 to 14.60, very low-certainty evidence) and the physical activity advice plus placebo group (FACIT-F MD 9.00, 95% CI 1.64 to 16.36, very low-certainty evidence). Adverse events were predominantly gastrointestinal and similar across physical activity groups, although more adverse events were reported in the no physical activity advice plus omega 3 group.\nPharmacological interventions\nCompared with placebo, adalimumab 40 mg, administered every other week ('eow') (only for those known to respond to adalimumab induction therapy), may reduce fatigue in patients with moderately-to-severely active Crohn's disease, but the evidence is very uncertain (FACIT-F MD 4.30, 95% CI 1.75 to 6.85; very low-certainty evidence). The adalimumab 40 mg eow group was less likely to experience serious adverse events (OR 0.56, 95% CI 0.33 to 0.96; 521 participants; moderate-certainty evidence) and withdrawal due to adverse events (OR 0.48, 95%CI 0.26 to 0.87; 521 participants; moderate-certainty evidence).\nFerric maltol may result in a slight increase in fatigue, with better SF-36 vitality scores reported in the placebo group compared to the treatment group following 12 weeks of treatment (MD -9.31, 95% CI -17.15 to -1.47; 118 participants; low-certainty evidence). There may be little or no difference in adverse events (OR 0.55, 95% CI 0.26 to 1.18; 120 participants; low-certainty evidence)\nAuthors' conclusions\nThe effects of interventions for the management of fatigue in IBD are uncertain. No firm conclusions regarding the efficacy and safety of interventions can be drawn. Further high-quality studies, with a larger number of participants, are required to assess the potential benefits and harms of therapies. Future studies should assess interventions specifically designed for fatigue management, targeted at selected IBD populations, and measure fatigue as the primary outcome.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The effects of interventions for the management of fatigue on IBD are uncertain, with limited evidence available. No firm conclusions regarding the benefits and harms (e.g. side effects) can be drawn, Further high-quality studies, with a larger number of participants, are needed to determine the potential effect of treatments on fatigue in IBD. Future studies should assess fatigue as a primary outcome, be specifically designed for fatigue management and targeted at specific IBD populations."
}
] |
query-laysum | 1101 | [
{
"role": "user",
"content": "Abstract: Background\nHodgkin lymphoma (HL) is a B-cell lymphoma accounting for 10% to 15% of all lymphoma in industrialised countries. It has a bimodal age distribution with one peak around the age of 30 years and another after the age of 60 years. Although HL accounts for fewer than 1% of all neoplasms worldwide, it is considered to be one of the most common malignancies in young adults and, with cure rates of 90%, one of the most curable cancers worldwide. Current treatment options for HL comprise more- or less-intensified regimens of chemotherapy plus radiotherapy, depending on disease stage. [18F]-fluorodeoxy-D-glucose (FDG)-positron emission tomography (PET, also called PET scanning) is an imaging tool that can be used to illustrate a tumour's metabolic activity, stage and progression. Therefore, it could be used as a standard interim procedure during HL treatment, to help distinguish between individuals who are good or poor early responders to therapy. Subsequent therapy could then be de-escalated in PET-negative individuals (good responders) or escalated in those who are PET-positive (poor responders). It is currently unknown whether such response-adapted therapeutic strategies are of benefit to individuals in terms of overall and progression-free survival, and the incidence of long-term adverse events (AEs).\nObjectives\nTo assess the effects of interim [18F]-FDG-PET imaging treatment modification in individuals with HL.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; latest issue) and MEDLINE (from 1990 to September 2014) as well as conference proceedings (American Society of Hematology; American Society of Clinical Oncology; European Hematology Association; and International Symposium on Hodgkin Lymphoma) for studies. Two review authors independently screened search results.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing FDG-PET-adapted therapy with non-adapted treatment in individuals with previously untreated HL of all stages and ages.\nData collection and analysis\nTwo review authors independently extracted data and assessed the quality of trials. As none of the included studies provided HRs for OS, we described risk ratios (RRs) for this outcome and did not pool the data. As an effect measure we used hazard ratios (HRs) for progression-free survival (PFS). We described RRs for the dichotomous data on AEs. We also calculated 95% confidence intervals (CIs).\nMain results\nOur search strategies led to 308 potentially relevant references. From these, we included three studies involving 1999 participants. We judged the overall potential risk of bias as moderate. The studies were reported as RCTs; blinding was not reported, but given the study design it is likely that there was no blinding. One study was published in abstract form only; hence, detailed assessment of the risk of bias was not possible.\nTwo trials compared standard treatment (chemotherapy plus radiotherapy) with PET-adapted therapy (chemotherapy only) in individuals with early-stage HL and negative PET scans. The study design of the third trial was more complex. Participants with early-stage HL were divided into those with a favourable or unfavourable prognosis. They were then randomised to receive PET-adapted or standard treatment. Following a PET scan, participants were further divided into PET-positive and PET-negative groups. 
To date, data have been published for the PET-negative arms only, making it possible to perform a meta-analysis including all three trials.\nOf the 1999 participants included in the three trials only 1480 were analysed. The 519 excluded participants were either PET-positive, or were excluded because they did not match the inclusion criteria.\nOne study reported no deaths. The other two studies reported two deaths in participants receiving PET-adapted therapy and two in participants receiving standard therapy (very-low-quality evidence). Progression-free survival was shorter in participants with PET-adapted therapy (without radiotherapy) than in those receiving standard treatment with radiotherapy (HR 2.38; 95% CI 1.62 to 3.50; P value < 0.0001). This difference was also apparent in comparisons of participants receiving no additional radiotherapy (PET-adapted therapy) versus radiotherapy (standard therapy) (HR 1.86; 95% CI 1.07 to 3.23; P value = 0.03) and in those receiving chemotherapy but no radiotherapy (PET-adapted therapy) versus standard radiotherapy (HR 3.00; 95% CI 1.75 to 5.14; P value < 0.0001) (moderate-quality evidence). Short-term AEs only were assessed in one trial, which showed no evidence of a difference between the treatment arms (RR 0.91; 95% CI 0.54 to 1.53; P value = 0.72) (very-low-quality evidence). No data on long-term AEs were reported in any of the trials.\nAuthors' conclusions\nTo date, no robust data on OS, response rate, TRM, QoL, or short- and long-term AEs are available. However, this systematic review found moderate-quality evidence that PFS was shorter in individuals with early-stage HL and a negative PET scan receiving chemotherapy only (PET-adapted therapy) than in those receiving additional radiotherapy (standard therapy). More RCTs with longer follow ups may lead to more precise results for AEs, TRM and QoL, and could evaluate whether this PFS advantage will translate into an overall survival benefit.\nIt is still uncertain whether PET-positive individuals benefit from PET-based treatment adaptation and the effect of such an approach in those with advanced HL.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched important medical databases such as the Cochrane Central Register of Controlled Trials and MEDLINE. Two review authors independently screened, summarised and analysed the results. This lead to the inclusion of three randomised controlled trials (RCTs) with 1999 participants. Currently, only data for 1480 of these participants have been published and were included in this systematic review. Participants were randomised to receive either standard therapy (chemotherapy followed by radiotherapy) or PET-adapted therapy (chemotherapy only). The median age of participants was 32 years and 52% were male.\nThe evidence provided is current to September 2014."
}
] |
query-laysum | 1102 | [
{
"role": "user",
"content": "Abstract: Background\nFractures of the olecranon (the bony tip of the elbow) account for approximately 1% of all upper extremity fractures. Surgical intervention is often required to restore elbow function. Two key methods of surgery are tension band wire fixation and plate fixation.\nObjectives\nTo assess the effects (benefits and harms) of different surgical interventions in the treatment of olecranon fractures in adults.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register (22 September 2014), the Cochrane Central Register of Controlled Trials (CENTRAL, 2014, Issue 8), MEDLINE (1946 to September week 2 2014), EMBASE (1980 to 19 September 2014), trial registers, conference proceedings and reference lists of articles.\nSelection criteria\nRandomised controlled trials (RCT) and quasi-RCTs that compared different surgical interventions for the treatment of olecranon fractures in adults.\nData collection and analysis\nTwo review authors independently performed study selection, risk of bias assessment and data extraction. The primary outcomes of this review were function, pain and adverse events.\nMain results\nWe included six small trials involving 244 adults with olecranon fractures. Of these, four were RCTs and two were quasi-RCTs; both of were at high risk of selection bias. All six trials were at high risk of performance bias, reflecting lack of blinding, and four trials were at high risk of detection bias. The quality of the evidence for most outcomes was generally very low because of limitations in study design and implementation, and either imprecision of the results or inadequate outcome measures. Thus, we are very uncertain about the estimates of effect.\nOne trial (41 participants) comparing plate fixation with standard tension band wiring provided very low quality evidence at 16 to 86 weeks' follow-up of a better clinical outcome after plate fixation (good outcome (little pain or loss of elbow motion): 19/22 versus 9/19, risk ratio (RR) 1.82 favouring plate fixation, 95% confidence interval (CI) 1.10 to 3.01). There was very low quality evidence of less symptomatic prominent metalwork after plate fixation (1/22 versus 8/19; RR 0.11, 95% CI 0.01 to 0.79). The results for other adverse effects (infection and delayed or non-union) were inconclusive. Evidence is pending from a newly (September 2014) completed trial (67 participants) making the same comparison.\nFour trials compared four different modified techniques of tension band wiring (i.e. additional intramedullary screw fixation, biodegradable pins, Netz pins and cable pin system) versus standard tension band wiring. There was very low quality evidence of little difference at six to 14 months in function assessed by a non-validated scoring tool from the addition of an intramedullary screw. However, there were fewer cases of metalwork prominence in the intramedullary screw group (1/15 versus 8/15; RR 2.00, 95% CI 1.15 to 3.49; one trial; 30 participants). There was very low quality evidence from one trial (25 participants) of little difference in subjectively or objectively assessed good outcome at a mean of 20 months between tension band wiring with biodegradable implants versus metal implants. There were no adverse events, either non-union or sinus or fluid accumulation, reported. All 10 participants in the metalwork group had an extra operation to remove their metalwork at one year. 
One trial, which did not report on function or pain, provided very low quality evidence of lower rates of metalwork removal for any reason or for symptoms after Netz pin tension band wiring compared with standard tension band wiring (11/21 with Netz pin versus 17/25 with standard tension band wiring; RR 0.77, 95% CI 0.47 to 1.26; 46 participants); this evidence also supports the possibility of higher rates of metalwork removal for Netz pins. Two intra-operative complications occurred in the Netz pin group. The fourth trial, which compared the cable pin system with the standard procedure, found low quality evidence that the cable pin system improved functional outcome at a mean of 21 months (Mayo Elbow Performance Score (MEPS), range 0 to 100: best outcome: mean difference (MD) 7.89 favouring cable pin, 95% CI 3.14 to 12.64; one trial; 62 participants). It also found low quality evidence of fewer postoperative complications in the cable pin group (1/30 with cable pin system versus 7/32 with standard tension band wiring; RR 0.15, 95% CI 0.02 to 1.17), although the evidence did not rule out the converse.\nOne trial provided very low quality evidence of similar patient-reported function using the Disabilities of the Arm, Shoulder and Hand questionnaire (0 to 100: worst function) at two or more years after fixation using a novel olecranon memory connector (OMC) compared with locking plate fixation (MD -0.70 favouring OMC, 95% CI -4.20 to 2.80; 40 participants). The only adverse event was a superficial infection in the locking plate group.\nAuthors' conclusions\nThere is insufficient evidence to draw robust conclusions on the relative effects of the surgical interventions evaluated by the included trials. Further evidence, including patient-reported data, on the relative effects of plate versus tension band wiring is already pending from one recently completed RCT. Further RCTs, using good quality methods and reporting validated patient-reported measures of function, pain and activities of daily living at set follow-ups, are needed, including checking positive findings such as those relating to the use of an intramedullary screw and the cable pin system. Such trials should also include the systematic assessment of complications, further treatment including routine removal of metalwork, and use of resources.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All six trials were small and had weaknesses that could affect the reliability of their results. We judged the overall quality of the evidence available for each comparison as either low or very low."
}
] |
query-laysum | 1103 | [
{
"role": "user",
"content": "Abstract: Background\nSurgery is a common treatment modality for stress urinary incontinence (SUI), usually offered to women for whom conservative treatments have failed. Midurethral tapes have superseded colposuspension because cure rates are comparable and recovery time is reduced. However, some women will not be cured after midurethral tape surgery. Currently, there is no consensus on how to manage the condition in these women.\nThis is an update of a Cochrane Review first published in 2013.\nObjectives\nTo assess the effects of interventions for treating recurrent stress urinary incontinence after failed minimally invasive synthetic midurethral tape surgery in women; and to summarise the principal findings of economic evaluations of these interventions.\nSearch methods\nWe searched the Cochrane Incontinence Specialised Register, which contains trials identified from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE In-Process, MEDLINE Epub Ahead of Print, ClinicalTrials.gov, WHO ICTRP and handsearching of journals and conference proceedings (searched 9 November 2018). We also searched the reference lists of relevant articles.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials in women who had recurrent stress urinary incontinence after previous minimally invasive midurethral tape surgery. We included conservative, pharmacological and surgical treatments.\nData collection and analysis\nTwo review authors checked the abstracts of identified studies to confirm their eligibility. We obtained full-text reports of relevant studies and contacted study authors directly for additional information where necessary. We extracted outcome data onto a standard proforma and processed them according to the guidance in the Cochrane Handbook for Systematic Reviews of Interventions.\nMain results\nWe included one study in this review. This study was later reported in an originally unplanned secondary analysis of 46 women who underwent transobturator tape for recurrent SUI after one or more previous failed operations. We were unable to use the data, as they were not presented according to the nature of the first operation.\nWe excluded 12 studies, five because they were not randomised controlled trials (RCTs) and four because previous incontinence surgery was not performed using midurethral tape. We considered a further three to be ineligible because neither the trial report nor personal communication with the trialists could confirm whether any of the participants had previously undergone surgery with tape.\nWe had also planned to develop a brief economic commentary summarising the principal findings of relevant economic evaluations but supplementary systematic searches did not identify any such studies.\nAuthors' conclusions\nThere were insufficient data to assess the effects of any of the different management strategies for recurrent or persistent stress incontinence after failed midurethral tape surgery. No published papers have reported exclusively on women whose first operation was a midurethral tape. Evidence from further RCTs and economic evaluations is required to address uncertainties about the effects and costs of these treatments.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to assess the effects of conservative treatment (such as pelvic floor muscle training or bladder training), surgery and medication on the number of women who reported that their incontinence was improved or cured after treatment, along with other outcomes such as quality of life and adverse events. We were also interested in the effects on our outcomes of different types of midurethral tapes.\nOf the 46 eligible women in the included study, two-thirds were reported to have received a midurethral tape in their first surgery. However, the data in the report did not differentiate between women who had previously undergone surgery with a midurethral tape and those who had had colposuspension. This means that we cannot be certain that the results were due to the midurethral tape, so we could not use the data in this review.\nWe planned to summarise evidence about which treatments might be considered worthwhile uses of healthcare resources but we did not identify any studies that asked this question."
}
] |
query-laysum | 1104 | [
{
"role": "user",
"content": "Abstract: Background\nAntidepressants are a first-line treatment for adults with moderate to severe major depression. However, many people prescribed antidepressants for depression don't respond fully to such medication, and little evidence is available to inform the most appropriate 'next step' treatment for such patients, who may be referred to as having treatment-resistant depression (TRD). National Institute for Health and Care Excellence (NICE) guidance suggests that the 'next step' for those who do not respond to antidepressants may include a change in the dose or type of antidepressant medication, the addition of another medication, or the start of psychotherapy. Different types of psychotherapies may be used for TRD; evidence on these treatments is available but has not been collated to date.\nAlong with the sister review of pharmacological therapies for TRD, this review summarises available evidence for the effectiveness of psychotherapies for adults (18 to 74 years) with TRD with the goal of establishing the best 'next step' for this group.\nObjectives\nTo assess the effectiveness of psychotherapies for adults with TRD.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Controlled Trials Register (until May 2016), along with CENTRAL, MEDLINE, Embase, and PsycINFO via OVID (until 16 May 2017). We also searched the World Health Organization (WHO) trials portal (ICTRP) and ClinicalTrials.gov to identify unpublished and ongoing studies. There were no date or language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) with participants aged 18 to 74 years diagnosed with unipolar depression that had not responded to minimum four weeks of antidepressant treatment at a recommended dose. We excluded studies of drug intolerance. Acceptable diagnoses of unipolar depression were based onthe Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) or earlier versions, International Classification of Diseases (ICD)-10, Feighner criteria, or Research Diagnostic Criteria. We included the following comparisons.\n1. Any psychological therapy versus antidepressant treatment alone, or another psychological therapy.\n2. Any psychological therapy given in addition to antidepressant medication versus antidepressant treatment alone, or a psychological therapy alone.\nPrimary outcomes required were change in depressive symptoms and number of dropouts from study or treatment (as a measure of acceptability).\nData collection and analysis\nWe extracted data, assessed risk of bias in duplicate, and resolved disagreements through discussion or consultation with a third person. We conducted random-effects meta-analyses when appropriate. We summarised continuous outcomes using mean differences (MDs) or standardised mean differences (SMDs), and dichotomous outcomes using risk ratios (RRs).\nMain results\nWe included six trials (n = 698; most participants were women approximately 40 years of age). All studies evaluated psychotherapy plus usual care (with antidepressants) versus usual care (with antidepressants). Three studies addressed the addition of cognitive-behavioural therapy (CBT) to usual care (n = 522), and one each evaluated intensive short-term dynamic psychotherapy (ISTDP) (n = 60), interpersonal therapy (IPT) (n = 34), or group dialectical behavioural therapy (DBT) (n = 19) as the intervention. 
Most studies were small (except one trial of CBT, which was large), and all studies were at high risk of detection bias for the main outcome of self-reported depressive symptoms.\nA random-effects meta-analysis of five trials (n = 575) showed that psychotherapy given in addition to usual care (vs usual care alone) produced improvement in self-reported depressive symptoms (MD -4.07 points, 95% confidence interval (CI) -7.07 to -1.07 on the Beck Depression Inventory (BDI) scale) over the short term (up to six months). Effects were similar when data from all six studies were combined for self-reported depressive symptoms (SMD -0.40, 95% CI -0.65 to -0.14; n = 635). The quality of this evidence was moderate. Similar moderate-quality evidence of benefit was seen on the Patient Health Questionnaire-9 Scale (PHQ-9) from two studies (MD -4.66, 95% CI -8.72 to -0.59; n = 482) and on the Hamilton Depression Rating Scale (HAMD) from four studies (MD -3.28, 95% CI -5.71 to -0.85; n = 193).\nHigh-quality evidence shows no differential dropout (a measure of acceptability) between intervention and comparator groups over the short term (RR 0.85, 95% CI 0.58 to 1.24; six studies; n = 698).\nModerate-quality evidence for remission from six studies (RR 1.92, 95% CI 1.46 to 2.52; n = 635) and low-quality evidence for response from four studies (RR 1.80, 95% CI 1.2 to 2.7; n = 556) indicate that psychotherapy was beneficial as an adjunct to usual care over the short term.\nWith the addition of CBT, low-quality evidence suggests lower depression scores on the BDI scale over the medium term (12 months) (MD -3.40, 95% CI -7.21 to 0.40; two studies; n = 475) and over the long term (46 months) (MD -1.90, 95% CI -3.22 to -0.58; one study; n = 248). Moderate-quality evidence for adjunctive CBT suggests no difference in acceptability (dropout) over the medium term (RR 0.98, 95% CI 0.66 to 1.47; two studies; n = 549) and lower dropout over the long term (RR 0.80, 95% CI 0.66 to 0.97; one study; n = 248).\nTwo studies reported serious adverse events (one suicide, two hospitalisations, and two exacerbations of depression) in 4.2% of the total sample, which occurred only in the usual care group (no events in the intervention group).\nAn economic analysis (conducted as part of an included study) from the UK healthcare perspective (National Health Service (NHS)) revealed that adjunctive CBT was cost-effective over nearly four years.\nAuthors' conclusions\nModerate-quality evidence shows that psychotherapy added to usual care (with antidepressants) is beneficial for depressive symptoms and for response and remission rates over the short term for patients with TRD. Medium- and long-term effects seem similarly beneficial, although most evidence was derived from a single large trial. Psychotherapy added to usual care seems as acceptable as usual care alone.\nFurther evidence is needed on the effectiveness of different types of psychotherapies for patients with TRD. No evidence currently shows whether switching to a psychotherapy is more beneficial for this patient group than continuing an antidepressant medication regimen. Addressing this evidence gap is an important goal for researchers.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Searches are current up to May 2017."
}
] |
query-laysum | 1105 | [
{
"role": "user",
"content": "Abstract: Background\nExperiencing anxiety and depression is very common in people living with dementia and mild cognitive impairment (MCI). There is uncertainty about the best treatment approach. Drug treatments may be ineffective and associated with adverse effects. Guidelines recommend psychological treatments. In this updated systematic review, we investigated the effectiveness of different psychological treatment approaches.\nObjectives\nPrimary objective\nTo assess the clinical effectiveness of psychological interventions in reducing depression and anxiety in people with dementia or MCI.\nSecondary objectives\nTo determine whether psychological interventions improve individuals' quality of life, cognition, activities of daily living (ADL), and reduce behavioural and psychological symptoms of dementia, and whether they improve caregiver quality of life or reduce caregiver burden.\nSearch methods\nWe searched ALOIS, the Cochrane Dementia and Cognitive Improvement Group's register, MEDLINE, Embase, four other databases, and three trials registers on 18 February 2021.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared a psychological intervention for depression or anxiety with treatment as usual (TAU) or another control intervention in people with dementia or MCI.\nData collection and analysis\nA minimum of two authors worked independently to select trials, extract data, and assess studies for risk of bias. We classified the included psychological interventions as cognitive behavioural therapies (cognitive behavioural therapy (CBT), behavioural activation (BA), problem-solving therapy (PST)); 'third-wave' therapies (such as mindfulness-based cognitive therapy (MBCT)); supportive and counselling therapies; and interpersonal therapies. We compared each class of intervention with control. We expressed treatment effects as standardised mean differences or risk ratios. Where possible, we pooled data using a fixed-effects model. We used GRADE methods to assess the certainty of the evidence behind each result.\nMain results\nWe included 29 studies with 2599 participants. They were all published between 1997 and 2020. There were 15 trials of cognitive behavioural therapies (4 CBT, 8 BA, 3 PST), 11 trials of supportive and counselling therapies, three trials of MBCT, and one of interpersonal therapy. The comparison groups received either usual care, attention-control education, or enhanced usual care incorporating an active control condition that was not a specific psychological treatment. There were 24 trials of people with a diagnosis of dementia, and five trials of people with MCI. Most studies were conducted in community settings. We considered none of the studies to be at low risk of bias in all domains.\nCognitive behavioural therapies (CBT, BA, PST)\nCognitive behavioural therapies are probably slightly better than treatment as usual or active control conditions for reducing depressive symptoms (standardised mean difference (SMD) -0.23, 95% CI -0.37 to -0.10; 13 trials, 893 participants; moderate-certainty evidence). They may also increase rates of depression remission at the end of treatment (risk ratio (RR) 1.84, 95% CI 1.18 to 2.88; 2 studies, with one study contributing 2 independent comparisons, 146 participants; low-certainty evidence). We were very uncertain about the effect of cognitive behavioural therapies on anxiety at the end of treatment (SMD -0.03, 95% CI -0.36 to 0.30; 3 trials, 143 participants; very low-certainty evidence). 
Cognitive behavioural therapies probably improve patient quality of life (SMD 0.31, 95% CI 0.13 to 0.50; 7 trials, 459 participants; moderate-certainty evidence) and activities of daily living at end of treatment compared to treatment as usual or active control (SMD -0.25, 95% CI -0.40 to -0.09; 7 trials, 680 participants; moderate-certainty evidence).\nSupportive and counselling interventions\nMeta-analysis showed that supportive and counselling interventions may have little or no effect on depressive symptoms in people with dementia compared to usual care at end of treatment (SMD -0.05, 95% CI -0.18 to 0.07; 9 trials, 994 participants; low-certainty evidence). We were very uncertain about the effects of these treatments on anxiety, which was assessed only in one small pilot study.\nOther interventions\nThere were very few data and very low-certainty evidence on MBCT and interpersonal therapy, so we were unable to draw any conclusions about the effectiveness of these interventions.\nAuthors' conclusions\nCBT-based treatments added to usual care probably slightly reduce symptoms of depression for people with dementia and MCI and may increase rates of remission of depression. There may be important effect modifiers (degree of baseline depression, cognitive diagnosis, or content of the intervention). CBT-based treatments probably also have a small positive effect on quality of life and activities of daily living. Supportive and counselling interventions may not improve symptoms of depression in people with dementia. Effects of both types of treatment on anxiety symptoms are very uncertain. We are also uncertain about the effects of other types of psychological treatments, and about persistence of effects over time. To inform clinical guidelines, future studies should assess detailed components of these interventions and their implementation in different patient populations and in different settings.\n\nGiven the provided abstract, please respond to the following query: \"What do we mean by psychological treatments?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Psychological treatments, sometimes known as ‘talking therapies’, are based on psychological theories. They involve a therapist working together with an individual or a small group of people to develop skills and strategies to improve well-being. These treatments can be adapted for people with cognitive impairment."
}
] |
query-laysum | 1106 | [
{
"role": "user",
"content": "Abstract: Background\nParenteral opioids (intramuscular and intravenous drugs including patient-controlled analgesia) are used for pain relief in labour in many countries throughout the world. This review is an update of a review first published in 2010.\nObjectives\nTo assess the effectiveness, safety and acceptability to women of different types, doses and modes of administration of parenteral opioid analgesia in labour. A second objective is to assess the effects of opioids in labour on the baby in terms of safety, condition at birth and early feeding.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (11 May 2017) and reference lists of retrieved studies.\nSelection criteria\nWe included randomised controlled trials examining the use of intramuscular or intravenous opioids (including patient-controlled analgesia) for women in labour. Cluster-randomised trials were also eligible for inclusion, although none were identified. We did not include quasi-randomised trials. We looked at studies comparing an opioid with another opioid, placebo, no treatment, other non-pharmacological interventions (transcutaneous electrical nerve stimulation (TENS)) or inhaled analgesia.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We assessed the quality of each evidence synthesis using the GRADE approach.\nMain results\nWe included 70 studies that compared an opioid with placebo or no treatment, another opioid administered intramuscularly or intravenously or compared with TENS applied to the back. Sixty-one studies involving more than 8000 women contributed data to the review and these studies reported on 34 different comparisons; for many comparisons and outcomes only one study contributed data. All of the studies were conducted in hospital settings, on healthy women with uncomplicated pregnancies at 37 to 42 weeks' gestation. We excluded studies focusing on women with pre-eclampsia or pre-existing conditions or with a compromised fetus. Overall, the evidence was graded as low- or very low-quality regarding the analgesic effect of opioids and satisfaction with analgesia; evidence was downgraded because of study design limitations, and many of the studies were underpowered to detect differences between groups and so effect estimates were imprecise. Due to the large number of different comparisons, it was not possible to present GRADE findings for every comparison.\nFor the comparison of intramuscular pethidine (50 mg/100 mg) versus placebo, no clear differences were found in maternal satisfaction with analgesia measured during labour (number of women satisfied or very satisfied after 30 minutes: 50 women; 1 trial; risk ratio (RR) 7.00, 95% confidence interval (CI) 0.38 to 128.87, very low-quality evidence), or number of women requesting an epidural (50 women; 1 trial; RR 0.50, 95% CI 0.14 to 1.78; very low-quality evidence). 
Pain scores (reduction in visual analogue scale (VAS) score of at least 40 mm: 50 women; 1 trial; RR 25, 95% CI 1.56 to 400, low-quality evidence) and pain measured in labour (women reporting pain relief to be \"good\" or \"fair\" within one hour of administration: 116 women; 1 trial; RR 1.75, 95% CI 1.24 to 2.47, low-quality evidence) were both reduced in the pethidine group, and fewer women requested any additional analgesia (50 women; 1 trial; RR 0.71, 95% CI 0.54 to 0.94, low-quality evidence).\nThere was limited information on adverse effects and harm to women and babies. There were few results that clearly showed that one opioid was more effective than another. Overall, findings indicated that parenteral opioids provided some pain relief and moderate satisfaction with analgesia in labour. Opioid drugs were associated with maternal nausea, vomiting and drowsiness, although different opioid drugs were associated with different adverse effects. There was no clear evidence of adverse effects of opioids on the newborn. We did not have sufficient evidence to assess which opioid drug provided the best pain relief with the least adverse effects.\nAuthors' conclusions\nThough most evidence is of low- or very-low quality, for healthy women with an uncomplicated pregnancy who are giving birth at 37 to 42 weeks, parenteral opioids appear to provide some relief from pain in labour but are associated with drowsiness, nausea, and vomiting in the woman. Effects on the newborn are unclear. Maternal satisfaction with opioid analgesia was largely unreported. The review needs to be examined alongside related Cochrane reviews. More research is needed to determine which analgesic intervention is most effective, and provides greatest satisfaction to women with acceptable adverse effects for mothers and their newborn.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for trials on 11 May 2017. We included 70 studies though only 61 studies involving more than 8000 women contributed data to the review. All of the trials were conducted in hospital settings, on healthy women with uncomplicated pregnancies at 37 to 42 weeks' gestation. The trials compared an opioid (intramuscular or intravenous) with placebo (dummy treatment), no treatment, another opioid (or in three trials another medication or inhaled nitrous oxide) or transcutaneous electrical nerve stimulation (TENS) in 34 different comparisons. There were few opportunities to pool the findings, and for many outcomes only one trial contributed findings. The quality of the evidence was mainly assessed as low or very low for the outcomes of pain in labour and satisfaction with analgesia. Many of the studies included insufficient numbers of women to detect differences between groups."
}
] |
query-laysum | 1107 | [
{
"role": "user",
"content": "Abstract: Background\nHepatic encephalopathy is a common complication of cirrhosis which results in poor brain functioning. The spectrum of changes associated with hepatic encephalopathy ranges from the clinically 'indiscernible' or minimal hepatic encephalopathy to the clinically 'obvious' or overt hepatic encephalopathy. Flumazenil is a synthetic benzodiazepine antagonist with high affinity for the central benzodiazepine recognition site. Flumazenil may benefit people with hepatic encephalopathy through an indirect negative allosteric modulatory effect on gamma-aminobutyric acid receptor function. The previous version of this review, which included 13 randomised clinical trials, found no effect of flumazenil on all-cause mortality, based on an analysis of 10 randomised clinical trials, but found a beneficial effect on hepatic encephalopathy, based on an analysis of eight randomised clinical trials.\nObjectives\nTo evaluate the beneficial and harmful effects of flumazenil versus placebo or no intervention for people with cirrhosis and hepatic encephalopathy.\nSearch methods\nWe searched The Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, MEDLINE, Embase, Science Citation Index Expanded, and LILACS; meeting and conference proceedings; and bibliographies in May 2017.\nSelection criteria\nWe included randomised clinical trials regardless of publication status, blinding, or language in the analyses of benefits and harms, and observational studies in the assessment of harms.\nData collection and analysis\nTwo review authors extracted data independently. We undertook meta-analyses and presented results using risk ratios (RR) with 95% confidence intervals (CI) and I2 values as a marker of heterogeneity. We assessed bias control using the Cochrane Hepato-Biliary Group domains; determined the quality of the evidence using GRADE; evaluated the risk of small-study effects in regression analyses; and conducted trial sequential, subgroup, and sensitivity analyses.\nMain results\nWe identified 14 eligible randomised clinical trials with 867 participants, the majority of whom had an acute episode of overt hepatic encephalopathy. In addition, we identified one ongoing randomised clinical trial. We were unable to gather outcome data from two randomised clinical trials with 25 participants. Thus, our analyses include 842 participants from 12 randomised clinical trials comparing flumazenil versus placebo. We classified one randomised clinical trial at low risk of bias in the overall assessment and the remaining randomised clinical trials at high risk of bias. The duration of follow-up ranged from a few minutes to two weeks, but it was less than one day in the majority of the trials.\nIn total, 32/433 (7.4%) participants allocated to flumazenil versus 38/409 (9.3%) participants allocated to placebo died (RR 0.75, 95% CI 0.48 to 1.16; 11 randomised clinical trials; low quality evidence). The Trial Sequential Analysis and the one randomised clinical trial assessed as low risk of bias (RR 0.76, 95% CI 0.37 to 1.53) found no beneficial or harmful effects of flumazenil on all-cause mortality. The methods used to evaluate hepatic encephalopathy included several different clinical scales, electrophysiological variables, and psychometric tests. 
Flumazenil was associated with a beneficial effect on hepatic encephalopathy when including all randomised clinical trials (RR 0.75, 95% CI 0.71 to 0.80; 824 participants; nine randomised clinical trials; low quality evidence), or just the trial at low risk of bias (RR 0.78, 95% CI 0.72 to 0.84; 527 participants). The Trial Sequential Analysis supported a beneficial effect of flumazenil on hepatic encephalopathy. The randomised clinical trials included little information about causes of death and little information on non-fatal serious adverse events.\nAuthors' conclusions\nWe found low quality evidence suggesting a short-term beneficial effect of flumazenil on hepatic encephalopathy in people with cirrhosis, but no evidence of an effect on all-cause mortality. Additional evidence from large, high quality randomised clinical trials is needed to evaluate the potential benefits and harms of flumazenil in people with cirrhosis and hepatic encephalopathy.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "What is hepatic encephalopathy?\nCirrhosis is a chronic disorder of the liver. People with cirrhosis may develop hepatic encephalopathy, a condition which results in poor brain functioning. In some people, there are obvious clinical features of disturbed brain functioning (overt hepatic encephalopathy); these changes may be short-lived or persist for long periods of time. In other people, there are no obvious clinical changes but some aspects of brain function, such as attention and the ability to perform complex tasks are impaired when tested (minimal hepatic encephalopathy). The reason people develop hepatic encephalopathy is complex but changes in brain neurotransmitters, which are the chemical messengers which allow nerve cells to communicate with one another, may play a role. The neurotransmitter gamma aminobutyric acid (GABA) is responsible for slowing or inhibiting brain activity and is thought to play a particularly important role.\nWhat is flumazenil?\nFlumazenil is a medicine that acts on one of the GABA receptors in the brain to modify its effects on these specialised cells and so may benefit people with hepatic encephalopathy. It has to be given into a vein (intravenous) and its effects do not last for more than a few hours."
}
] |
query-laysum | 1108 | [
{
"role": "user",
"content": "Abstract: Background\nSupplementary feeding may help food insecure and vulnerable people by optimising the nutritional value and adequacy of the diet, improving quality of life and improving various health parameters of disadvantaged families. In low- and middle-income countries (LMIC), the problems supplementary feeding aims to address are entangled with poverty and deprivation, the programmes are expensive and delivery is complicated.\nObjectives\n1. To summarise the evidence from systematic reviews of supplementary feeding for food insecure, vulnerable and malnourished populations, including children under five years of age, school-aged children, pregnant and lactating women, people with HIV or tuberculosis (or both), and older populations.\n2. To describe and explore the effects of supplementary feeding given to people in these groups, and to describe the range of outcomes between reviews and range of effects in the different groups.\nMethods\nIn January 2017, we searched the Cochrane Database of Systematic Reviews, MEDLINE, Embase and nine other databases. We included systematic reviews evaluating community-based supplementary feeding, and concerning food insecure, vulnerable and malnourished populations. Two review authors independently undertook selection of systematic reviews, data extraction and 'Risk of bias' assessment. We assessed review quality using the AMSTAR tool, and used GRADEpro 'Summary of findings' tables from each review to indicate the certainty of the evidence for the main comparisons. We summarised review findings in the text and reported the data for each outcome in additional tables. We also used forest plots to display results graphically.\nMain results\nThis overview included eight systematic reviews (with last search dates between May 2006 and February 2016). Seven were Cochrane Reviews evaluating interventions in pregnant women; children (aged from birth to five years) from LMIC; disadvantaged infants and young children (aged three months to five years); children with moderate acute malnutrition (MAM); disadvantaged school children; adults and children who were HIV positive or with active tuberculosis (with or without HIV). One was a non-Cochrane systematic review in older people with Alzheimer's disease. These reviews included 95 trials relevant to this overview, with the majority (74%) of participants from LMIC.\nThe number of included participants varied between 91 and 7940 adults, and 271 and more than 12,595 children. Trials included a wide array of nutritional interventions that varied in duration, frequency and format, with micronutrients often reported as cointerventions. Follow-up ranged from six weeks to two years; three trials investigated outcomes at four to 17 years of age. All reviews were rated as high quality (AMSTAR score between eight and 11). The GRADE certainty ratings ranged from very low to moderate for individual comparisons, with the evidence often comprising only one or two small trials, thereby resulting in many underpowered analyses (too small to detect small but important differences). 
The main outcome categories reported across reviews were death, anthropometry (adults and children) and other markers of nutritional status, disease-related outcomes, neurocognitive development and psychosocial outcomes, and adverse events.\nMortality data were limited and underpowered in meta-analysis in all populations (children with MAM, children with HIV, and adults with tuberculosis), with the exception of balanced energy and protein supplementation in pregnancy, which may have reduced the risk of stillbirth (risk ratio (RR) 0.60, 95% confidence interval (CI) 0.39 to 0.94; 5 trials, 3408 women). Supplementation in pregnancy also improved infant birth weight (mean difference (MD) 40.96 g, 95% CI 4.66 to 77.26; 11 trials, 5385 participants) and reduced the risk of infants born small-for-gestational age (RR 0.79, 95% CI 0.69 to 0.90; 7 trials, 4408 participants). These effects did not translate into demonstrable long-term benefits for children in terms of growth and neurocognitive development in the one to two trials reporting on longer-term outcomes. In one study (505 participants), high-protein supplementation was associated with an increased risk of small-for-gestational age babies.\nEffects on growth in children were mixed. In children under five years of age from LMIC, one review found that supplementary feeding had little or no effect on child growth; however, a more recent review in a similar population found that those who received food supplementation gained an average of 0.12 kg more in weight (MD 0.12 kg, 95% CI 0.05 to 0.18; 9 trials, 1057 participants) and 0.27 cm more in height (MD 0.27 cm, 95% CI 0.07 to 0.48; 9 trials, 1463 participants) than those who were not supplemented. Supplementary food was generally more effective for younger children (younger than two years of age) and for those who were poorer or less well-nourished. In children with MAM, the provision of specially formulated food improved their weight, weight-for-height z scores and other key outcomes such as recovery rate (by 29%), as well as reducing the number of participants dropping out (by 70%). In LMIC, school meals seemed to lead to small benefits for children, including improvements in weight z scores (especially in children from lower-income countries), height z scores, cognition or intelligence quotient tests, and maths and spelling performance.\nSupplementary feeding in adults who were HIV positive increased the daily energy and protein intake compared to nutritional counselling alone. Supplementation led to an initial improvement in weight gain or body mass index but did not seem to confer long-term benefit.\nIn adults with tuberculosis, one small trial found a significant benefit on treatment completion and sputum conversion rate. There were also significant but modest benefits in terms of weight gain (up to 2.60 kg) during active tuberculosis.\nThe one study included in the Alzheimer's disease review found that three months of daily oral nutritional supplements improved nutritional outcomes in the intervention group.\nThere was little or no evidence regarding people's quality of life, adherence to treatment, attendance at clinic or the costs of supplementary feeding programmes.\nAuthors' conclusions\nConsidering the current evidence base, the effects of supplementary food are modest at best, with inconsistent and limited evidence on mortality. 
The trials reflected in the reviews mostly reported on short-term outcomes, and across the supplementation trial literature as a whole it appears that important outcomes, such as quality of life and the cost of programmes, are not systematically reported or summarised.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Across a range of vulnerable populations, supplementary feeding programmes sometimes show modest benefit in nutritional outcomes. In a few studies examining mortality (death), effects were either small or absent, and research mostly looked at short-term effects."
}
] |
query-laysum | 1109 | [
{
"role": "user",
"content": "Abstract: Background\nAntiviral therapy for recurrent hepatitis C infection after liver transplantation is controversial due to unresolved balance between benefits and harms.\nObjectives\nTo compare the therapeutic benefits and harms of different antiviral regimens in patients with hepatitis C re-infected grafts after liver transplantation.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 1, 2013), MEDLINE, EMBASE, and Science Citation Index Expanded to February 2013.\nSelection criteria\nWe considered only randomised clinical trials (irrespective of language, blinding, or publication status) comparing various antiviral therapies (alone or in combination) in the treatment of hepatitis C virus recurrence in liver transplantation for the review.\nData collection and analysis\nTwo authors collected the data independently. We calculated the risk ratio (RR) or mean difference (MD) with 95% confidence intervals (CI) using the fixed-effect and the random-effects models based on available case-analysis. In the presence of only trials for a dichotomous outcome, we performed the Fisher's exact test.\nMain results\nOverall, 17 trials with 736 patients met the inclusion criteria for this review. All trials had high risk of bias. Five hundred and one patients randomised in 11 trials provided information for various comparisons in this systematic review after excluding post-randomisation drop-outs and patients from trials that did not report any of the outcomes of interest for this review. The comparisons for which outcomes were available included pegylated (peg) interferon versus control; peg interferon plus ribavirin versus control; ribavirin plus peg interferon versus peg interferon; peg interferon (1.5 μg/kg/week) plus ribavirin versus peg interferon (0.5 μg/kg/week) plus ribavirin; amantadine plus peg interferon plus ribavirin versus peg interferon plus ribavirin; interferon versus control; interferon plus ribavirin versus control; ribavirin versus interferon; and ribavirin versus placebo. Long-term follow-up was not available in these trials. There were no significant differences in mortality, retransplantation, graft rejections requiring retransplantation or medical treatment, or fibrosis worsening between the groups in any of the comparisons in which these outcomes were reported. Quality of life and liver decompensation were not reported in any of the trials. There was a significantly higher proportion of participants who developed serious adverse events in the ribavirin plus peg interferon combination therapy group than in the peg interferon monotherapy group (1 trial; 56 participants; 17/28 (60.7%) in the intervention group versus 5/28 (17.9%) in the control group; RR 3.40; 95% CI 1.46 to 7.94). There was no significant difference in proportion of participants who developed serious adverse events or in the number of serious adverse events between the intervention and control groups in the other comparisons that reported serious adverse events.\nAuthors' conclusions\nConsidering the lack of clinical benefit, there is currently no evidence to recommend or refute antiviral treatment for recurrent liver graft infection with hepatitis C virus. Further randomised clinical trials with low risk of bias and low risk of random errors with adequate duration of follow-up are necessary.\n\nGiven the provided abstract, please respond to the following query: \"Background\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The liver is an important organ of the body and has various functions including generation of energy from food; production of material necessary for congealing, processing, and excretion of drugs and waste products in blood; and filtering out the harmful bacteria that enter the body through the gut. Hepatitis C virus can cause damage to the liver usually in an insidious manner (chronic hepatitis C infection). Sometimes, the liver damage can be so severe that the liver is not able to carry out the normal functions, resulting in liver failure. Liver transplantation is an effective treatment for the treatment of liver failure due to chronic hepatitis C infection. However, liver transplantation does not eradicate the virus and the virus can affect the donor liver graft. One of the proposed strategies to treat the recurrence of chronic hepatitis C virus infection in these patients is using antiviral treatments. The effectiveness of these treatments is not known. We performed a detailed review of the medical literature (to February 2013) to determine the benefits and harms of different antiviral treatments for patients with recurrent hepatitis C infection after undergoing liver transplantation for chronic hepatitis C virus infection. We sought evidence from randomised clinical trials only. When conducted properly, such trials provide the best evidence. Two authors independently identified the trials and obtained the information from the trials to minimise error."
}
] |
query-laysum | 1110 | [
{
"role": "user",
"content": "Abstract: Background\nVascular endothelial growth factor (VEGF) plays a key role in angiogenesis in foetal life. Researchers have recently attempted to use anti-VEGF agents for the treatment of retinopathy of prematurity (ROP), a vasoproliferative disorder. The safety and efficacy of these agents in preterm infants with ROP is currently uncertain.\nObjectives\nTo evaluate the efficacy and safety of anti-VEGF drugs when used either as monotherapy, that is without concomitant cryotherapy or laser therapy, or in combination with planned cryo/laser therapy in preterm infants with type 1 ROP (defined as zone I any stage with plus disease, zone I stage 3 with or without plus disease, or zone II stage 2 or 3 with plus disease).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL 2016, Issue 11), MEDLINE (1966 to 11 December 2016), Embase (1980 to 11 December 2016), CINAHL (1982 to 11 December 2016), and conference proceedings.\nSelection criteria\nRandomised or quasi-randomised controlled trials that evaluated the efficacy or safety of administration, or both, of anti-VEGF agents compared with conventional therapy in preterm infants with ROP.\nData collection and analysis\nWe used standard Cochrane and Cochrane Neonatal methods for data collection and analysis. We used the GRADE approach to assess the quality of the evidence.\nMain results\nSix trials involving a total of 383 infants fulfilled the inclusion criteria. Five trials compared intravitreal bevacizumab (n = 4) or ranibizumab (n = 1) with conventional laser therapy (monotherapy), while the sixth study compared intravitreal pegaptanib plus conventional laser therapy with laser/cryotherapy (combination therapy).\nWhen used as monotherapy, bevacizumab/ranibizumab did not reduce the risk of complete or partial retinal detachment (3 studies; 272 infants; risk ratio (RR) 1.04, 95% confidence interval (CI) 0.21 to 5.13; risk difference (RD) 0.00, 95% CI -0.04 to 0.04; very low-quality evidence), mortality before discharge (2 studies; 229 infants; RR 1.50, 95% CI 0.26 to 8.75), corneal opacity requiring corneal transplant (1 study; 286 eyes; RR 0.34, 95% CI 0.01 to 8.26), or lens opacity requiring cataract removal (3 studies; 544 eyes; RR 0.15, 95% CI 0.01 to 2.79). The risk of recurrence of ROP requiring retreatment also did not differ between groups (2 studies; 193 infants; RR 0.88, 95% CI 0.47 to 1.63; RD -0.02, 95% CI -0.12 to 0.07; very low-quality evidence). Subgroup analysis showed a significant reduction in the risk of recurrence in infants with zone I ROP (RR 0.15, 95% CI 0.04 to 0.62), but an increased risk of recurrence in infants with zone II ROP (RR 2.53, 95% CI 1.01 to 6.32). Pooled analysis of studies that reported eye-level outcomes also revealed significant increase in the risk of recurrence of ROP in the eyes that received bevacizumab (RR 5.36, 95% CI 1.22 to 23.50; RD 0.10, 95% CI 0.03 to 0.17). Infants who received intravitreal bevacizumab had a significantly lower risk of refractive errors (very high myopia) at 30 months of age (1 study; 211 eyes; RR 0.06, 95% CI 0.02 to 0.20; RD -0.40, 95% CI -0.50 to -0.30; low-quality evidence).\nWhen used in combination with laser therapy, intravitreal pegaptanib was found to reduce the risk of retinal detachment when compared to laser/cryotherapy alone (152 eyes; RR 0.26, 95% CI 0.12 to 0.55; RD -0.29, 95% CI -0.42 to -0.16; low-quality evidence). 
The incidence of recurrence of ROP by 55 weeks' postmenstrual age was also lower in the pegaptanib + laser therapy group (76 infants; RR 0.29, 95% CI 0.12 to 0.7; RD -0.35, 95% CI -0.55 to -0.16; low-quality evidence). There was no difference in the risk of perioperative retinal haemorrhages between the two groups (152 eyes; RR 0.62, 95% CI 0.24 to 1.56; RD -0.05, 95% CI -0.16 to 0.05; very low-quality evidence). However, the risk of delayed systemic adverse effects with any of the three anti-VEGF drugs is not known.\nAuthors' conclusions\nImplications for practice: Intravitreal bevacizumab/ranibizumab, when used as monotherapy, reduces the risk of refractive errors during childhood but does not reduce the risk of retinal detachment or recurrence of ROP in infants with type 1 ROP. While the intervention might reduce the risk of recurrence of ROP in infants with zone I ROP, it can potentially result in higher risk of recurrence requiring retreatment in those with zone II ROP. Intravitreal pegaptanib, when used in conjunction with laser therapy, reduces the risk of retinal detachment as well as the recurrence of ROP in infants with type 1 ROP. However, the quality of the evidence was very low to low for most outcomes due to risk of detection bias and other biases. The effects on other critical outcomes and, more importantly, the long-term systemic adverse effects of the drugs are not known. Insufficient data precludes strong conclusions favouring routine use of intravitreal anti-VEGF agents - either as monotherapy or in conjunction with laser therapy - in preterm infants with type 1 ROP.\nImplications for research: Further studies are needed to evaluate the effect of anti-VEGF agents on structural and functional outcomes in childhood and delayed systemic effects including adverse neurodevelopmental outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We graded the quality of the evidence as very low or low for most of the key outcomes."
}
] |